Search results for: data personalization
21844 Empirical and Indian Automotive Equity Portfolio Decision Support
Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu
Abstract:
A brief review of the empirical studies on stock market decision-support methodology indicates that the field is at the threshold of validating the accuracy of traditional statistical models against fuzzy, artificial neural network, and decision tree models. Many researchers have attempted to compare these models using various data sets worldwide; however, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive-sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support of stock market decisions. The study identifies the significant variables, and their lags, that affect stock prices using OLS analysis and decision tree classifiers.
Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis
Procedia PDF Downloads 485
21843 Case-Based Reasoning: A Hybrid Classification Model Improved with an Expert's Knowledge for High-Dimensional Problems
Authors: Bruno Trstenjak, Dzenana Donko
Abstract:
Data mining and classification of objects is a process of data analysis, using various machine learning techniques, that is applied today in many fields of research. This paper presents a hybrid classification model improved with expert knowledge. The model's algorithm integrates several machine learning techniques (Information Gain, K-means, and Case-Based Reasoning) with experts' knowledge, which is used to determine the importance of features. The paper presents the model's algorithm and the results of a case study in which the emphasis was put on achieving maximum classification accuracy without reducing the number of features.
Keywords: case-based reasoning, classification, expert's knowledge, hybrid model
Procedia PDF Downloads 367
21842 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction
Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter
Abstract:
Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (i.e., jaw clenching, teeth squeezing, and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG, and most have focused on offline processing or on handling a single type of EEG artifact. A software-only real-time method for correcting multiple types of artifact in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB via the Lab Streaming Layer interface, which allows buffering of EEG data. EMG artifacts were detected by channel variance with adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied to correct EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing, and forehead movements, and EOG artifacts (horizontal and vertical eye movements) in high-density EEG while preserving neuronal activity information. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) of 64-channel data is 300-700 ms, depending on the convergence of ICA and the type and intensity of the artifact.
Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real time.
Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA
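The channel-variance detection step described in the Methods can be sketched as follows. This is an illustrative Python sketch, not the authors' MATLAB implementation; the median/MAD form of the adaptive threshold and the multiplier `k` are assumed choices.

```python
import numpy as np

def detect_artifact_channels(eeg, k=3.0):
    """Flag channels whose variance exceeds an adaptive threshold.

    eeg : array of shape (n_channels, n_samples)
    k   : multiplier on the robust spread (an assumed tuning constant)

    Returns a boolean mask of channels flagged as EMG-contaminated,
    which would then be repaired by channel interpolation.
    """
    variances = eeg.var(axis=1)
    median = np.median(variances)
    mad = np.median(np.abs(variances - median))  # robust spread estimate
    threshold = median + k * mad
    return variances > threshold

# Toy example: 4 channels of noise, one with injected high-amplitude EMG
rng = np.random.default_rng(0)
data = rng.standard_normal((4, 1000))
data[2] *= 20.0  # simulate a muscle artifact on channel 2
mask = detect_artifact_channels(data)
```

In a real-time pipeline, this check would run on each buffered window of samples delivered by the streaming interface.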
Procedia PDF Downloads 180
21841 Scaling up Potato Economic Opportunities: Evaluation of Youth Participation in the Potato Value Chain in Nigeria
Authors: Chigozirim N. Onwusiribe, Jude A. Mbanasor
Abstract:
The potato value chain, when harnessed, can engage numerous youths and aid the fight against poverty, malnutrition, and unemployment. This study seeks to evaluate the level of youth participation in the potato value chain in Nigeria. Specifically, it will examine the extent of youth participation in the value chain; analyze the cost, benefits, and sustainability of that participation; identify the factors that can propel or hinder it; and make recommendations aimed at increasing youth employment in the potato value chain. The study was conducted in the North Central and South East geopolitical zones of Nigeria. A multi-stage sampling procedure was used to select 540 youths from the study areas. Focus group discussions and a survey approach were used to elicit the required data, which were analyzed using statistical and econometric tools. The study revealed that the potato value chain is very profitable.
Keywords: value, chain, potato, youth, enterprise
Procedia PDF Downloads 156
21840 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance
Authors: Sokkhey Phauk, Takeo Okazaki
Abstract:
The challenging task in educational institutions is to maximize the number of high-performing students and minimize the failure rate of poor-performing students. An effective way to approach this task is to understand student learning patterns and their highly influential factors, and to obtain an early prediction of student learning outcomes so that improvement policies can be set up in time. Educational data mining (EDM) is an emerging discipline at the intersection of data mining, statistics, and machine learning, concerned with extracting useful knowledge and information for improvement and development in the educational environment. The aim of this work is to propose techniques in EDM and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and the best-performing models are then refined for higher performance; the hybrid random forest (Hybrid RF) produces the most successful classification. In the context of intervention and improving learning outcomes, a feature selection method named MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance and serves as information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is subsequently developed to give educational stakeholders an early prediction of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system, which is used to help educational stakeholders and related individuals intervene and improve student performance.
Keywords: academic performance prediction system, educational data mining, dominant factors, feature selection method, prediction model, student performance
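The idea of combining MI and chi-square ranked feature scores can be sketched as below. This is a hypothetical re-creation on the iris data using scikit-learn: the paper's exact MICHI score combination and cut-off are not specified here, so a simple rank sum is used as the combined criterion.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif, chi2

# Stand-in dataset; the study uses student-performance features instead.
X, y = load_iris(return_X_y=True)

mi = mutual_info_classif(X, y, random_state=0)
chi_scores, _ = chi2(X, y)  # chi2 needs non-negative features; iris satisfies this

# Rank features under each criterion (0 = best), then combine the ranks.
mi_rank = np.argsort(np.argsort(-mi))
chi_rank = np.argsort(np.argsort(-chi_scores))
combined = mi_rank + chi_rank

# Keep the top-ranked features as the "dominant" set.
dominant = np.argsort(combined)[:2]
```

The dominant set would then feed both the prediction model and the intervention report.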
Procedia PDF Downloads 106
21839 Saltwater Intrusion Studies in the Cai River in the Khanh Hoa Province, Vietnam
Authors: B. Van Kessel, P. T. Kockelkorn, T. R. Speelman, T. C. Wierikx, C. Mai Van, T. A. Bogaard
Abstract:
Saltwater intrusion is a common problem in estuaries around the world, as it can hinder the freshwater supply of coastal zones, and it is likely to grow due to climate change and sea-level rise. The influence of these factors on saltwater intrusion was investigated for the Cai River in the Khanh Hoa province in Vietnam. The Cai River has high seasonal fluctuations in discharge, leading to increased saltwater intrusion during the dry season. Sea-level rise, river discharge changes, river mouth widening, and a proposed saltwater intrusion prevention dam can all influence the saltwater intrusion, but their effects have not been quantified for the Cai River estuary. This research used both an analytical and a numerical model to investigate the effect of the aforementioned factors. The analytical model was based on a model proposed by Savenije and was calibrated using limited in situ data; the numerical model was a 3D hydrodynamic model built with the Delft3D4 software. Both models agreed with in situ data, mostly for tidally averaged values, and indicated a roughly similar dependence on discharge, agreeing that this parameter has the most severe influence on the modeled saltwater intrusion. Especially for discharges below 10 m³/s, the saltwater was predicted to reach further than 10 km upstream. In both models, sea-level rise and river widening mainly resulted in salinity increments of up to 3 kg/m³ in the middle part of the river. The sea-level rise predicted for 2070 was simulated to increase the saltwater intrusion length by 0.5 km. Furthermore, the effect of the saltwater intrusion dam appeared significant in the model used, but only for the highest position of the gate.
Keywords: Cai River, hydraulic models, river discharge, saltwater intrusion, tidal barriers
Procedia PDF Downloads 112
21838 Content Analysis and Attitude of Thai Students towards Thai Series “Hormones: Season 2”
Authors: Siriporn Meenanan
Abstract:
The objective of this study is to investigate the attitude of Thai students towards the Thai series "Hormones the Series Season 2". The study was conducted as quantitative research, and questionnaires were used to collect data from a sample group of 400 people. Descriptive statistics were used in the data analysis. The findings reveal that most participants have positive comments regarding the series. They strongly agreed that the series reflects the way of life and problems of teenagers in Thailand, and they believe that if adults have a chance to watch the series, they will understand teenagers better. In addition, the participants agreed that the contents of the series are appropriate and satisfactory, as "Hormones the Series Season 2" raises awareness among teens and can serve as a guide to prevent problems that might arise during their teenage life.
Keywords: content analysis, attitude, Thai series, Hormones the Series
Procedia PDF Downloads 229
21837 Using Collaborative Pictures to Understand Student Experience
Authors: Tessa Berg, Emma Guion Akdag
Abstract:
Summative feedback forms are used in academia to gather data on course quality and student understanding. In these forms, students answer a series of questions about the course they are soon to finish. Feedback forms are notorious for being homogenised and limiting, and thus the data captured are often neutral and lacking in tacit emotional responses. This paper contrasts student feedback forms with collaborative drawing. We analyse 19 pictures drawn by international students on a pre-sessional course. Through visuals, we present an approach that enables a holistic level of student understanding, since visuals communicate irrespective of possible language, cultural, and educational barriers. This paper sought to discover whether the pictures mirrored the feedback given on a typical feedback form. Findings indicate a considerable difference between the two approaches, and we therefore highlight the value of collaborative drawing as a complementary resource to aid the understanding of student experience.
Keywords: feedback forms, visualisation, student experience, collaborative drawing
Procedia PDF Downloads 345
21836 Health Trajectory Clustering Using Deep Belief Networks
Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour
Abstract:
We present a Deep Belief Network (DBN) method for clustering health trajectories. A DBN is a deep architecture consisting of a stack of Restricted Boltzmann Machines (RBMs), in which each layer learns more complex features than the previous layers. The proposed method uses the DBN for clustering without the backpropagation learning algorithm, and it performs better than a deep neural network thanks to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method to train the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and data are collected on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, an easy-to-use and cleaned-up version of the data. The sample size is 268, and the length of each trajectory is 10; the trajectories do not stop when a patient dies and represent 10 different interviews of living patients. Compared to state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
Keywords: health trajectory, clustering, deep learning, DBN
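A single RBM layer trained with one-step Contrastive Divergence, the building block of the DBN described above, can be sketched in NumPy. This is a toy illustration with assumed hyperparameters (learning rate, hidden size), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM trained with one-step Contrastive
    Divergence (CD-1). A DBN would stack several of these layers."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def cd1_step(self, v0):
        # Positive phase: sample hidden units given the data
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step back to visible, then hidden
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # Update from the difference of data and model correlations
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
        return np.mean((v0 - p_v1) ** 2)  # reconstruction error

# Toy binary "trajectories" of length 10: two repeating patterns
data = np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]] * 20, dtype=float)
rbm = RBM(n_visible=10, n_hidden=4)
errors = [rbm.cd1_step(data) for _ in range(200)]
```

After training, the hidden-unit activations would serve as the compact representation on which clustering is performed.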
Procedia PDF Downloads 369
21835 Harnessing Emerging Creative Technology for Knowledge Discovery of Multiwavelength Datasets
Authors: Basiru Amuneni
Abstract:
Astronomy is one domain experiencing a rise in data volume. Traditional tools for data management have been employed in the quest for knowledge discovery; however, these tools become limited in the face of big data. One means of maximizing knowledge discovery for big data is scientific visualisation. The aim of this work is to explore the possibilities offered by the emerging creative technologies of Virtual Reality (VR) systems and game engines to visualise multiwavelength datasets. Game engines are primarily used for developing video games, but their advanced graphics can be exploited for scientific visualisation, which provides a means to graphically illustrate scientific data to ease human comprehension. Modern astronomy is now in the era of multiwavelength data, where a single galaxy, for example, is captured by telescopes several times at different electromagnetic wavelengths to build a more comprehensive picture of its physical characteristics. Visualising this in an immersive environment is more intuitive and natural for an observer. This work presents a standalone VR application that accesses galaxy FITS files. The application was built using the Unity Game Engine for the graphics underpinning and the OpenXR API for the VR infrastructure. The work used a methodology known as Design Science Research (DSR), which entails 'using design as a research method or technique'. The key stages of the galaxy modelling pipeline are FITS data preparation, galaxy modelling, Unity 3D visualisation, and VR display. Since the FITS data format cannot be read by the Unity Game Engine directly, a DLL (CSharpFITS) that provides native support for reading and writing FITS files was used. The galaxy modeller uses an approach that integrates cleaned FITS image pixels into the graphics pipeline of the Unity Game Engine.
The cleaned FITS images are input to the galaxy modeller pipeline phase, whose pre-processing script extracts pixel values and galaxy world positions and colour-maps the FITS image pixels. The user can visualise galaxies in different light bands, control the blend of an image with similar images from different sources, or fuse images for a holistic view. The framework will allow users to build tools that realise complex workflows for public outreach, and possibly scientific work, with increased scalability, near-real-time interactivity, and ease of access. The application is presented in an immersive environment and can use any commercially available headset built on the OpenXR API. The user can select galaxies in the scene, teleport to a galaxy, pan, zoom in and out, and change the colour gradients of the galaxy. The findings and design lessons learnt in the implementation of different use cases will contribute to the development and design of game-based visualisation tools in immersive environments by enabling informed decisions to be made.
Keywords: astronomy, visualisation, multiwavelength dataset, virtual reality
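The colour-mapping of raw FITS pixels in the pre-processing script can be illustrated with a common percentile-clip normalisation, a standard step before mapping astronomical intensities to display colours. This is a NumPy sketch on synthetic data; the percentile bounds and the toy image are assumptions, not values from the application.

```python
import numpy as np

def scale_pixels(image, lo_pct=1.0, hi_pct=99.0):
    """Percentile-clip raw image pixels and normalise to [0, 1].

    Astronomical images have a huge dynamic range, so clipping at
    assumed percentile bounds keeps faint structure visible before
    the values are mapped to vertex/texture colours in the engine.
    """
    lo, hi = np.percentile(image, [lo_pct, hi_pct])
    clipped = np.clip(image, lo, hi)
    return (clipped - lo) / (hi - lo)

# Toy "FITS" pixel array: faint noisy background plus a bright source
rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, size=(64, 64))
img[30:34, 30:34] = 10000.0  # bright galaxy core saturates the scale
scaled = scale_pixels(img)
```

The normalised values in `scaled` would then be fed to whatever colour gradient the user selects in the VR scene.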
Procedia PDF Downloads 92
21834 Validation of a Fluid-Structure Interaction Model of an Aortic Dissection versus a Bench-Top Model
Authors: K. Khanafer
Abstract:
The aim of this investigation was to validate a fluid-structure interaction (FSI) model of type B aortic dissection against our experimental results from a bench-top model. Another objective was to study the relationship between the size of a septectomy, which increases the outflow of the false lumen, and its effect on the pressure differential between the true lumen and the false lumen. FSI analysis based on Galerkin's formulation was used to study the flow pattern and hemodynamics within a flexible type B aortic dissection model, using boundary conditions from our experimental data. The numerical results of our model were verified against the experimental data for various tear sizes and locations. Thus, CFD tools have a potential role in evaluating different scenarios and aortic dissection configurations.
Keywords: aortic dissection, fluid-structure interaction, in vitro model, numerical
Procedia PDF Downloads 271
21833 Development of a PM2.5 Forecasting System in Seoul, South Korea Using Chemical Transport Modeling and ConvLSTM-DNN
Authors: Ji-Seok Koo, Hee‑Yong Kwon, Hui-Young Yun, Kyung-Hui Wang, Youn-Seo Koo
Abstract:
This paper presents a forecasting system for PM2.5 levels in Seoul, South Korea, leveraging a combination of chemical transport modeling and ConvLSTM-DNN machine learning technology. Exposure to PM2.5 has known detrimental impacts on public health, making its prediction crucial for establishing preventive measures. Existing forecasting models, like the Community Multiscale Air Quality (CMAQ) and Weather Research and Forecasting (WRF) models, are hindered by their reliance on uncertain input data, such as anthropogenic emissions and meteorological patterns, as well as by certain intrinsic model limitations. The system we have developed specifically addresses these issues by integrating machine learning and using carefully selected input features that account for local and distant sources of PM2.5. In South Korea, the PM2.5 concentration is greatly influenced by both local emissions and long-range transport from China, and our model effectively captures these spatial and temporal dynamics. Our PM2.5 prediction system combines the strengths of the hybrid machine learning algorithms ConvLSTM and DNN to improve upon the limitations of the traditional CMAQ model. Data used in the system include forecast output from the CMAQ and WRF models, along with actual PM2.5 concentrations and weather variables from monitoring stations in China and South Korea. The system was implemented specifically for Seoul's PM2.5 forecasting.
Keywords: PM2.5 forecast, machine learning, ConvLSTM, DNN
Procedia PDF Downloads 54
21832 Revealing the Potential of Geotourism and Geoheritage of Gedangsari Area, Yogyakarta
Authors: Cecilia Jatu, Adventino
Abstract:
Gedangsari is located in Gunungkidul, Yogyakarta Province, and meets several criteria for designation as a new geosite. The research area lies in the southern mountain zone of Java and is composed of five rock formations of Oligocene to Middle Miocene age. The purpose of this study is to reveal the geotourism and geoheritage potential of Gedangsari so that it can be proposed as a new geosite, and to produce a geosite map of the area. The research method is descriptive and includes quantitative geological data collection and surveys of geotourism and heritage sites, supported by petrographic analysis, geological structure analysis, geological mapping, and a SWOT analysis. The geological data show that Gedangsari consists of igneous rock (intrusions), pyroclastic rock, and sedimentary rock, a setting that produces varied and distinctive geomorphological features. Geotourism sites in Gedangsari include the Luweng Sampang Canyon, the Gedangsari Bouma Sequence, the Watugajah Columnar Joint, the Gedangsari Marine Fan Sediment, and the Tegalrejo Waterfall. There is also Tegalrejo Village, which can be considered a geoheritage site because of its culture and traditional batik cloth. According to the results of the SWOT analysis, the Gedangsari geosite must be developed and properly promoted in order to raise its profile. The development of the geosite area will have a significant impact, improving the economic growth of the surrounding community, and can be used by the government as baseline information for sustainable development. In addition, an educational map of the geological conditions and geotourism locations of the Gedangsari geosite can increase people's knowledge about Gedangsari.
Keywords: Gedangsari, geoheritage, geotourism, geosite
Procedia PDF Downloads 122
21831 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of a living or non-living physical asset. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage; finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
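The offline construction of a low-dimensional space can be sketched with a POD-style truncated SVD of solution snapshots. This is an illustrative NumPy sketch on synthetic data; the paper's actual reduced-basis construction for the truss FE model may differ, and the energy tolerance used for truncation is an assumption.

```python
import numpy as np

# Offline stage: collect "snapshots" of full-order solutions and build
# a reduced basis by truncated SVD.
rng = np.random.default_rng(3)
n_dof, n_snapshots = 200, 30

# Synthetic snapshots that (mostly) live in a 3-dimensional subspace,
# standing in for FE solutions under varying load/damage parameters.
modes = rng.standard_normal((n_dof, 3))
coeffs = rng.standard_normal((3, n_snapshots))
snapshots = modes @ coeffs + 1e-6 * rng.standard_normal((n_dof, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999) + 1)  # assumed energy tolerance
basis = U[:, :r]  # low-dimensional space used during the online stage

# Online stage: project a new full-order state onto the reduced basis,
# a cheap operation compared with a full FE solve.
x = modes @ rng.standard_normal(3)
x_rb = basis @ (basis.T @ x)
err = np.linalg.norm(x - x_rb) / np.linalg.norm(x)
```

The point of the offline/online split is exactly this: the expensive SVD happens once, while each online evaluation reduces to small matrix-vector products.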
Procedia PDF Downloads 99
21830 Fostering Resilience in Early Adolescents: A Canadian Evaluation of the HEROES Program
Authors: Patricia L. Fontanilla, David Nordstokke
Abstract:
Introduction: Today’s children and youth face increasing social and behavioural challenges, leading to delays in social development and greater mental health needs. Early adolescents (aged 9 to 14) are experiencing a rise in mental health symptoms and diagnoses. This study examines the impact of HEROES, a social-emotional learning (SEL) program, on resilience and academic outcomes in early adolescents. The HEROES program is designed to enhance resilience, the ability to adapt and thrive in the face of adversity, equipping youth to navigate developmental transitions and challenges. The study's objective was to evaluate the program's long-term effectiveness by measuring changes in resilience and academic resilience across 10 months. Methodology: The study collected data from 21 middle school students (grades 7 to 9) in a rural Canadian school. Quantitative data were gathered at four intervals: pre-intervention, post-intervention, and at 2- and 4-month follow-ups, and were analyzed with linear mixed models (LMM). Results: Findings showed statistically significant increases in academic resilience over time and significant increases in resilience from pre-intervention to 2 and 4 months later. Limitations include the small sample size, which may affect generalizability. Conclusion: The HEROES program demonstrates promise in increasing resilience and academic resilience among early adolescents through SEL skill development.
Keywords: academic resilience, early adolescence, resilience, SEL, social-emotional learning program
Procedia PDF Downloads 11
21829 An Inverse Approach for Determining Creep Properties from a Miniature Thin Plate Specimen under Bending
Authors: Yang Zheng, Wei Sun
Abstract:
This paper describes a new approach that can be used to interpret the experimental creep deformation data obtained from a miniaturized thin-plate bending specimen test in terms of the corresponding uniaxial data, based on an inverse application of the reference stress method. The geometry of the thin plate is fully defined by the span of the support, l, the width, b, and the thickness, d. Firstly, analytical solutions for the steady-state, load-line creep deformation rate of thin plates obeying Norton's power law were obtained under plane-stress (b → 0) and plane-strain (b → ∞) conditions, from which it can be seen that the load-line deformation rate under plane-stress conditions is much higher than that under plane-strain conditions. Since analytical solutions are not available for plates with arbitrary b-values, finite element (FE) analyses are used to obtain them. Based on the FE results for various b/l ratios and creep exponents, n, together with the analytical solutions under plane-stress and plane-strain conditions, approximate numerical solutions for the deformation rate were obtained by curve fitting. Using these solutions, a reference stress method is utilised to establish the conversion relationships between the applied load and the equivalent uniaxial stress, and between the creep deformations of the thin plate and the equivalent uniaxial creep strains. Finally, the accuracy of the empirical solution was assessed using a set of "theoretical" experimental data.
Keywords: bending, creep, thin plate, materials engineering
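For orientation, the uniaxial Norton power law assumed for the material and the generic shape of the reference-stress conversions can be written as below. This is a schematic statement only: the conversion factors α and β stand for the quantities the paper obtains from the FE-based curve fits over b/l and n, and the bending-type dimensional scalings shown are assumptions, not the paper's fitted expressions.

```latex
\dot{\varepsilon}^{c} = A\,\sigma^{n},
\qquad
\sigma_{\mathrm{ref}} = \alpha\,\frac{P\,l}{b\,d^{2}},
\qquad
\dot{\varepsilon}^{c}_{\mathrm{ref}} = \beta\,\frac{\dot{\Delta}^{c}\,d}{l^{2}}
\;\Longrightarrow\;
\dot{\Delta}^{c} = \frac{l^{2}}{\beta\,d}\,A\,\sigma_{\mathrm{ref}}^{\,n}
```

Inverting the last relation is what allows measured load-line deformation rates from the miniature specimen to be converted back into equivalent uniaxial creep data.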
Procedia PDF Downloads 474
21828 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns, and for a few of them this becomes a financially rewarding profession, made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bipartite network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following on from this question, we ask how these features can be used to predict a commentator's choice to comment on one video rather than another, considering the characteristics of the commentators, videos, topics, channels, and recommendations. We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented on. The interest lies in discovering features that have not classically been considered markers of popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of the 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video.
The creation of a link will be explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From this set of 391,588 videos, we extract the channels that can be monetized according to YouTube regulations (channels with at least 1,000 subscribers and more than 4,000 hours of viewing time during the last twelve months). In the end, we have a data set of 128,462 videos from 4,093 channels. Based on these videos, we have a data set of 1,032,771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment, and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
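The comment-link construction described above can be sketched with a toy bipartite graph. This is an illustrative Python example using networkx; the node names are invented, and the ERGM estimation itself (for which networkx has no standard implementation) is not shown.

```python
import networkx as nx

# Toy comment records: (commentator, video) pairs, one per unique comment.
comments = [
    ("user_a", "video_1"), ("user_a", "video_2"),
    ("user_b", "video_1"), ("user_c", "video_3"),
    ("user_d", "video_1"),
]

G = nx.Graph()
G.add_nodes_from({u for u, _ in comments}, bipartite=0)  # commentators
G.add_nodes_from({v for _, v in comments}, bipartite=1)  # videos
G.add_edges_from(comments)  # a link = one commentator commented one video

# A video's degree is its number of unique commentators -- the link
# count whose formation the ERGM seeks to explain with video features.
video_degree = {v: G.degree(v) for v in ("video_1", "video_2", "video_3")}
```

In the study's setting, each edge would additionally carry the video's features (duration, likes, views, channel, etc.) as covariates for the ERGM.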
Procedia PDF Downloads 68
21827 Production Increase of C-Central Wells, Bahr Essalam, Libya
Authors: Emed Krekshi, Walid Ben Husein
Abstract:
The Bahr Essalam gas-condensate field is located off the Libyan coast and is currently produced by Mellitah Oil and Gas (MOG). Gas and condensate are produced from the Bahr Essalam reservoir through a mixture of platform and subsea wells, with the subsea wells gathered at the western manifolds and delivered to the Sabratha platform via a 22-inch pipeline. Gas is gathered and dehydrated on the Sabratha platform and then delivered to the Mellitah gas plant via an existing 36-inch gas export pipeline, while the condensate separated on the platform is delivered to the Mellitah gas plant via an existing 10-inch export pipeline. The Bahr Essalam Phase II project includes two production wells (CC16 and CC17) at C-Central A, connected to the Sabratha platform via a new 10.9 km long 10”/14” production pipeline. Production rates from CC16 and CC17 have exceeded the maximum planned rate of 40 MMSCFD per well. A hydrothermal analysis was therefore conducted to review and verify the input data, focusing on the variation of the flowing wellhead pressure as a function of flow rate, and to compare the available input data against the previous design input data to determine the extent of change. The steady-state and transient simulations performed with OLGA yielded coherent results and confirmed the possibility of achieving flow rates of up to 60 MMSCFD per well without exceeding the design temperatures, pressures, and velocities.
Keywords: Bahr Essalam, Mellitah Oil and Gas, production flow rates, steady and transient
Procedia PDF Downloads 58
21826 The Effect of Change Communication towards Commitment to Change through the Role of Organizational Trust
Authors: Enno R. Farahzehan, Wustari L. Mangundjaya
Abstract:
Organizational change is necessary to develop innovation and to remain competitive. Organizational changes are also made to defend the existence of the organization itself. Success in implementing organizational change depends on a variety of factors, one of which is the individual (employee) who carries out the changes. Employees must have the willingness and ability to carry out the changes, and they must also have a commitment to change for organizational change to succeed. This study aims to examine the effect of change communication on commitment to change through the role of organizational trust. The respondents of this study were employees who work in organizations which have been or are currently undergoing organizational change. The data were collected using the Change Communication, Commitment to Change, and Organizational Trust inventories. The data were analyzed using regression. The results showed that change communication has an effect on commitment to change, and that this effect is stronger when mediated by organizational trust. This paper contributes to the knowledge and implications of organizational change by showing that change communication can affect employees' commitment to change if there is trust in the organization.Keywords: change communication, commitment to change, organizational trust, organizational change
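The mediation design described in this abstract can be sketched with ordinary least squares: the total effect of change communication on commitment to change is compared with its direct effect after controlling for organizational trust. A minimal illustration on synthetic data (the variable names, coefficients, and sample are hypothetical, not the study's):

```python
import numpy as np

def ols_slope(x, y):
    """Simple OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def mediation_effect(comm, trust, commit):
    """Baron-and-Kenny-style decomposition: total effect of change
    communication on commitment (c) versus its effect after controlling
    for trust (c'); c - c' approximates the mediated (indirect) effect."""
    c = ols_slope(comm, commit)
    X = np.column_stack([np.ones_like(comm), comm, trust])
    beta, *_ = np.linalg.lstsq(X, commit, rcond=None)
    c_prime = beta[1]
    return c, c_prime

rng = np.random.default_rng(1)
comm = rng.normal(size=500)
trust = 0.7 * comm + rng.normal(scale=0.5, size=500)    # trust driven by communication
commit = 0.2 * comm + 0.6 * trust + rng.normal(scale=0.5, size=500)

c, c_prime = mediation_effect(comm, trust, commit)
print(round(c, 2), round(c_prime, 2))
```

Because part of the effect flows through trust, the total effect `c` exceeds the direct effect `c_prime`, which is the pattern the abstract reports.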
Procedia PDF Downloads 342
21825 The Classification Accuracy of Finance Data through Holder Functions
Authors: Yeliz Karaca, Carlo Cattani
Abstract:
This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These countries are the ones most affected by the attributes related to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to the Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the outcomes of classification accuracy have been compared with regard to the attributes which affect the countries' financial development. This study has made it possible to reveal, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).Keywords: artificial neural networks, finance data, Holder regularity, multifractals
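The local Holder exponent underlying stage (a) can be estimated, in the simplest setting, as the slope of local oscillations against scale in log-log coordinates: a rough (Brownian-like) series yields a low exponent, a smooth series an exponent near 1. A minimal sketch (not the authors' implementation; the scales and windowing are illustrative):

```python
import numpy as np

def local_holder_exponent(x, t, scales=(2, 4, 8, 16, 32)):
    """Estimate the local Holder exponent of series x at index t from the
    growth of local oscillations osc(s) = max - min over windows of
    radius s: regularity is the slope of log osc(s) versus log s."""
    logs, logosc = [], []
    for s in scales:
        lo, hi = max(0, t - s), min(len(x), t + s + 1)
        window = x[lo:hi]
        osc = window.max() - window.min()
        if osc > 0:
            logs.append(np.log(s))
            logosc.append(np.log(osc))
    slope, _ = np.polyfit(logs, logosc, 1)
    return slope

rng = np.random.default_rng(0)
rough = np.cumsum(rng.standard_normal(512))   # Brownian-like path
smooth = np.linspace(0.0, 1.0, 512) ** 2      # smooth quadratic ramp
print(round(local_holder_exponent(rough, 256), 2),
      round(local_holder_exponent(smooth, 256), 2))
```

Attributes whose series show distinctive, stable exponents are then candidates for the self-similar features fed into the ANN classifiers.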
Procedia PDF Downloads 246
21824 3D Property Modelling of the Lower Acacus Reservoir, Ghadames Basin, Libya
Authors: Aimen Saleh
Abstract:
The Silurian Lower Acacus sandstone is one of the main reservoirs in northwest Libya. Our aim in this study is to gain a robust understanding of the hydrocarbon potential and distribution in the area. To date, the depositional environment of the Lower Acacus reservoir is still open to discussion and contradiction. Hence, building a three-dimensional (3D) property model is one way to support the analysis and description of the reservoir, its properties and characterization, and so is of great value in this project. The 3D model integrates different data sets, incorporating well log data, petrophysical reservoir properties, and seismic data. The finalized depositional environment model of the Lower Acacus concludes that the area is located in a deltaic transitional depositional setting, which ranges from a wave-dominated delta to a tide-dominated delta type. This interpretation was carried out through a series of steps of model generation, core description, and Formation Microresistivity Image (FMI) tool interpretation. Analysis of the core data shows that the Lower Acacus layers exhibit a strong effect of tidal energy; these traces are found imprinted in different types of sedimentary structures, for example, cross-bedding such as herringbone structures, and wavy and flaser bedding. In spite of the recognition of some minor marine transgression events in the area, the coarsening-upward cycles of sand and shale layers in the Lower Acacus demonstrate the presence of a major regressive phase of sea level. Consequently, we produced a final package of this model as a complemented set of facies distribution, porosity, and oil presence. It also records the petroleum system and the process of hydrocarbon migration and accumulation.
Finally, this model suggests that the area can be outlined into three main segments of hydrocarbon potential, which can be a textbook guide for future exploration and production strategies in the area.Keywords: Acacus, Ghadames, Libya, Silurian
Procedia PDF Downloads 143
21823 Variations in Heat and Cold Waves over Southern India
Authors: Amit G. Dhorde
Abstract:
It is now well established that global surface air temperatures have increased significantly during the period that followed the industrial revolution. One of the main predictions of climate change is that the occurrence of extreme weather events will increase in the future. In many regions of the world, high-temperature extremes have already started occurring with rising frequency. The main objective of the present study is to understand spatial and temporal changes in days with heat and cold wave conditions over southern India. The study area includes the region of India that lies to the south of the Tropic of Cancer. To fulfill this objective, daily maximum and minimum temperature data for 80 stations were collected for the period 1969-2006 from the National Data Center of the India Meteorological Department. After assessing the homogeneity of the data, 62 stations were finally selected for the study. Heat and cold waves were classified as slight, moderate and severe based on the criteria given by the India Meteorological Department. For every year, the number of days experiencing heat and cold wave conditions was computed. These data were analyzed with linear regression to detect any existing trend. Further, the time period was divided into four decades to investigate the decadal frequency of the occurrence of heat and cold waves. The results revealed that the average annual temperature over southern India shows an increasing trend, which signifies warming over this area. Further, slight cold waves during the winter season have been decreasing at the majority of the stations. The moderate cold waves also show a similar pattern at the majority of the stations. This is an indication of warming winters over the region. Besides this analysis, other extreme indices were also analyzed, such as extremely hot days, hot days, very cold nights, cold nights, etc.
This analysis revealed that nights are becoming warmer, and days are also getting warmer over some regions.Keywords: heat wave, cold wave, southern India, decadal frequency
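The counting-and-trend procedure described above can be sketched as follows; the departure thresholds are illustrative stand-ins, not the exact India Meteorological Department criteria, and the annual counts are synthetic:

```python
import numpy as np

def heat_wave_days(tmax, normal, slight=4.0, severe=6.0):
    """Count days whose maximum temperature departs from the
    climatological normal by at least the slight / severe thresholds
    (degrees C). Thresholds here are illustrative only."""
    dep = tmax - normal
    return int((dep >= slight).sum()), int((dep >= severe).sum())

def annual_trend(years, counts):
    """Least-squares slope through annual heat-wave-day counts
    (days per year); a positive slope indicates rising occurrence."""
    slope, intercept = np.polyfit(years, counts, 1)
    return slope

years = np.arange(1969, 2007)
counts = 5 + 0.2 * (years - 1969)   # synthetic, steadily rising counts
print(round(annual_trend(years, counts), 3))
```

Applying the same slope test per station, as the study does, distinguishes stations with significant warming trends from those without.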
Procedia PDF Downloads 128
21822 Study of the Effect of Inclusion of TiO2 in Active Flux on Submerged Arc Welding of Low Carbon Mild Steel Plate and Parametric Optimization of the Process by Using DEA Based Bat Algorithm
Authors: Sheetal Kumar Parwar, J. Deb Barma, A. Majumder
Abstract:
Submerged arc welding is a very complex process. It is a very efficient and high-performance welding process. In the present study, an attempt has been made to reduce welding distortion by an increased amount of oxide flux through TiO2 in the submerged arc welding process. Care has been taken to avoid excessive addition of the agent so as to attain significant results. A Data Envelopment Analysis (DEA) based BAT algorithm is used for parametric optimization, in which DEA is used to convert the multi-response parameters into a single response parameter. The present study also helps to establish the effectiveness of the addition of TiO2 to the active flux during the submerged arc welding process.Keywords: BAT algorithm, design of experiment, optimization, submerged arc welding
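A minimal version of the BAT metaheuristic used for the parametric optimization might look like the following; the loudness and pulse-rate schedules are simplified, and the toy sphere objective stands in for the single DEA-derived response the paper actually optimizes:

```python
import numpy as np

def bat_algorithm(objective, bounds, n_bats=20, n_iter=100, seed=0):
    """Simplified bat algorithm: bats move with frequency-scaled
    velocities toward the best solution, half of them take a small
    random walk around the best, and only improvements are accepted."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    d = len(lo)
    x = rng.uniform(lo, hi, size=(n_bats, d))
    v = np.zeros((n_bats, d))
    fit = np.array([objective(b) for b in x])
    best = x[fit.argmin()].copy()
    for _ in range(n_iter):
        freq = rng.uniform(0.0, 2.0, size=(n_bats, 1))
        v += (x - best) * freq
        cand = np.clip(x + v, lo, hi)
        # local random walk around the current best for half the bats
        walk = best + 0.01 * rng.standard_normal((n_bats, d))
        mask = rng.random(n_bats) < 0.5
        cand[mask] = np.clip(walk[mask], lo, hi)
        cand_fit = np.array([objective(b) for b in cand])
        improved = cand_fit < fit
        x[improved], fit[improved] = cand[improved], cand_fit[improved]
        best = x[fit.argmin()].copy()
    return best, fit.min()

best, val = bat_algorithm(lambda z: float(np.sum(z ** 2)),
                          (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(round(val, 4))
```

In the paper's setting, `objective` would return the DEA efficiency score computed from the measured weld responses at a given parameter setting.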
Procedia PDF Downloads 639
21821 Introducing a Proper Total Quality Management Model for Libraries
Authors: Alireza Shahraki, Kaveh Keshmiry Zadeh
Abstract:
Total quality management in libraries is of particular importance because high-quality libraries can facilitate the sustained development process in countries. This study was conducted to examine the feasibility of implementing total quality management in the libraries of Sistan and Baluchestan and to provide an appropriate model for this concern. All of the officials and employees of Sistan and Baluchestan libraries (23 individuals) constitute the population of the study. The data-gathering tool is a questionnaire designed based on ISO 9000. The data extracted from the questionnaires were analyzed using SPSS software. Results indicate that the highest degree of conformance to the 8 principles of ISO 9000 is attributed to the principle of 'users' (69.9%) and the lowest degree is associated with 'decision making based on facts' (39.1%). Moreover, a significant relationship was observed among the items (1 and 3), (2 and 5), (2 and 7), (3 and 5), (4 and 5), (4 and 7), (4 and 8), (5 and 7), and (7 and 8). According to the research findings, it can generally be said that the libraries of Sistan and Baluchestan are not yet in a position to implement TQM.Keywords: quality management, total quality, university libraries, libraries management
Procedia PDF Downloads 340
21820 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines the method of how to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural network. These methods were demonstrated with a case study where the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaic (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects on the error between the calibration data and the model-generated outputs of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of the ANFIS models highly depends on the range of cases for which it was calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point wherein increasing the input values does not improve the GWP and LCOE any further.
In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors that are just slightly higher than those of DAFIS. The inherent complexity of a Life Cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result compared to those generated by the Life Cycle Methodology, it does provide a relatively simpler way of generating knowledge- and data-based estimates that could be used during the initial design of a system.Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
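A zero-order Sugeno-type inference of the kind compared in this abstract can be sketched in a few lines; the membership centres, rule set, and output levels below are purely illustrative, not the study's calibrated model:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_fis(irradiation, efficiency, rules):
    """Zero-order Sugeno inference: each rule pairs Gaussian membership
    functions on the two inputs with a constant output level; the firing
    strength is the product of memberships, and the crisp output is the
    weighted average of the rule levels."""
    weights, outputs = [], []
    for (c1, s1), (c2, s2), z in rules:
        w = gauss(irradiation, c1, s1) * gauss(efficiency, c2, s2)
        weights.append(w)
        outputs.append(z)
    weights = np.array(weights)
    return float(np.dot(weights, outputs) / weights.sum())

# Hypothetical rules mapping (irradiation kWh/m2/day, module efficiency %)
# to an LCOE-like score: better inputs -> lower score.
rules = [((3.0, 1.0), (14.0, 2.0), 0.9),   # low sun, low efficiency -> high
         ((5.5, 1.0), (18.0, 2.0), 0.5),   # moderate conditions
         ((7.0, 1.0), (22.0, 2.0), 0.2)]   # high sun, high efficiency -> low

good = sugeno_fis(6.8, 21.0, rules)
poor = sugeno_fis(3.2, 14.5, rules)
print(round(good, 3), round(poor, 3))
```

In DAFIS the membership centres would come from clustering the calibration data, and in ANFIS both the memberships and the output levels would be fitted by the neural-network training step.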
Procedia PDF Downloads 164
21819 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain
Authors: Zachary Blanks, Solomon Sonya
Abstract:
Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of the adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching; this research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from Ugandan rain forest park rangers.
Second, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month versus entire seasons, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules by entire seasons.Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection
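The impute-then-ensemble pipeline described above can be sketched with scikit-learn; the synthetic features are stand-ins for the rangers' patrol data, and mean imputation is a deliberate simplification of the multiple-imputation methods named in the abstract:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-in for patrol features (densities, distances, ...).
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan   # 20% of entries missing at random

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
imputer = SimpleImputer(strategy="mean").fit(X_tr)   # fit on training data only

aucs = {}
for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(imputer.transform(X_tr), y_tr)
    proba = model.predict_proba(imputer.transform(X_te))[:, 1]
    aucs[type(model).__name__] = roc_auc_score(y_te, proba)

print({name: round(auc, 3) for name, auc in aucs.items()})
```

Comparing the held-out AUC of both ensembles under the same imputation mirrors the paper's evaluation of algorithm performance on a partially missing dependent variable.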
Procedia PDF Downloads 292
21818 Modelling High-Frequency Crude Oil Dynamics Using Affine and Non-Affine Jump-Diffusion Models
Authors: Katja Ignatieva, Patrick Wong
Abstract:
We investigate the dynamics of high-frequency energy prices, including crude oil and electricity prices. The returns of the underlying quantities are modelled using various parametric models, such as a stochastic volatility framework with correlated jumps (SVCJ), as well as non-parametric alternatives, which are purely data-driven and do not require specification of the drift or the diffusion coefficient function. Using different statistical criteria, we investigate the performance of the considered parametric and nonparametric models in their ability to forecast price series and volatilities. Our models incorporate possible seasonalities in the underlying dynamics and utilise advanced estimation techniques for the dynamics of energy prices.Keywords: stochastic volatility, affine jump-diffusion models, high frequency data, model specification, Markov chain Monte Carlo
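A stripped-down member of the jump-diffusion family discussed here can be simulated with an Euler scheme; this constant-volatility sketch omits the stochastic-volatility component of SVCJ, and all parameters are arbitrary illustrations rather than estimates from energy data:

```python
import numpy as np

def simulate_jump_diffusion(s0, mu, sigma, lam, jump_mu, jump_sigma,
                            n_steps, dt, rng):
    """Euler scheme for a constant-volatility log-price jump-diffusion:
    d log S_t = (mu - sigma^2 / 2) dt + sigma dW_t + dJ_t,
    where J is a compound Poisson process with intensity lam and
    Normal(jump_mu, jump_sigma) jump sizes."""
    log_s = np.empty(n_steps + 1)
    log_s[0] = np.log(s0)
    drift = (mu - 0.5 * sigma ** 2) * dt
    for t in range(n_steps):
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(lam * dt)                      # jump arrivals
        jumps = rng.normal(jump_mu, jump_sigma, size=n_jumps).sum()
        log_s[t + 1] = log_s[t] + drift + diffusion + jumps
    return np.exp(log_s)

rng = np.random.default_rng(7)
# Hypothetical half-hourly grid over one year; parameters are illustrative.
path = simulate_jump_diffusion(s0=80.0, mu=0.05, sigma=0.35, lam=20.0,
                               jump_mu=0.0, jump_sigma=0.05,
                               n_steps=17520, dt=1.0 / 17520, rng=rng)
print(len(path), bool(path.min() > 0.0))
```

In the full SVCJ specification, `sigma` would itself follow a mean-reverting process with jumps correlated to the price jumps, estimated via Markov chain Monte Carlo.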
Procedia PDF Downloads 104
21817 Educational Framework for Coaches on Injury Prevention in Adolescent Team Sports
Authors: Chantell Gouws, Lourens Millard, Anne Naude, Jan-Wessel Meyer, Brandon Stuwart Shaw, Ina Shaw
Abstract:
Background: Millions of South African youths participate in team sports, with netball and rugby being two of the largest worldwide. This increased participation and professionalism have resulted in an increase in the number of musculoskeletal injuries. Objective: This study examined the extent to which sport coaching knowledge translates to injuries and the prevention of injuries in adolescents participating in netball and rugby. Methods: Thirty-four South African sports coaches participated in the study. Eighteen netball coaches and 16 rugby coaches with varying levels of coaching experience were selected to participate. An adapted version of Nash and Sproule's questionnaire was used to investigate the coaches' knowledge with regard to sport-specific common injuries, injury prevention, fitness/conditioning, individual technique development, training programs, mental training, and preparation of players. The analysis of data was carried out using a number of different techniques outlined by Nash and Sproule (2012), determined by the type of data: descriptive statistics were used to summarise the data, quantitative data were used to determine the educational framework and knowledge of sports coaches on injury prevention, and numerical data were obtained through questions on sports injuries as well as on coaches' sports knowledge levels. Participants' knowledge was measured using a standardized scoring system. Results: For the 0-4 years of netball coaching experience, 76.4% of the coaches had knowledge and experience and 33.3% appropriate first aid knowledge, while for the 9-12 years and 13-16 years, 100% of the coaches had knowledge and experience and first aid knowledge. For the 0-4 years in rugby coaching experience, 59.1% had knowledge and experience and 71% the appropriate first aid knowledge; for the 17-20 years, 100% had knowledge and experience and first aid knowledge, while for 25 or more years, 45.5% had knowledge and experience.
In netball, 90% of injuries consisted of ankle injuries, followed by 70% for knee, 50% for shoulder, 20% for lower leg, and 15% for finger injuries. In rugby, 81% of the injuries occurred at the knee, followed by 50% for the shoulder, 40% for the ankle, 31% for the head and neck, and 25% for hamstring injuries. Six hours of training resulted in a 13% chance of injuries in netball and a 32% chance in rugby. For 10 hours of training, the injury prevalence was 10% in netball and 17% in rugby, while 15 hours resulted in an injury incidence of 58% in netball players and a 25% chance in rugby players. Conclusion: This study highlights the need for coaches to improve their knowledge in relation to injuries and injury prevention, along with factors that act as a preventative measure and promotes players’ well-being.Keywords: musculoskeletal injury, sport coaching, sport trauma
Procedia PDF Downloads 161
21816 Tourists' Perception of Osun Osogbo Festival in Osogbo, Osun State, Nigeria
Authors: Yina Donald Orga
Abstract:
Osun Osogbo festival is one of the biggest art festivals in Nigeria, with over 235,518 tourist visits in 2014. The purpose of this study is to generate data on tourists' perception of the Osun Osogbo Festival in Osogbo, Osun State, Nigeria. Based on the population of 199,860 tourist visits to the Osun Osogbo festival in 2013, the Krejcie and Morgan sample size table was used to select 768 tourists/respondents. Likert-scale questionnaires were used to elicit data from the respondents. Descriptive statistics were used to describe the characteristics of the respondents and analyse the tourists' perception of the festival. The findings from the data analysed suggest that the trend of domestic and international tourist visits for the festival showed a consistent increase from 2004, except in 2007 and 2008, and continued to increase up to 2013. This is an indication that the tourists are satisfied with the traditional, historical and authentic features of the festival. Also, findings from the study revealed that the tourists are not satisfied with the number of toilets at the Osun Sacred Grove, the crowd control of visitors during the festival, the medical personnel available to cater for visitors during the festival, etc. In view of the findings of the study, the following recommendations are suggested: provision of more toilets at the Osun Sacred Grove; recruitment of festival guides by the Osogbo Heritage Council to help control the huge crowd at the festival; and engagement of adequate medical personnel, by the Government of the State of Osun in conjunction with the Red Cross Society, to cater for the medical needs of visitors at the festival.Keywords: festival, perception, positive, tourists
Procedia PDF Downloads 206
21815 End to End Process to Automate Batch Application
Authors: Nagmani Lnu
Abstract:
Often, Quality Engineering refers to testing applications that have either a User Interface (UI) or an Application Programming Interface (API). We often find mature test practices, standards, and automation for UI or API testing. However, another kind of application is present in almost all industries that deal with data in bulk, often handled through something called a Batch Application. This is primarily an offline application that companies develop to process large data sets, often involving multiple business rules. The challenge becomes more prominent when we try to automate batch testing. This paper describes the approaches taken to test a batch application from the financial industry for the payment settlement process (a critical use case in all kinds of FinTech companies), resulting in 100% test automation in test creation and test execution. One can follow this approach for other batch use cases to achieve higher efficiency in the testing process.Keywords: batch testing, batch test automation, batch test strategy, payments testing, payments settlement testing
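The core of batch test automation is reconciling the records a batch job emits against an expected set. A minimal sketch (the settlement logic and record layout are invented for illustration, not the paper's actual system):

```python
def settle_batch(transactions):
    """Toy settlement batch: nets amounts per merchant. Stands in for
    the real (offline, rule-heavy) batch job under test."""
    totals = {}
    for txn in transactions:
        merchant, amount = txn["merchant"], float(txn["amount"])
        totals[merchant] = totals.get(merchant, 0.0) + amount
    return {m: round(v, 2) for m, v in totals.items()}

def assert_batch_output(actual, expected):
    """Record-by-record reconciliation, the heart of automated batch
    testing: every expected record must appear, with no extra records
    and no value drift."""
    missing = set(expected) - set(actual)
    extra = set(actual) - set(expected)
    drift = {m for m in expected if m in actual and actual[m] != expected[m]}
    assert not (missing or extra or drift), (missing, extra, drift)

txns = [{"merchant": "A", "amount": "10.00"},
        {"merchant": "A", "amount": "-2.50"},
        {"merchant": "B", "amount": "5.00"}]
result = settle_batch(txns)
assert_batch_output(result, {"A": 7.50, "B": 5.00})
print(result)
```

In practice the expected set would be generated from the same input file by an independent oracle (for example, SQL over the source records), so that both test creation and execution can run unattended.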
Procedia PDF Downloads 60