Search results for: data interpolating empirical orthogonal function

25892 Science and Technology as Contemporary Epistemological Conditions of Literature

Authors: Lin Zou

Abstract:

This paper explores how the development of science and technology in recent decades has created new conditions for literature and aesthetics. These are epistemological conditions that not only offer empirical understandings of human mentality, behavior, emotions, and humanity in general, but also reshape how value and ontological questions are understood and linked with humanity. This paper discusses the implications of these epistemological conditions for the depiction and interpretation of human subjectivity in literature. The paper first presents the argument that science and technology have created new conditions for literature and aesthetics, and then outlines the implications of these new conditions for literature and aesthetics. The main methodologies used are close reading and case studies.

Keywords: epistemological conditions, literature and aesthetics, science and technology, subjectivity

Procedia PDF Downloads 103
25891 Generating Insights from Data Using a Hybrid Approach

Authors: Allmin Susaiyah, Aki Härmä, Milan Petković

Abstract:

Automatic generation of insights from data using insight mining systems (IMS) is useful in many applications, such as personal health tracking, patient monitoring, and business process management. Existing IMS face challenges in controlling insight extraction, scaling to large databases, and generalising to unseen domains. In this work, we propose a hybrid approach consisting of rule-based and neural components for generating insights from data while overcoming the aforementioned challenges. Firstly, a rule-based Data2CNL component is used to extract statistically significant insights from data and represent them in a controlled natural language (CNL). Secondly, a BERTSum-based CNL2NL component is used to convert these CNL statements into natural language texts. We improve the model using task-specific and domain-specific fine-tuning. Our approach has been evaluated using statistical techniques and standard evaluation metrics. We overcame the aforementioned challenges and observed significant improvement with domain-specific fine-tuning.
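
A hypothetical sketch of the kind of rule-based data-to-CNL step described above: test whether a metric changed significantly between two periods and, if so, render the finding as a controlled natural language sentence. The metric, thresholds, and template phrasing are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def extract_insight(metric_name, baseline, recent, alpha=0.05):
    """Return a CNL insight string if the change is statistically significant."""
    t_stat, p_value = stats.ttest_ind(baseline, recent, equal_var=False)
    if p_value >= alpha:
        return None  # no significant insight to report
    direction = "increased" if np.mean(recent) > np.mean(baseline) else "decreased"
    change = abs(np.mean(recent) - np.mean(baseline)) / np.mean(baseline) * 100
    # Controlled natural language template
    return (f"INSIGHT: {metric_name} has {direction} by {change:.1f} percent "
            f"(p = {p_value:.3f}).")

baseline = np.random.normal(7000, 800, 30)   # e.g., daily step counts, weeks 1-4
recent = np.random.normal(8200, 800, 30)     # e.g., daily step counts, weeks 5-8
print(extract_insight("daily step count", baseline, recent))
```

In the full pipeline, such CNL sentences would then be passed to the neural CNL2NL component for fluent rephrasing.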

Keywords: data mining, insight mining, natural language generation, pre-trained language models

Procedia PDF Downloads 96
25890 Review of K0-Factors and Related Nuclear Data of the Selected Radionuclides for Use in K0-NAA

Authors: Manh-Dung Ho, Van-Giap Pham, Van-Doanh Ho, Quang-Thien Tran, Tuan-Anh Tran

Abstract:

The k0-factors and related nuclear data, i.e. the Q0-factors and effective resonance energies (Ēr), of the selected radionuclides used in k0-based neutron activation analysis (k0-NAA) were critically reviewed for integration into the “k0-DALAT” software. The k0- and Q0-factors of some short-lived radionuclides, 46mSc, 110Ag, 116m2In, 165mDy, and 183mW, were experimentally determined at the Dalat research reactor. The other radionuclides selected are: 20F, 36S, 49Ca, 60mCo, 60Co, 75Se, 77mSe, 86mRb, 115Cd, 115mIn, 131Ba, 134mCs, 134Cs, 153Gd, 153Sm, 159Gd, 170Tm, 177mYb, 192Ir, 197mHg, 239U and 239Np. Compared with the literature, the reviewed data were biased within 5.6-7.3%, and the experimentally re-determined factors were within 6.1-7.3%. The NIST standard reference materials Oyster Tissue (1566b), Montana II Soil (2711a) and Coal Fly Ash (1633b) were used to validate the new reviewed data, showing that the new data gave improved k0-NAA results with the “k0-DALAT” software, within 4.5-6.8% for the investigated radionuclides.

Keywords: neutron activation analysis, k0-based method, k0 factor, Q0 factor, effective resonance energy

Procedia PDF Downloads 113
25889 Modeling of a UAV Longitudinal Dynamics through System Identification Technique

Authors: Asadullah I. Qazi, Mansoor Ahsan, Zahir Ashraf, Uzair Ahmad

Abstract:

System identification of an Unmanned Aerial Vehicle (UAV), to acquire its mathematical model, is a significant step in the process of aircraft flight automation. The need for a reliable mathematical model is an established requirement for autopilot design, flight simulator development, aircraft performance appraisal, analysis of aircraft modifications, preflight testing of prototype aircraft, and investigation of fatigue life and stress distribution. This research is aimed at system identification of a fixed-wing UAV by means of a specifically designed flight experiment. The purposely designed flight maneuvers were performed on the UAV, and aircraft states were recorded during these flights. The acquired data were preprocessed for noise filtering and bias removal, followed by parameter estimation of the longitudinal dynamics transfer functions using the MATLAB system identification toolbox. Black-box transfer function models, in response to elevator and throttle inputs, were estimated using the least-squares error technique. The identification results show a high confidence level and goodness of fit between the estimated model and the actual aircraft response.
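
A minimal least-squares sketch of the kind of black-box identification described above: fitting a discrete-time ARX model of pitch-rate response to elevator input. The data, model orders, and the synthetic "true" system are hypothetical placeholders, not the authors' flight-test records or MATLAB workflow.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Fit y[k] = a1*y[k-1]+...+a_na*y[k-na] + b1*u[k-1]+...+b_nb*u[k-nb] by least squares."""
    n = max(na, nb)
    rows = []
    for k in range(n, len(y)):
        rows.append(np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]])  # regressor vector at step k
    Phi = np.array(rows)
    Y = y[n:]
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    y_hat = Phi @ theta
    fit = 100 * (1 - np.linalg.norm(Y - y_hat) / np.linalg.norm(Y - Y.mean()))
    return theta, fit  # estimated parameters and goodness of fit (%)

# Synthetic records standing in for preprocessed flight data
u = np.random.randn(500)                 # elevator deflection (rad)
y = np.zeros(500)
for k in range(2, 500):                  # a known 2nd-order system used only for this demo
    y[k] = 1.5*y[k-1] - 0.7*y[k-2] + 0.05*u[k-1] + 0.03*u[k-2]

theta, fit = fit_arx(y, u)
print("estimated parameters:", np.round(theta, 3), "fit: %.1f%%" % fit)
```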

Keywords: fixed wing UAV, system identification, black box modeling, longitudinal dynamics, least square error

Procedia PDF Downloads 313
25888 Optimizing Electric Vehicle Charging with Charging Data Analytics

Authors: Tayyibah Khanam, Mohammad Saad Alam, Sanchari Deb, Yasser Rafat

Abstract:

Electric vehicles are considered viable replacements for gasoline cars since they help in reducing harmful emissions and stimulate power generation through renewable energy sources, hence contributing to sustainability. However, one of the significant obstacles to the mass deployment of electric vehicles is the charging time anxiety among users and, thus, the subsequently long waiting times for available chargers at charging stations. Data analytics, on the other hand, has revolutionized the decision-making tasks of management and operating systems since its arrival. In this paper, we attempt to optimize the choice of EV charging stations for users in their vicinity by minimizing the time taken to reach the charging stations and the waiting times for available chargers. The time taken to travel to the charging station is calculated with the Google Maps API, and the waiting times are predicted by polynomial regression of the stored historical data. The proposed framework utilizes real-time and historical data from all operating charging stations in the city, assists the user in finding the most suitable charging station for their current situation, and can be implemented in a mobile phone application. The algorithm successfully predicts the most optimal choice of a charging station and the minimum required time for various sample data sets.
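
An illustrative sketch of the station-ranking idea: fit one polynomial-regression model of waiting time per station from historical data, then rank stations by predicted wait plus travel time obtained elsewhere (e.g., from a routing API). All station names, historical values, and travel times are made up for illustration.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
hours = np.arange(24).reshape(-1, 1)          # hour of day as the predictor

def fit_wait_model(observed_waits):
    model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
    model.fit(hours, observed_waits)
    return model

# Historical waiting times (minutes) per station, peaking in the evening
stations = {
    "Station A": {"travel_min": 12.0,
                  "model": fit_wait_model(5 + 20*np.exp(-(hours.ravel()-18)**2/20) + rng.normal(0, 1, 24))},
    "Station B": {"travel_min": 7.5,
                  "model": fit_wait_model(8 + 10*np.exp(-(hours.ravel()-17)**2/30) + rng.normal(0, 1, 24))},
}

def best_station(current_hour):
    totals = {name: s["travel_min"] + float(s["model"].predict([[current_hour]])[0])
              for name, s in stations.items()}
    return min(totals, key=totals.get), totals

print(best_station(18))
```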

Keywords: charging data, electric vehicles, machine learning, waiting times

Procedia PDF Downloads 174
25887 Adequacy of Advanced Earthquake Intensity Measures for Estimation of Damage under Seismic Excitation with Arbitrary Orientation

Authors: Konstantinos G. Kostinakis, Manthos K. Papadopoulos, Asimina M. Athanatopoulou

Abstract:

An important area of research in seismic risk analysis is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. Several conventional intensity measures of ground motion have been used to estimate their damage potential to structures. Yet, none of them has proved able to adequately predict the seismic damage of any structural system. Therefore, alternative advanced intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. The adequacy of a number of advanced earthquake intensity measures in predicting the structural damage of 3D R/C buildings under seismic excitation striking the building at an arbitrary incident angle is investigated in the present paper. To achieve this purpose, a symmetric-in-plan and an asymmetric 5-story R/C building are studied. The two buildings are subjected to 20 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along horizontal orthogonal axes forming 72 different angles with the structural axes. The response is computed by non-linear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures determined for an incident angle of 0°, as well as their maximum values over all seismic incident angles, are correlated with 9 structure-specific ground motion intensity measures. The research identified certain intensity measures which exhibited strong correlation with the seismic damage of the two buildings. However, their adequacy for estimation of the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
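
A hedged sketch of the correlation step described above: for each candidate intensity measure, correlate its values over the ground-motion set with the resulting damage measure (e.g., maximum interstory drift). The arrays below are synthetic placeholders, not the 20-record, 72-angle analysis results.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_records = 20
damage = rng.lognormal(mean=-4.5, sigma=0.4, size=n_records)    # max interstory drift ratio

intensity_measures = {
    "Sa(T1)":  damage * rng.normal(400, 40, n_records),         # strongly related IM
    "PGA":     damage * rng.normal(250, 120, n_records),        # more weakly related IM
    "Housner": rng.uniform(0.8, 1.6, n_records),                # unrelated control
}

for name, im in intensity_measures.items():
    r, p = pearsonr(np.log(im), np.log(damage))                 # log-log correlation
    print(f"{name:8s} r = {r:+.2f} (p = {p:.3f})")
```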

Keywords: damage indices, non-linear response, seismic excitation angle, structure-specific intensity measures

Procedia PDF Downloads 487
25886 Finding Data Envelopment Analysis Targets Using Multi-Objective Programming in DEA-R with Stochastic Data

Authors: R. Shamsi, F. Sharifi

Abstract:

In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs using the multi-objective programming (MOP) structure. In some problems, the inputs might be stochastic while the outputs are deterministic, and vice versa. In such cases, we propose a multi-objective DEA-R model, because in some cases (e.g., when unnecessary and irrational weights in the BCC model reduce the efficiency score) an efficient decision-making unit (DMU) is classified as inefficient by the BCC model, whereas the same DMU is considered efficient by the DEA-R model. In some other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). Thus, we provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model in the constant returns-to-scale case can be replaced by the MOP-DEA model without explicit outputs in the variable returns-to-scale case, and vice versa. Using interactive methods for solving the proposed model yields a projection corresponding to the viewpoint of the DM and the analyst, which is nearer to reality and more practical. Finally, an application is provided.
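
For orientation, a deterministic baseline for the projections discussed above: the standard input-oriented CCR envelopment linear program solved with SciPy. The stochastic, DEA-R, and multi-objective extensions of the paper are not reproduced here; the DMU inputs and outputs are made-up illustration data.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 3.0, 5.0],      # input 1 for DMUs A..D
              [3.0, 1.0, 2.0, 4.0]])     # input 2
Y = np.array([[1.0, 1.0, 1.5, 2.0]])     # single output
m, n = X.shape
s = Y.shape[0]

def ccr_efficiency(j0):
    c = np.r_[1.0, np.zeros(n)]                          # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])                   # sum(lambda*x) <= theta*x0
    A_out = np.hstack([np.zeros((s, 1)), -Y])            # sum(lambda*y) >= y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

for j, name in enumerate("ABCD"):
    print(f"DMU {name}: efficiency = {ccr_efficiency(j):.3f}")
```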

Keywords: DEA-R, multi-objective programming, stochastic data, data envelopment analysis

Procedia PDF Downloads 98
25885 Multilayer Thermal Screens for Greenhouse Insulation

Authors: Clara Shenderey, Helena Vitoshkin, Mordechai Barak, Avraham Arbel

Abstract:

Greenhouse cultivation is an energy-intensive process due to the high demand for cooling or heating according to external climatic conditions, which could be extreme in the summer or winter seasons. The thermal radiation rate inside a greenhouse depends mainly on the type of covering material and the greenhouse construction. Using additional thermal screens under a greenhouse covering, combined with a dehumidification system, improves the insulation and could be cost-effective. Greenhouse covering material usually contains protective ultraviolet (UV) radiation additives to prevent film wear, insect harm, and crop diseases. This paper investigates the overall heat transfer coefficient, or U-value, for greenhouse polyethylene covering containing UV additives and for glass covering, with or without a thermal screen supplement. The hot-box method was employed to evaluate the overall heat transfer coefficients experimentally as a function of the type and number of thermal screens. The results show that the overall heat transfer coefficient decreases with an increasing number of thermal screens as a hyperbolic function. The overall heat transfer coefficient highly depends on the ability of the material to reflect thermal radiation. Using a greenhouse covering, i.e., polyethylene film or glass, in combination with highly reflective thermal screens, i.e., containing about 98% aluminum stripes or aluminum foil, the U-value is reduced by 61%-89% in the first case and by 70%-92% in the second case, depending on the number of thermal screens. Using thermal screens made from low-reflective materials may reduce the U-value by 30%-57%. The heat transfer coefficient is an indicator of the thermal insulation properties of the materials, which allows farmers to make decisions on the use of appropriate thermal screens depending on the external and internal climate conditions in a greenhouse.
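
A minimal sketch of how an overall heat transfer coefficient is obtained from hot-box measurements, U = Q / (A·ΔT), and how the relative reduction from adding a thermal screen can be reported. The heating power, area, and temperatures below are illustrative numbers, not the experimental data.

```python
def u_value(heating_power_w, area_m2, t_inside_c, t_outside_c):
    """Overall heat transfer coefficient U = Q / (A * deltaT), in W/(m^2*K)."""
    return heating_power_w / (area_m2 * (t_inside_c - t_outside_c))

u_cover_only = u_value(heating_power_w=180.0, area_m2=4.0, t_inside_c=35.0, t_outside_c=5.0)
u_with_screen = u_value(heating_power_w=55.0, area_m2=4.0, t_inside_c=35.0, t_outside_c=5.0)

reduction = 100 * (1 - u_with_screen / u_cover_only)
print(f"U (cover only)  = {u_cover_only:.2f} W/m2K")
print(f"U (with screen) = {u_with_screen:.2f} W/m2K")
print(f"reduction       = {reduction:.0f} %")
```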

Keywords: energy-saving thermal screen, greenhouse cover material, heat transfer coefficient, hot box

Procedia PDF Downloads 133
25884 Computational Aided Approach for Strut and Tie Model for Non-Flexural Elements

Authors: Mihaja Razafimbelo, Guillaume Herve-Secourgeon, Fabrice Gatuingt, Marina Bottoni, Tulio Honorio-De-Faria

Abstract:

The challenge of this research is to provide engineers with a robust, semi-automatic method for calculating the optimal reinforcement of massive structural elements. In the absence of such a digital post-processing tool, design office engineers make intensive use of plate modelling, for which automatic post-processing is available. Plate models in massive areas, however, produce conservative results. In addition, the theoretical foundations of automatic post-processing tools for reinforcement are those of reinforced concrete beam sections. As long as there is no suitable alternative to automatic post-processing of plates, optimal modelling and a significant improvement in the constructability of massive areas cannot be expected. The strut-and-tie method is commonly used in civil engineering, but its result remains highly dependent on the judgement of the design engineer. The tool developed will support engineers in their choice of structure. The method implemented consists of defining a ground structure built on the basis of the principal stresses resulting from an elastic analysis of the structure, and then optimizing this structure according to the fully stressed design method. The first results yield a coherent initial network of connecting struts and ties, consistent with the cases encountered in the literature. The evolution of the tool will then make it possible to adapt the obtained latticework to the cracking states resulting from the loads applied during the life of the structure, including cyclic and dynamic loads. In addition, to satisfy the constructability constraint, a final reinforcement layout with an orthogonal arrangement and regulated spacing will be implemented in the tool.

Keywords: strut and tie, optimization, reinforcement, massive structure

Procedia PDF Downloads 132
25883 Integrated Model for Enhancing Data Security Processing Time in Cloud Computing

Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali

Abstract:

Cloud computing has been an important and promising field in the recent decade. Cloud computing allows sharing resources, services, and information among people across the whole world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users, and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a simple username and password scheme is used, with the password hashed one-way using SHA-2. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
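
A hedged sketch of the building blocks named above: Blowfish for encrypting a file before upload, and SHA-256 (a SHA-2 variant) both as an integrity digest and for one-way password hashing. It uses the PyCryptodome package; key handling, padding, and salting are simplified for illustration and are not the proposed model itself.

```python
import hashlib
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad
from Crypto.Random import get_random_bytes

def hash_password(password: str, salt: bytes) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()   # one-way hash

def encrypt_blowfish(key: bytes, plaintext: bytes):
    cipher = Blowfish.new(key, Blowfish.MODE_CBC)
    return cipher.iv, cipher.encrypt(pad(plaintext, Blowfish.block_size))

def decrypt_blowfish(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
    return unpad(cipher.decrypt(ciphertext), Blowfish.block_size)

key = get_random_bytes(16)                    # shared session key
data = b"example file contents ..."
digest = hashlib.sha256(data).hexdigest()     # integrity check value stored alongside

iv, ct = encrypt_blowfish(key, data)
recovered = decrypt_blowfish(key, iv, ct)
assert hashlib.sha256(recovered).hexdigest() == digest
print("integrity verified; password hash:", hash_password("secret", b"salt")[:16], "...")
```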

Keywords: cloud computing, data security, SAAS, PAAS, IAAS, Blowfish

Procedia PDF Downloads 343
25882 Comparison of Statistical Methods for Estimating Missing Precipitation Data in the River Subbasin Lenguazaque, Colombia

Authors: Miguel Cañon, Darwin Mena, Ivan Cabeza

Abstract:

In this work, the applicability of statistical methods for the estimation of missing precipitation data was compared and evaluated in the basin of the Lenguazaque river, located in the departments of Cundinamarca and Boyacá, Colombia. The methods used were simple linear regression, distance rate, local averages, mean rates, correlation with nearby stations, and multiple regression. The analysis used to determine the effectiveness of the methods was performed using three statistical tools: the coefficient of determination (r²), the standard error of estimation, and the Bland-Altman test of agreement. The analysis was performed by randomly removing real rainfall values in each of the seasons and then estimating them with the aforementioned methodologies to complete the missing data values. It was determined that the method with the highest performance and accuracy in the estimation of the data, under the conditions considered, was multiple regression with three nearby stations together with a random application scheme supported by the precipitation behavior of the related data sets.
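
A small sketch of the best-performing approach reported above: multiple regression that estimates a target station's precipitation from three nearby stations, evaluated on randomly removed values. The gauge records below are synthetic stand-ins, not the Lenguazaque data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_months = 120
nearby = rng.gamma(shape=2.0, scale=40.0, size=(n_months, 3))            # 3 nearby gauges (mm)
target = nearby @ np.array([0.4, 0.35, 0.2]) + rng.normal(0, 5, n_months)  # target gauge (mm)

# Simulate randomly removed (missing) values and fit on the remaining complete records
missing = rng.choice(n_months, size=12, replace=False)
train = np.setdiff1d(np.arange(n_months), missing)

model = LinearRegression().fit(nearby[train], target[train])
estimates = model.predict(nearby[missing])

print("r2 on held-out months:", round(r2_score(target[missing], estimates), 3))
print("standard error of estimate:", round(np.std(target[missing] - estimates, ddof=1), 2), "mm")
```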

Keywords: statistical comparison, precipitation data, river subbasin, Bland and Altman

Procedia PDF Downloads 458
25881 Investigating the Editing's Effect of Advertising Photos on the Virtual Purchase Decision Based on the Quantitative Electroencephalogram (EEG) Parameters

Authors: Parya Tabei, Maryam Habibifar

Abstract:

Decision-making is an important cognitive function that can be defined as the process of choosing an option among available options to achieve a specific goal. Consumer ‘need’ is the main reason for purchasing decisions. Human decision-making while buying products online is subject to various factors, one of which is the quality and effect of advertising photos. Advertising photo editing can have a significant impact on people's virtual purchase decisions. This technique helps improve the quality and overall appearance of photos by adjusting various aspects such as brightness, contrast, colors, cropping, resizing, and adding filters. This study, by examining the effect of editing advertising photos on the virtual purchase decision using EEG data, investigates the effect of edited images on the decision-making of customers. A group of 30 participants was asked to react to 24 edited and unedited images while their EEG was recorded. Analysis of the EEG data revealed increased alpha wave activity in the occipital regions (O1, O2) for both edited and unedited images, which is related to visual processing and attention. Additionally, there was an increase in beta wave activity in the frontal regions (FP1, FP2, F4, F8) when participants viewed edited images, suggesting involvement in cognitive processes such as decision-making and evaluating advertising content. Gamma wave activity also increased in various regions, especially the frontal and parietal regions, which are associated with higher cognitive functions such as attention, memory, and perception, when viewing the edited images. While the visual processing reflected by alpha waves remained consistent across different visual conditions, editing advertising photos appeared to boost neural activity in the frontal and parietal regions associated with decision-making processes. These findings suggest that photo editing could potentially influence consumer perceptions during virtual shopping experiences by modulating brain activity related to product assessment and purchase decisions.
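
A hedged sketch of one step implied by the analysis above: computing band power (theta, alpha, beta, gamma) for an EEG channel with Welch's method, so that activity under the edited and unedited image conditions can be compared. The signal, sampling rate, and band edges are simulated placeholders, not the study's recordings or exact pipeline.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

def band_powers(signal, fs):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = psd[mask].sum() * df        # approximate integral of PSD over band
    return powers

fs = 256                                           # Hz, hypothetical recording rate
t = np.arange(0, 10, 1 / fs)
eeg = (20 * np.sin(2 * np.pi * 10 * t)             # alpha component
       + 5 * np.sin(2 * np.pi * 25 * t)            # beta component
       + np.random.randn(t.size))                  # broadband noise

print({k: round(v, 2) for k, v in band_powers(eeg, fs).items()})
```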

Keywords: virtual purchase decision, advertising photo, EEG parameters, decision making

Procedia PDF Downloads 28
25880 The Chemical Transport Mechanism of Emitter Micro-Particles in Tungsten Electrode: A Metallurgical Study

Authors: G. Singh, H. Schuster, U. Füssel

Abstract:

The stability of the electric arc and the durability of the electrode tip used in Tungsten Inert Gas (TIG) welding demand a metallurgical study of the chemical transport mechanism of emitter oxide particles in the tungsten electrode under real welding conditions. Tungsten electrodes doped with rare earth emitter oxides such as La₂O₃, Th₂O₃, Y₂O₃, CeO₂ and ZrO₂ feature a comparatively lower work function than tungsten and thus have superior emission characteristics due to the lower surface temperature of the cathode. The local change in concentration of these emitter particles in the tungsten electrode due to high-temperature diffusion (chemical transport) can change its functional properties such as electrode temperature, work function, electron emission, and the stability of the electrode tip shape. The resulting increase in tip surface temperature results in electrode material loss. It was also observed that the tungsten recrystallizes into large grains at high temperature. When the grain boundaries are granular in shape, the intergranular diffusion of oxide emitter particles takes more time to reach the electrode surface. In the experimental work, the microstructure of the used electrode's tip surface will be studied by scanning electron microscopy and reflective X-ray techniques in order to gauge the extent of the diffusion and chemical reaction of the emitter particles. In addition, a simulation model is proposed to explain the effect of oxide particle diffusion on the electrode's microstructure, electron emission characteristics, and electrode tip erosion. This model suggests metallurgical modifications to the tungsten electrode to enhance its erosion resistance.

Keywords: rare-earth emitter particles, temperature-dependent diffusion, TIG welding, Tungsten electrode

Procedia PDF Downloads 173
25879 Outcome-Based Water Resources Management in the Gash River Basin, Eastern Sudan

Authors: Muna Mohamed Omer Mirghani

Abstract:

This paper responds to one of the key national development strategies and a typical challenge in the Gash Basin, as well as in different parts of Sudan, namely managing water scarcity in view of climate change impacts on minor water systems sustaining over 50% of the Sudanese population. While focusing on the Gash river basin, the ultimate aim is to replicate the same approach in similar water systems in central and west Sudan. The key objective of the paper is the identification of outcome-based water governance interventions in the Gash Basin, guided by the global Sustainable Development Goal six (SDG 6 on water and sanitation) and the Sudan water resource policy framework. The paper concludes that improved water resources management of the Gash Basin is a prerequisite for ensuring the desired policy outcomes of groundwater use and flood risk management. Analysis of various water governance dimensions in the Gash indicated that the operationalization of a basin-level institutional reform depends critically on informed actors and adapted practices through knowledge and technologies, along with the technical data and capacity needed to support them. Adapting the devolved institutional structure at the state level is recommended to strengthen the Gash basin regulatory function and improve the compliance of groundwater users.

Keywords: water governance, Gash Basin, integrated groundwater management, Sudan

Procedia PDF Downloads 167
25878 Hyperspectral Data Classification Algorithm Based on the Deep Belief and Self-Organizing Neural Network

Authors: Li Qingjian, Li Ke, He Chun, Huang Yong

Abstract:

In this paper, a method combining the Pohl Seidman deep belief network with a self-organizing neural network is proposed to classify targets. This method is mainly aimed at the high nonlinearity of hyperspectral images, the high sample dimension, and the difficulty in designing the classifier. The main features of the original data are extracted by the deep belief network. During feature extraction, known labeled samples are added to fine-tune the network, enriching the main characteristics. The extracted feature vectors are then classified by the self-organizing neural network. This method can effectively reduce the dimensionality of the data in the spectral dimension while preserving a large amount of the raw data information; it addresses the problems of traditional clustering as well as the long training time of deep learning algorithms when few labeled samples are available, and improves classification accuracy and robustness. Through data simulation, the results show that the proposed network structure can achieve higher classification precision in the case of a small number of known labeled samples.

Keywords: DBN, SOM, pattern classification, hyperspectral, data compression

Procedia PDF Downloads 328
25877 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images

Authors: Masood Varshosaz, Kamyar Hasanpour

Abstract:

In recent years, we have seen growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep the data acquisition cost low, augmentation techniques can be used to create additional data from existing images. There are many such techniques that can help generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of the models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, it is important to evaluate the impact of data augmentation on the performance of the deep learning models. In this paper, we evaluated the most common currently available 2D data augmentation techniques on a standard convolutional network trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combination. The results showed that the augmented models perform 1-3% better compared to the base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. Thus, we suggest a new method that employs simulated 3D human models to generate new data for training the network.
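
An illustrative sketch (not the authors' exact setup) of composing the 2D augmentations evaluated above, rotation, scaling, random cropping, flipping, and shifting, with torchvision for training a person-recognition CNN. The parameter values and the dataset path in the comments are assumptions.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                    # flipping
    transforms.RandomAffine(degrees=15,                        # rotation
                            translate=(0.1, 0.1),              # shifting
                            scale=(0.8, 1.2)),                 # scaling
    transforms.RandomResizedCrop(size=224, scale=(0.7, 1.0)),  # random cropping
    transforms.ToTensor(),
])

# Applied on the fly to each image drawn from the drone-image dataset, e.g.:
# dataset = torchvision.datasets.ImageFolder("drone_images/", transform=augment)
# loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
```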

Keywords: human recognition, deep learning, drones, disaster mitigation

Procedia PDF Downloads 78
25876 Understanding the Impact of Climate Change on Farmer's Technical Efficiency in Mali

Authors: Christelle Tchoupé Makougoum

Abstract:

In the context of agriculture, differences across localities in terms of climate change can create systematic variation in farmers' technical efficiency. Failure to account for climate variability could lead to wrong conclusions about farmers' technical efficiency and could also bias the ranking of farmers according to their managerial performance. The literature on agricultural productivity has given little attention to this issue, whereas it is necessary for establishing to what extent climate affects farmers' efficiency. This article contributes to the previous literature in two ways. First, it proposes a new econometric model that accounts for the influence of climate change on technical efficiency in the specific area of agriculture. Second, it estimates the inefficiency due to climate change and the real managerial performance of Malian farmers. Using Mali's agricultural census data and the CRU TS3 climatic database, we implemented an adjusted stochastic frontier methodology to account for the impact of environmental factors. The results yield three main findings. First, instability in temperatures and rainfall decreases technical efficiency on average. Second, climate change modifies the classification of the farmers according to their efficiency scores. Third, although climate change is partly responsible for the deviation from the frontier, the farmers' limited capacity to combine inputs in the optimal proportions undermines efficiency even more. The study concludes that improving farmer efficiency should include fostering their resilience to climate change.

Keywords: agriculture, climate change, stochastic production function, technical efficiency

Procedia PDF Downloads 503
25875 Emotional Artificial Intelligence and the Right to Privacy

Authors: Emine Akar

Abstract:

The majority of privacy-related regulation has traditionally focused on concepts that are perceived to be well-understood or easily describable, such as certain categories of data and personal information or images. In the past century, such regulation appeared reasonably suitable for its purposes. However, technologies such as AI, combined with ever-increasing capabilities to collect, process, and store “big data”, not only require calibration of these traditional understandings but may require re-thinking of entire categories of privacy law. In the presentation, it will be explained, against the background of various emerging technologies under the umbrella term “emotional artificial intelligence”, why modern privacy law will need to embrace human emotions as potentially private subject matter. This argument can be made on a jurisprudential level, given that human emotions can plausibly be accommodated within the various concepts that are traditionally regarded as the underlying foundation of privacy protection, such as, for example, dignity, autonomy, and liberal values. However, the practical reasons for regarding human emotions as potentially private subject matter are perhaps more important (and very likely more convincing from the perspective of regulators). In that respect, it should be regarded as alarming that, according to most projections, the usefulness of emotional data to governments and, particularly, private companies will not only lead to radically increased processing and analysing of such data but, concerningly, to an exponential growth in the collection of such data. In light of this, it is also necessary to discuss options for how regulators could address this emerging threat.

Keywords: AI, privacy law, data protection, big data

Procedia PDF Downloads 77
25874 Develop a Conceptual Data Model of Geotechnical Risk Assessment in Underground Coal Mining Using a Cloud-Based Machine Learning Platform

Authors: Reza Mohammadzadeh

Abstract:

The major challenges in geotechnical engineering in underground spaces arise from uncertainties and different probabilities. The collection, collation, and collaboration of existing data, to incorporate them in analysis and design for a given prospect evaluation, would be a reliable, practical problem-solving method under uncertainty. Machine learning (ML) is a subfield of artificial intelligence in statistical science which applies different techniques (e.g., regression, neural networks, support vector machines, decision trees, random forests, genetic programming, etc.) to data to automatically learn and improve from them without being explicitly programmed, and to make decisions and predictions. In this paper, a conceptual database schema of geotechnical risks in underground coal mining, based on a cloud system architecture, has been designed. A new approach to risk assessment using a three-dimensional risk matrix supported by the level of knowledge (LoK) is proposed in this model. Subsequently, the stages of the model workflow methodology are described. In order to train the data and deploy the LoK models, an ML platform has been implemented. IBM Watson Studio, a leading data science tool and data-driven cloud integration ML platform, is employed in this study. As a use case, a data set of geotechnical hazards and risk assessments in underground coal mining was prepared to demonstrate the performance of the model, and the results have been outlined accordingly.
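
A hypothetical sketch of the three-dimensional risk matrix idea mentioned above, where the usual likelihood-times-consequence score is tempered by a level of knowledge (LoK) factor. The scales, weighting, and category thresholds are assumed purely for illustration, not taken from the paper.

```python
def risk_rating(likelihood, consequence, lok):
    """likelihood, consequence: 1 (low) .. 5 (high); lok: 0 (no data) .. 1 (well known)."""
    base = likelihood * consequence          # conventional 2D risk score (1..25)
    adjusted = base * (2 - lok)              # low knowledge inflates the score
    if adjusted >= 20:
        return adjusted, "extreme"
    if adjusted >= 12:
        return adjusted, "high"
    if adjusted >= 6:
        return adjusted, "moderate"
    return adjusted, "low"

# Example: roof-fall hazard with sparse monitoring data (low LoK)
print(risk_rating(likelihood=3, consequence=4, lok=0.3))   # -> (20.4, 'extreme')
```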

Keywords: data model, geotechnical risks, machine learning, underground coal mining

Procedia PDF Downloads 257
25873 Classification of Poverty Level Data in Indonesia Using the Naïve Bayes Method

Authors: Anung Style Bukhori, Ani Dijah Rahajoe

Abstract:

Poverty poses a significant challenge in Indonesia, requiring an effective analytical approach to understand and address this issue. In this research, we applied the Naïve Bayes classification method to examine and classify poverty data in Indonesia. The main focus is on classifying data using RapidMiner, a powerful data analysis platform. The analysis process involves data splitting to train and test the classification model. First, we collected and prepared a poverty dataset that includes various factors such as education, employment, and health. The experimental results indicate that the Naïve Bayes classification model can provide predictions regarding the risk of poverty. The use of RapidMiner in the analysis process offers flexibility and efficiency in evaluating the model's performance. The classification produces several values to serve as the standard for classifying poverty data in Indonesia using Naïve Bayes. The accuracy obtained is 40.26%, with a recall of 35.94% for the moderate class, 63.16% for the high class, and 38.03% for the low class. The precision is 58.97% for the moderate class, 17.39% for the high class, and 58.70% for the low class.
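
A Python/scikit-learn sketch of the same kind of workflow carried out in RapidMiner above: train/test split, Naïve Bayes classification, and per-class recall and precision. The feature values and class labels are synthetic placeholders standing in for the education, employment, and health indicators.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, recall_score, precision_score

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 3))                          # education, employment, health
y = rng.choice(["low", "moderate", "high"], size=600)  # poverty level labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy :", round(accuracy_score(y_test, pred), 4))
print("recall   :", dict(zip(model.classes_,
                             recall_score(y_test, pred, average=None, labels=model.classes_))))
print("precision:", dict(zip(model.classes_,
                             precision_score(y_test, pred, average=None,
                                             labels=model.classes_, zero_division=0))))
```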

Keywords: poverty, classification, naïve bayes, Indonesia

Procedia PDF Downloads 41
25872 Web Search Engine Based Naming Procedure for Independent Topic

Authors: Takahiro Nishigaki, Takashi Onoda

Abstract:

In recent years, the amount of document data has been increasing with the spread of the Internet. Many methods have been studied for extracting topics from large document data. We proposed Independent Topic Analysis (ITA) to extract topics independent of each other from large document data such as newspaper data. ITA extracts independent topics from the document data by using Independent Component Analysis. A topic extracted by ITA is represented by a set of words. However, the set of words is quite different from the topics the user imagines. For example, the top five words with high independence for one topic are as follows: Topic1 = {"scor", "game", "lead", "quarter", "rebound"}. This Topic 1 is considered to represent the topic of "SPORTS". The topic name "SPORTS" has to be attached by the user; ITA cannot name topics. Therefore, in this research, we propose a method that uses a web search engine to obtain topic names that are easy for people to understand for the sets of words given by Independent Topic Analysis. In particular, we search for the set of topical words, and the title of the homepage in the search results is taken as the topic name. We also apply the proposed method to some data and verify its effectiveness.
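
A small sketch of the Independent Topic Analysis step: apply ICA to a TF-IDF document-term matrix and list each independent component's top words, which in the proposed method would then be submitted to a web search engine so that the top result's page title can serve as the topic name. The toy corpus and the number of topics are illustrative, and the web-search step itself is only indicated in a comment.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import FastICA

docs = [
    "the team won the game with a late score in the final quarter",
    "the striker leads the league in goals and rebounds this season",
    "the central bank raised interest rates amid inflation concerns",
    "markets fell as investors weighed the rate decision and earnings",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs).toarray()

ica = FastICA(n_components=2, random_state=0)
ica.fit(X)                                     # components_ : topics x terms
terms = np.array(vec.get_feature_names_out())

for k, comp in enumerate(ica.components_):
    top = terms[np.argsort(np.abs(comp))[::-1][:5]]
    print(f"topic {k}: {list(top)}")           # query these words in a search engine for a name
```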

Keywords: independent topic analysis, topic extraction, topic naming, web search engine

Procedia PDF Downloads 108
25871 Estimating the Life-Distribution Parameters of Weibull-Life PV Systems Utilizing Non-Parametric Analysis

Authors: Saleem Z. Ramadan

Abstract:

In this paper, a model is proposed to determine the life distribution parameters of the useful life region for the PV system utilizing a combination of non-parametric and linear regression analysis for the failure data of these systems. Results showed that this method is dependable for analyzing failure time data for such reliable systems when the data is scarce.
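
A short sketch of the kind of combined non-parametric and linear-regression fit described above: median ranks give a non-parametric estimate of F(t), and a straight-line fit on the Weibull probability plot recovers the shape (beta) and scale (eta) parameters. The failure times below are illustrative, not PV field data.

```python
import numpy as np

failure_times = np.sort(np.array([3.1, 5.8, 7.4, 9.0, 11.2, 13.5, 16.1, 19.8]))  # years
n = len(failure_times)
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)                 # Bernard's median-rank approximation (non-parametric)

# Linearised Weibull CDF: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)
x = np.log(failure_times)
y = np.log(-np.log(1 - F))
beta, intercept = np.polyfit(x, y, 1)     # slope and intercept of the regression line
eta = np.exp(-intercept / beta)

print(f"shape beta ~ {beta:.2f}, scale eta ~ {eta:.1f} years")
```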

Keywords: masking, bathtub model, reliability, non-parametric analysis, useful life

Procedia PDF Downloads 547
25870 Application of Multilinear Regression Analysis for Prediction of Synthetic Shear Wave Velocity Logs in Upper Assam Basin

Authors: Triveni Gogoi, Rima Chatterjee

Abstract:

Shear wave velocity (Vs) estimation is an important approach in the seismic exploration and characterization of a hydrocarbon reservoir. There are various methods for the prediction of S-wave velocity when a recorded S-wave log is not available, but all of them are empirical mathematical models. Shear wave velocity can be estimated from P-wave velocity by applying Castagna's equation, which is the most common approach. The constants used in Castagna's equation vary for different lithologies and geological set-ups. In this study, multiple regression analysis has been used for the estimation of S-wave velocity. The EMERGE module of the Hampson-Russell software has been used for the generation of the S-wave log. Both single-attribute and multi-attribute analyses have been carried out for the generation of synthetic S-wave logs in the Upper Assam basin. The Upper Assam basin, situated in north-eastern India, is one of the most important petroleum provinces of India. The present study was carried out using four wells of the study area. Of these wells, S-wave velocity was available for three. The main objective of the present study is the prediction of shear wave velocities for wells where S-wave velocity information is not available. The three wells having S-wave velocity were first used to test the reliability of the method, and the generated S-wave log was compared with the actual S-wave log. Single-attribute analysis has been carried out for these three wells within the depth range 1700-2100 m, which corresponds to the Barail group of Oligocene age. The Barail Group is the main target zone in this study and the primary producing reservoir of the basin. A system-generated list of attributes with varying degrees of correlation appeared, and the attribute with the highest correlation was selected for the single-attribute analysis. The crossplot between the attributes shows the deviation of points from the line of best fit. The final result of the analysis was compared with the available S-wave log, which shows a good visual fit with a correlation of 72%. Next, multi-attribute analysis was carried out for the same data using all the wells within the same analysis window. A high correlation of 85% was observed between the output log from the analysis and the recorded S-wave. The almost perfect fit between the synthetic S-wave and the recorded S-wave log validates the reliability of the method. For further authentication, the generated S-wave data from the wells were tied to the seismic and correlated. A synthetic shear wave log was generated for well M2, where S-wave data are not available, and it shows a good correlation with the seismic. Neutron porosity, density, AI and P-wave velocity proved to be the most significant variables in this statistical method for S-wave generation. The multilinear regression method can thus be considered a reliable technique for the generation of shear wave velocity logs in this study.
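
A hedged sketch of the multilinear regression step: predict Vs from the four significant logs named above (neutron porosity, density, acoustic impedance, Vp) and report the correlation between predicted and recorded Vs. The log values are synthetic placeholders, not the Upper Assam well data, and this is not the EMERGE workflow itself.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 400                                             # samples in the 1700-2100 m window
vp = rng.normal(3500, 300, n)                       # P-wave velocity (m/s)
rhob = rng.normal(2.45, 0.08, n)                    # bulk density (g/cc)
nphi = rng.normal(0.22, 0.04, n)                    # neutron porosity (frac)
ai = vp * rhob * 1000                               # acoustic impedance
vs_true = 0.55 * vp - 400 * nphi + rng.normal(0, 60, n)   # assumed "recorded" Vs

X = np.column_stack([nphi, rhob, ai, vp])
model = LinearRegression().fit(X, vs_true)
vs_pred = model.predict(X)

corr = np.corrcoef(vs_pred, vs_true)[0, 1]
print(f"correlation between synthetic and recorded Vs: {corr:.2f}")
```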

Keywords: Castagna's equation, multi linear regression, multi attribute analysis, shear wave logs

Procedia PDF Downloads 208
25869 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing the resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because the patients were either drug-naïve (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the signal processing toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate schizophrenia. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions from the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to that in the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
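
A simplified sketch of one common way to compute theta-phase gamma-amplitude coupling from a single EEG channel: band-pass both bands, take the theta phase and gamma amplitude envelope via the Hilbert transform, and summarise the coupling with a mean-vector-length index. The signal is simulated, and this is a generic estimator, not necessarily the exact one used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(sig, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def tgc_index(sig, fs, theta=(4, 8), gamma=(30, 80)):
    theta_phase = np.angle(hilbert(bandpass(sig, fs, *theta)))
    gamma_amp = np.abs(hilbert(bandpass(sig, fs, *gamma)))
    # normalised mean vector length: 0 = no coupling, 1 = perfect coupling
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase))) / np.mean(gamma_amp)

fs = 250
t = np.arange(0, 20, 1 / fs)
theta_wave = np.sin(2 * np.pi * 6 * t)
gamma_wave = (1 + theta_wave) * np.sin(2 * np.pi * 40 * t)   # gamma amplitude follows theta
eeg = theta_wave + 0.3 * gamma_wave + 0.2 * np.random.randn(t.size)

print("coupled signal TGC  :", round(tgc_index(eeg, fs), 3))
print("shuffled control TGC:", round(tgc_index(np.random.permutation(eeg), fs), 3))
```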

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 126
25868 Endothelial Progenitor Cells Is a Determinant of Vascular Function and Atherosclerosis in Ankylosing Spondylitis

Authors: Ashit Syngle, Inderjit Verma, Pawan Krishan

Abstract:

Objective: Endothelial progenitor cells (EPCs) have reparative potential in overcoming endothelial dysfunction and reducing cardiovascular risk. EPC depletion has been demonstrated in the setting of established atherosclerotic disease. With this background, we evaluated whether a reduced EPC population is associated with endothelial dysfunction, subclinical atherosclerosis and inflammatory markers in ankylosing spondylitis (AS) patients without any known traditional cardiovascular risk factor. Methods: Levels of circulating EPCs (CD34+/CD133+), brachial artery flow-mediated dilatation, carotid intima-media thickness (CIMT) and inflammatory markers, i.e. erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), tumor necrosis factor (TNF)-α, interleukin (IL)-6 and IL-1, were assessed in 30 AS patients (mean age 33.41 ± 10.25; 11 female and 19 male) who fulfilled the modified New York diagnostic criteria and in 25 healthy volunteers (mean age 29.36 ± 8.64; 9 female and 16 male) matched for age and sex. Results: EPCs (CD34+/CD133+) were significantly reduced in patients with AS compared to healthy controls (0.020 ± 0.001% versus 0.040 ± 0.010%, p<0.001). Endothelial function (7.35 ± 2.54 versus 10.27 ± 1.73, p=0.002), CIMT (0.63 ± 0.01 versus 0.35 ± 0.02, p < 0.001) and inflammatory markers were also significantly (p < 0.01) altered compared to healthy controls. Specifically, CD34+/CD133+ cells were inversely correlated with CRP and TNF-α in multivariate analysis, and endothelial dysfunction was positively correlated with the reduced number of EPCs. Conclusion: Depletion of the EPC population is an independent predictor of endothelial dysfunction and early atherosclerosis in AS patients and may provide additional information beyond conventional risk factors and inflammatory markers.

Keywords: endothelial progenitor cells, atherosclerosis, ankylosing spondylitis, cardiovascular

Procedia PDF Downloads 373
25867 Preliminary Design of Maritime Energy Management System: Naval Architectural Approach to Resolve Recent Limitations

Authors: Seyong Jeong, Jinmo Park, Jinhyoun Park, Boram Kim, Kyoungsoo Ahn

Abstract:

Energy management in the maritime industry is required both for economic reasons and for conformity with new legislative actions taken by the International Maritime Organization (IMO) and the European Union (EU). In response, various performance monitoring methodologies and data collection practices have been examined by different stakeholders. While many assorted advancements in operation and technology are applicable, their adoption in the shipping industry remains limited. This slow uptake can be attributed to many different barriers, such as data analysis problems, misreported data, and feedback problems. This study presents a conceptual design of an energy management system (EMS) and proposes a methodology to resolve these limitations (e.g., data normalization using naval architectural evaluation, management of misrepresented data, and feedback from shore to ship through management of the performance analysis history). We expect this system to enable even short-term charterers to assess ship performance properly and to implement sustainable fleet control.

Keywords: data normalization, energy management system, naval architectural evaluation, ship performance analysis

Procedia PDF Downloads 439
25866 Requests and Responses to Requests in Jordanian Arabic

Authors: Raghad Abu Salma, Beatrice Szczepek Reed

Abstract:

Politeness is one of the most researched areas in pragmatics, as it is key to interpersonal interactional phenomena. Many studies, particularly in linguistics, have focused on developing politeness theories and exploring the linguistic devices used in communication to construct and establish social norms. However, the question of what constitutes polite language remains a point of ongoing debate. Prior research primarily examined politeness in English and its native-speaking communities, oversimplifying the notion of politeness and associating it with surface-level language use. There is also a dearth of literature on politeness in Arabic, particularly in the context of Jordanian Arabic. Prior research investigating politeness in Arabic makes generalized claims about politeness in Arabic without taking linguistic variation into account or providing empirical evidence. The proposed research aims to explore how Jordanian Arabic influences its first-language users in making and responding to requests, exploring participants' perceptions of politeness and the linguistic choices they make in their interactions. The study focuses on Jordanian expats living in London, UK, providing an intercultural perspective that prior research does not consider. This study employs a mixed-methods approach combining discourse completion tasks (DCTs) with semi-structured interviews. While DCTs provide insight into participants' linguistic choices, semi-structured interviews glean insight into participants' perceptions of politeness and the linguistic choices shaped by cultural norms and diverse experiences. This paper discusses previous research on politeness in Arabic, identifies research gaps, and discusses different methods for data collection. This paper also presents preliminary findings from the ongoing study.

Keywords: politeness, pragmatics, Jordanian Arabic, intercultural politeness

Procedia PDF Downloads 65
25865 Managing Expatriates' Return: Repatriation Practices in a Sample of Firms in Portugal

Authors: Ana Pinheiro, Fatima Suleman

Abstract:

The literature has revealed strong awareness among companies with regard to expatriation, but the issues associated with the repatriation of employees after an international assignment have been overlooked. Repatriation is one of the most challenging human resource practices; it affects how companies benefit from acquired skills and high-potential employees and how they gain competitive advantage through the networks developed during expatriation. However, the empirical evidence achieved so far suggests that expatriates have been disappointed because companies lack an effective repatriation strategy. Repatriates' professional and emotional needs are often unrecognized, while repatriation is perceived as a non-issue by companies. The underlying assumption is that the return to the parent company, and to the original country, culture and language, does not demand any particular support. Unfortunately, this basic view has non-negligible consequences for repatriates, especially on expatriate retention and turnover rates after expatriation. The goal of our study is to examine the specific policies and practices adopted by companies to support employees after an international assignment. We assume that expatriation is a process which ends with repatriation. The latter is as crucial an issue as expatriation and requires due attention through the appropriate design of human resource management policies and tools. For this purpose, we use data from qualitative research based on interviews with a sample of firms operating in Portugal. We attempt to compare how firms accommodate the concerns with repatriation in their policies and practices. Therefore, the interviews collect data on both the expatriation and repatriation processes, namely the selection and skills of candidates for expatriation, training, mentoring, communication and pay policies. The Portuguese labor market seems to be an interesting case study for mainly two reasons. On the one hand, the Portuguese Government is encouraging companies to internationalize in the context of an external market-oriented growth model. On the other hand, expatriation is being perceived as a job opportunity in the context of high unemployment rates among both skilled and non-skilled workers. This is ongoing research, and the data collected so far indicate that companies follow the pattern described in the literature. The interviewed companies recognize the higher relevance of the repatriation process compared with expatriation but disregard specific human resource policies. They have perceived that unfavorable labor market conditions discourage mobility across companies. It should be stressed that companies underline that employees emphasize the relevance of stable jobs and attach far less importance to career development and other benefits after expatriation. However, there are still cases of turnover and difficulties of retention. Managers report non-negligible cases of turnover associated with the lack of effective repatriation programs and the non-recognition of good performance. Repatriates seem to have acquired an entrepreneurial spirit and skills and often create their own companies. These results suggest that, even in the context of worsening labor market conditions, there should be greater awareness of the need to retain talented, experienced and highly skilled employees. Ultimately, other companies poach these invaluable assets, while internationalized companies risk becoming mere training providers.

Keywords: expatriates, expatriation, international management, repatriation

Procedia PDF Downloads 325
25864 Geospatial Data Complexity in Electronic Airport Layout Plan

Authors: Shyam Parhi

Abstract:

The Airports GIS program collects airport data, validates and verifies it, and stores it in a specific database. Airports GIS allows authorized users to submit changes to airport data. The verified data is used to develop several engineering applications. One of these applications is the electronic Airport Layout Plan (eALP), whose primary aim is to move from a paper to a digital form of the ALP. The first phase of development of the eALP was completed recently, and it was tested for a few pilot program airports across different regions. We conducted a gap analysis and noticed that a lot of development work is needed to fine-tune at least six mandatory sheets of the eALP. It is important to note that a significant amount of programming is needed to move from out-of-the-box ArcGIS to a highly customized ArcGIS, which will be discussed. The capability of the ArcGIS viewer to display essential features such as runways, taxiways, and the perpendicular distance between them will be discussed. An enterprise-level workflow which incorporates the coordination process among different lines of business will be highlighted.

Keywords: geospatial data, geology, geographic information systems, aviation

Procedia PDF Downloads 402
25863 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company

Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze

Abstract:

As a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation, rather than by controlling those sources, Taguchi Robust Design entails the design of ideal goods by developing a product that has minimal variance in its characteristics and also meets the desired exact performance. This paper examined the concept of this manufacturing approach and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that only defects, excess inventory, and over-production were the few wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The selection of the Taguchi L9 orthogonal array, which is based on the four parameters and the three levels of variation for each parameter, revealed, with a range of 2.17, that waiting is the major waste that the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must also be reduced to the barest minimum. After proposing -33.84 as the highest optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocated the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concluded by recommending SMED in order to drastically reduce set-up time, which leads to unnecessary waiting.
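
A minimal sketch of the smaller-the-better signal-to-noise ratio used in Taguchi analysis, applied to hypothetical waiting-time observations for a few L9 runs; the numbers are illustrative, not the company's data.

```python
import numpy as np

def sn_smaller_is_better(y):
    """S/N = -10 * log10(mean(y^2)); a larger (less negative) value is better."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

# Replicated waiting-time measurements (minutes) for three hypothetical L9 runs
runs = {"run 1": [55, 48, 52], "run 2": [41, 44, 39], "run 3": [63, 60, 66]}
for name, obs in runs.items():
    print(name, "S/N =", round(sn_smaller_is_better(obs), 2), "dB")
```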

Keywords: lean production system, single minute exchange of dies, signal to noise ratio, Taguchi robust design, waste

Procedia PDF Downloads 115