Search results for: secure hashing algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4185

2175 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble

Authors: Jaehong Yu, Seoung Bum Kim

Abstract:

Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate the feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, thanks to the multiple-k ensemble idea, FRRM does not require the true number of clusters to be determined in advance. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. The experimental results demonstrated that the proposed FRRM outperformed the competing methods.
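The abstract does not give FRRM's exact scoring rule, so the following is only a minimal sketch of the general idea it describes: cluster random feature subspaces with several values of k and average per-feature scores into an ensemble importance score. The ANOVA-style per-feature score, the subspace fraction, and the k values are illustrative assumptions, and scikit-learn's KMeans stands in for whatever clustering the authors used.

```python
import numpy as np
from sklearn.cluster import KMeans

def ensemble_feature_ranking(X, n_subspaces=50, k_values=(2, 3, 4, 5),
                             subspace_frac=0.5, seed=0):
    """Illustrative random-subspace, multiple-k feature ranking (not the exact FRRM scoring)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(d)
    counts = np.zeros(d)
    for _ in range(n_subspaces):
        feats = rng.choice(d, size=max(2, int(subspace_frac * d)), replace=False)
        k = int(rng.choice(k_values))        # multiple-k: no single "true" number of clusters assumed
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(0, 10**6))).fit_predict(X[:, feats])
        for f in feats:
            # Score the feature by between-cluster vs. within-cluster variance (ANOVA-style F ratio).
            groups = [X[labels == c, f] for c in range(k)]
            groups = [g for g in groups if g.size > 0]
            if len(groups) < 2:
                continue
            overall = X[:, f].mean()
            between = sum(g.size * (g.mean() - overall) ** 2 for g in groups) / (len(groups) - 1)
            within = sum(((g - g.mean()) ** 2).sum() for g in groups) / max(n - len(groups), 1)
            scores[f] += between / (within + 1e-12)
            counts[f] += 1
    return scores / np.maximum(counts, 1)    # ensemble importance score per feature

# Example: rank the features of a random dataset (indices from most to least important)
X = np.random.default_rng(1).normal(size=(200, 10))
print(np.argsort(-ensemble_feature_ranking(X)))
```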

Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking

Procedia PDF Downloads 328
2174 Modified Weibull Approach for Bridge Deterioration Modelling

Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight

Abstract:

State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values, and hence lead to invalid future condition prediction or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or subjective function types used for regression analysis. Furthermore, a set of separate functions for each condition state with age cannot be directly derived by using a Markov model for a given bridge element group, which, however, is of interest to industrial partners. This paper presents a new approach for generating Homogeneous SMDM model output, namely, the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings Algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict the conditions at network level accurately but also to capture the model uncertainties with a given confidence interval.
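As a minimal sketch of the Bayesian estimation machinery mentioned above, the snippet below runs a generic random-walk Metropolis-Hastings sampler for the two parameters of a Weibull model given observed times. The flat priors, the likelihood on synthetic "deterioration times", the proposal step size, and the burn-in choice are all illustrative assumptions and do not reproduce the paper's condition-state model.

```python
import numpy as np

def log_posterior(shape, scale, t):
    """Weibull log-likelihood with flat priors on (shape, scale) > 0 (illustrative)."""
    if shape <= 0 or scale <= 0:
        return -np.inf
    z = t / scale
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape)

def metropolis_hastings(t, n_iter=20000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, np.mean(t)])            # initial (shape, scale)
    lp = log_posterior(*theta, t)
    samples = []
    for _ in range(n_iter):
        prop = theta + rng.normal(scale=step, size=2)    # symmetric random-walk proposal
        lp_prop = log_posterior(*prop, t)
        if np.log(rng.uniform()) < lp_prop - lp:         # accept with probability min(1, ratio)
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Example: recover the parameters of synthetic Weibull "times to condition transition"
t = np.random.default_rng(1).weibull(2.0, size=300) * 10.0   # true shape = 2, scale = 10
chain = metropolis_hastings(t)
print(chain[5000:].mean(axis=0))   # posterior means after burn-in
```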

Keywords: bridge deterioration modelling, modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models

Procedia PDF Downloads 720
2173 Graph-Theoretical Construction of Discrete-Time Share Price Paths from Matroids

Authors: Min Wang, Sergey Utev

Abstract:

The lessons from the 2007-09 global financial crisis have driven scientific research that considers the design of new methodologies and financial models in the global market. A quantum mechanics approach has been introduced to model the unpredictable stock market. One famous quantum tool is the Feynman path integral method, which was used to model insurance risk by Tamturk and Utev and adapted to formalize path-dependent option pricing by Hao and Utev. The research is based on the path-dependent calculation method, which is motivated by the Feynman path integral method. Path calculation can be studied in two ways: one is labelling, and the other is computational. Labelling is a part of the representation of objects, and generating functions can provide many different ways of representing share price paths. In this paper, recent work on the graph-theoretical construction of individual share price paths via matroids is presented. Firstly, matroid theory and the relationship between lattice path matroids and Tutte polynomials are reviewed, and a way to connect points in a lattice path matroid with Tutte polynomials is suggested. Secondly, it is found that a general binary tree can be validly constructed from a connected lattice path matroid, rather than from a general lattice path matroid. Lastly, it is suggested that there is a way to represent share price paths via a general binary tree, and an algorithm is developed to construct share price paths from general binary trees. A relationship is also provided between lattice integer points and Tutte polynomials of a transversal matroid. Using this connection together with the algorithm, a share price path can be constructed from a given connected lattice path matroid.
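The abstract's algorithm for turning binary trees into share price paths is not spelled out; the sketch below only illustrates the basic idea of reading root-to-leaf routes of a general binary tree as sequences of up and down price moves. The tree encoding (nested tuples), the starting price, and the up/down factors are arbitrary choices made for illustration.

```python
# A node is (left_subtree, right_subtree); a leaf is None.
def price_paths(tree, s0=100.0, up=1.1, down=0.9):
    """Enumerate discrete-time price paths by reading left = down move, right = up move."""
    if tree is None:
        return [[s0]]
    left, right = tree
    paths = []
    for p in price_paths(left, s0 * down, up, down):
        paths.append([s0] + p)       # branch downwards first
    for p in price_paths(right, s0 * up, up, down):
        paths.append([s0] + p)
    return paths

# Example: a small binary tree with three levels
tree = ((None, (None, None)), (None, None))
for path in price_paths(tree):
    print([round(s, 2) for s in path])
```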

Keywords: combinatorial construction, graphical representation, matroid, path calculation, share price, Tutte polynomial

Procedia PDF Downloads 128
2172 ChatGPT as a “Foreign Language Teacher”: Attitudes of Tunisian English Language Learners

Authors: Leila Najeh Bel'Kiry

Abstract:

Artificial intelligence (AI) brought about many language robots, with ChatGPT being the most sophisticated thanks to its human-like linguistic capabilities. This aspect raises the idea of using ChatGPT in learning foreign languages. Starting from the premise that positions ChatGPT as a mediator between the language and the learner, functioning as a “ghost teacher” offering a peaceful and secure learning space, this study aims to explore the attitudes of Tunisian students of English towards ChatGPT as a “Foreign Language Teacher”. Forty-five students, in their third year of fundamental English at Tunisian universities and higher institutes, completed a Likert scale questionnaire consisting of thirty-two items and covering various aspects of language (phonology, morphology, syntax, semantics, and pragmatics). A scale ranging from 'Strongly Disagree' through 'Disagree,' 'Undecided,' and 'Agree' to 'Strongly Agree' was used to assess the attitudes of the participants towards the integration of ChatGPT in learning a foreign language. Results indicate generally positive attitudes towards the reliance on ChatGPT in learning foreign languages, particularly for some components of language like syntax, phonology, and morphology. However, learners show insecurity towards ChatGPT when it comes to pragmatics and semantics, where the artificial model may fail when dealing with deeper contextual and nuanced language levels.

Keywords: artificial language model, attitudes, foreign language learning, ChatGPT, linguistic capabilities, Tunisian English language learners

Procedia PDF Downloads 52
2171 Prospects of Oman as a Destination for Halal Tourism

Authors: Asad Rehman

Abstract:

Although a vast majority relates the concept of ‘halal’, or what is permissible in Islam, to food only, the halal industry covers many sectors such as food, fashion, transport, finance and even tourism. Halal tourism is not just about halal food; it is also about the overall experience, which is compliant with Shariah (Islamic jurisprudence). Oman has a plethora of natural beauty and many places of interest for all types of tourists. It is one of the most secure and peaceful countries in the world. Having a well-developed infrastructure, Oman is ready to take its tourism to new heights. The ever-hospitable Omanis are proud of their rich cultural and historical heritage. Thus, Oman appears to have all it takes to become a prime destination for halal tourism. The objective of this study is to assess the prospects of Oman as a destination for halal tourism. Based on interviews with experts such as academicians, tourism professionals, officials and clerics, Oman’s competitiveness as a destination for halal tourism was assessed by developing a Strengths, Weaknesses, Opportunities and Threats (SWOT) profile. The findings of the SWOT were compared with data from the Global Muslim Travel Index (GMTI) from 2014 to 2018. Based on the analysis, Oman is found to have the right mix of environment and enabling services for halal tourism. However, it is found lacking in public transport, communication and customer outreach. Oman is also found to be losing its rank among the top 10 destinations for halal tourism to close competitors like Qatar, Bahrain, Morocco, etc. The concerned authorities need to make conscious efforts to resolve these issues, as it becomes imperative for Oman to revamp its tourism strategy.

Keywords: destination, halal, Islam, SWOT, tourism

Procedia PDF Downloads 143
2170 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis

Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan

Abstract:

Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated over social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of Big Data Technology into the tourism industry will allow companies to determine where their customers have been and what they like. This information can then be used by businesses, such as those in charge of managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. On the technical side, natural language is processed by analysing the sentiment features of online reviews from tourists, and we then propose an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. We constructed a web review database using a crawler and web scraping technique for experimental validation to evaluate the effectiveness of our methodology. The sentences were first classified through the VADER and RoBERTa models to get the polarity of the reviews. In this paper, we studied feature extraction methods, such as Count Vectorization and TF-IDF Vectorization, and implemented a Convolutional Neural Network (CNN) classifier for the sentiment analysis to decide whether the tourist’s attitude towards the destinations is positive, negative, or simply neutral, based on the review text posted online. The results demonstrated that, after pre-processing and cleaning the dataset, the CNN algorithm achieved an accuracy of 96.12% for positive and negative sentiment analysis.
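As a minimal sketch of the feature-extraction step described above (Count and TF-IDF vectorization of review text followed by a classifier), the example below uses scikit-learn. A logistic regression stands in for the paper's CNN purely to keep the example short, and the tiny review list and labels are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up mini corpus of travel reviews with sentiment labels (1 = positive, 0 = negative)
reviews = ["The beach was beautiful and the hotel staff were friendly",
           "Terrible service, dirty rooms and overpriced food",
           "Loved the museum tour, highly recommend this destination",
           "The flight was delayed and the guide was rude"]
labels = [1, 0, 1, 0]

# Count vectorization vs. TF-IDF vectorization of the same corpus
counts = CountVectorizer().fit_transform(reviews)
tfidf = TfidfVectorizer().fit_transform(reviews)
print(counts.shape, tfidf.shape)

# TF-IDF features + a simple classifier (the paper uses a CNN; logistic regression here for brevity)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)
print(model.predict(["friendly staff and a beautiful beach"]))
```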

Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis

Procedia PDF Downloads 80
2169 E-Government Adoption in Zimbabwe's Local Government: Understanding the Influence of Attitudes and Perceptions of Residents in Selected Cases

Authors: Ricky Munyaradzi Mukonza

Abstract:

E-government literature continues to grow as scholars and practitioners endeavour to understand this phenomenon. Many facets of e-government have been written about, including its definition, adoption, and implementation. However, more still needs to be known, particularly in relation to how e-government is being adopted in different contexts. There could be many context-specific factors that have a bearing on e-government adoption, and in this paper the focus is on attitudes and perceptions. The association between usage of e-government services and various perceptions, such as ease of use, transparency, security, ease of understanding, communication, reliability, relevancy, perceived usefulness and perceived trust, is examined. Within the Zimbabwean context, and in particular the country’s local government sphere, such a study has not been done. The main aim of the paper is therefore to establish perceptions and attitudes towards e-government services among residents in two of Zimbabwe’s local authorities. In terms of research methodology, the paper is based on a Mixed Methods Approach (MMA) to collect and analyse data, giving the researcher a holistic picture of the phenomenon being investigated. A sample of 785 residents from the two local authorities was used, and these were selected using a combination of cluster and purposive sampling methods. A key finding in this paper is that a majority of respondents who have had the opportunity to use e-government services perceive the services to be easy to use, transparent, secure, easy to understand, reliable, relevant, useful and trustworthy. The paper, therefore, makes an important contribution on the relationship between residents’ perceptions and attitudes and e-government usage within the chosen cases.

Keywords: adoption, attitudes, e-government, perceptions

Procedia PDF Downloads 305
2168 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure, which will enable us to model it more easily and to propose a possible algorithm for its exploration.
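The properties listed above (power-law degree distribution, low clustering coefficient, small average distance) are exactly those reproduced by preferential-attachment random graphs. The short sketch below, using networkx, generates such a graph and measures those properties; it illustrates the class of probabilistic graph models referred to, not the authors' specific model, and the graph size is arbitrary.

```python
import networkx as nx

# Barabási-Albert preferential attachment: each new node links to existing nodes
# with probability proportional to their degree, yielding a power-law degree distribution.
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

degrees = [d for _, d in G.degree()]
print("max degree:", max(degrees), "mean degree:", sum(degrees) / len(degrees))
print("average clustering coefficient:", nx.average_clustering(G))
print("average shortest path length:", nx.average_shortest_path_length(G))  # small-world distance
```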

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 267
2167 Environmental Strategies Towards Sustainable Development in Nigeria

Authors: Sirajoddeen Al-Ameen

Abstract:

Researchers seek to introduce development leading to technologies that address environmental problems and to learn how to interact with stakeholders, managers, and policymakers for appropriate actions. One of the greatest strategies that African countries need to consider in realizing sustainable development is effective, efficient, credible, and lasting environmental sustainability, ensuring that future generations have access to natural resources and can live in a better way. Therefore, a coordinated set of participatory and continuously improving processes of analysis, capacity, planning, and investment is needed to integrate the social and environmental objectives of society, and this is not given priority in Nigeria. Environmental sustainability is a field through which people can understand the natural environment and public works for sustainable development. Sustainable development requires shifts from ordinary ways of doing things to modern ways of executing activities, ranging from low to high productivity, and the creation and adoption of new strategies, new skills, and knowledge. It ensures a developed world with a secure and healthy environment for all: human beings, animals, and plants alike. This paper reviews various literature sources to ascertain potential strategies for environmental and sustainable development reform, using the content analysis method to discuss environmental strategies towards sustainable development in Nigeria. The objective of this paper is to enable Nigerians to understand and have an orientation on how to manage environmental resources and avoid environmental impacts on the ecosystem, and also to find sustainable solutions for environmental issues without compromising economic development.

Keywords: development, environment, strategies, sustainable

Procedia PDF Downloads 102
2166 Neural Network Based Control Algorithm for Inhabitable Spaces Applying Emotional Domotics

Authors: Sergio A. Navarro Tuch, Martin Rogelio Bustamante Bello, Leopoldo Julian Lechuga Lopez

Abstract:

In recent years, Mexico’s population has seen a rise in different negative physiological and mental states. Two main consequences of this problem are deficient work performance and high levels of stress, generating an important impact on a person’s physical, mental and emotional health. Several approaches, such as the use of audiovisual stimuli to induce emotions and modify a person’s emotional state, can be applied in an effort to decrease these negative effects. With the use of different non-invasive physiological sensors such as EEG, luminosity and face recognition, we gather information on the subject’s current emotional state. In a controlled environment, a subject is shown a series of selected images from the International Affective Picture System (IAPS) in order to induce a specific set of emotions and obtain information from the sensors. The raw data obtained are statistically analyzed in order to filter only the specific groups of information that relate to the subject’s emotions and the current values of the physical variables in the controlled environment, such as luminosity, RGB light color, temperature, oxygen level and noise. Finally, a neural network-based control algorithm is given the data obtained in order to feed back into the system and automate the modification of the environment variables and the audiovisual content shown, so that these changes can positively alter the subject’s emotional state. During the research, it was found that the light color was directly related to the type of impact generated by the audiovisual content on the subject’s emotional state. Red illumination increased the impact of violent images, while green illumination along with relaxing images decreased the subject’s levels of anxiety. Specific differences between men and women were found as to which type of images generated a greater impact in either gender. The population sample was mainly constituted by college students, whose data analysis showed a decreased sensitivity to violence towards humans. Despite the early stage of the control algorithm, the results obtained from the population sample give us a better insight into the possibilities of emotional domotics and the applications that can be created towards the improvement of performance in people’s lives. The objective of this research is to create a positive impact with the application of technology to everyday activities; nonetheless, an ethical problem arises since this can also be applied to control a person’s emotions and shift their decision making.

Keywords: data analysis, emotional domotics, performance improvement, neural network

Procedia PDF Downloads 133
2165 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher and the slope and aspect much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
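A minimal sketch of the core idea described above: fit a plane by least squares over windows of doubling size, keep the slope and residual variance at each scale, and report the slope from the scale with minimum variance. For brevity, the sketch recomputes each window fit directly rather than reusing additive partial sums, so it does not reproduce the paper's logarithmic-cost aggregation; the window sizes and the synthetic DEM are illustrative assumptions.

```python
import numpy as np

def slope_at_min_variance(dem, cell=1.0, max_level=5):
    """Per-pixel slope chosen at the window size (2, 4, 8, ...) with minimum residual variance."""
    n, m = dem.shape
    best_slope = np.zeros((n, m))
    best_var = np.full((n, m), np.inf)
    for level in range(1, max_level + 1):
        w = 2 ** level
        for i in range(0, n - w + 1, w):              # non-overlapping windows at this scale
            for j in range(0, m - w + 1, w):
                z = dem[i:i + w, j:j + w]
                yy, xx = np.mgrid[0:w, 0:w]
                A = np.column_stack([xx.ravel() * cell, yy.ravel() * cell, np.ones(w * w)])
                coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)  # plane fit z = a*x + b*y + c
                var = (z.ravel() - A @ coef).var()
                slope = np.hypot(coef[0], coef[1])                    # slope magnitude
                win = (slice(i, i + w), slice(j, j + w))
                update = var < best_var[win]
                best_var[win] = np.where(update, var, best_var[win])
                best_slope[win] = np.where(update, slope, best_slope[win])
    return best_slope

# Example on a synthetic tilted surface with added noise
x, y = np.mgrid[0:64, 0:64]
dem = 0.05 * x + 0.02 * y + np.random.default_rng(0).normal(scale=0.1, size=(64, 64))
print(slope_at_min_variance(dem).mean())   # should be close to hypot(0.05, 0.02) ~ 0.054
```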

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 122
2164 Determinants of Aggregate Electricity Consumption in Ghana: A Multivariate Time Series Analysis

Authors: Renata Konadu

Abstract:

In Ghana, electricity has become the main form of energy on which all sectors of the economy rely for their businesses. Therefore, as the economy grows, the demand and consumption of electricity also grow alongside it due to this heavy dependence. However, since the supply of electricity has not increased to match the demand, there have been frequent power outages and load shedding, affecting business performance. To solve this problem and advance policies to secure electricity in Ghana, it is imperative that the factors that cause consumption to increase be analysed by considering the three classes of consumers: residential, industrial and non-residential. The main argument, however, is that export of electricity to neighbouring countries should be included in the electricity consumption model and considered one of the significant factors that can decrease or increase consumption. The author made use of multivariate time series data from 1980-2010 and econometric models such as Ordinary Least Squares (OLS) and the Vector Error Correction Model. Findings show that GDP growth, urban population growth, electricity exports and industry value added to GDP were cointegrated. The results also showed that there is unidirectional causality from electricity exports, GDP growth, and industry value added to GDP to electricity consumption in the long run. However, in the short run, directional causality was found among all the variables and electricity consumption. The results have useful implications for energy policymakers, especially with regard to electricity consumption, demand, and supply.
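A minimal sketch of the cointegration and VECM workflow the abstract describes, using statsmodels on synthetic data. The variable names mirror the abstract (electricity consumption, GDP growth, urban population growth, exports, industry value added), but the generated series, lag order, deterministic term, and cointegration rank are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Synthetic yearly data standing in for the 1980-2010 series used in the paper
rng = np.random.default_rng(0)
n = 31
trend = np.cumsum(rng.normal(0.5, 1.0, n))                  # shared stochastic trend
data = pd.DataFrame({
    "electricity_consumption": trend + rng.normal(0, 0.5, n),
    "gdp_growth": 0.8 * trend + rng.normal(0, 0.5, n),
    "urban_pop_growth": 0.5 * trend + rng.normal(0, 0.5, n),
    "electricity_exports": 0.3 * trend + rng.normal(0, 0.5, n),
    "industry_value_added": 0.6 * trend + rng.normal(0, 0.5, n),
})

# Johansen test for the number of cointegrating relations
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1)
print("95% critical values:", joh.cvt[:, 1])

# Vector error correction model with one cointegrating relation
res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(res.summary())
```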

Keywords: electricity consumption, energy policy, GDP growth, vector error correction model

Procedia PDF Downloads 425
2163 Institutional Structures Shaping Female Representation in Politics in Pakistan

Authors: Neelum Maqsood

Abstract:

This paper is a study of how institutional structures shape the policy-making activities of female legislators. The literature in this area indicates that if there is an institution created by men to secure elite interests, women will face constraints in legislative activities. This paper will analyze the institutional setting in Pakistan and document the conditions women face that either restrict or enable them in representing the general interests of other women. The main experimental design depends on the variation in international scrutiny that Pakistan faces in two different time periods, classified as high international scrutiny and low international scrutiny. A high international scrutiny period is one where Pakistan comes under the international lens because of a domestic event that has international ramifications, for example, in terms of gender equality. The argument is that women parliamentarians receive different treatment in periods of high international scrutiny. As Pakistan comes under scrutiny, women will be more active in their legislative activities than in periods of low international scrutiny, as male parliamentarians will be less likely to influence or restrain women’s activities. Using this variation, the trends in memberships and support functions given to women in these two time periods will be studied. The second variation will comprise the analysis of male and female assignments, training, and funding on general seats across time, which will require data collection over a period of 12-15 years, including the war years when Pakistan was under high international scrutiny.

Keywords: female representation, gender equality, democratic institutions, quota seats

Procedia PDF Downloads 80
2162 Three Tier Indoor Localization System for Digital Forensics

Authors: Dennis L. Owuor, Okuthe P. Kogeda, Johnson I. Agbinya

Abstract:

Mobile localization has attracted a great deal of attention recently due to the introduction of wireless networks. Although several localization algorithms and systems have been implemented and discussed in the literature, very few researchers have exploited the gap that exists between indoor localization, tracking, external storage of location information and outdoor localization for the purpose of digital forensics during and after a disaster. The contribution of this paper lies in the implementation of a robust system that is capable of locating and tracking mobile device users and storing location information in the cloud for both indoor and partially outdoor environments. The system can be used during a disaster to track and locate mobile phone users. The developed system is a mobile application built based on Android, Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript and MATLAB for Android mobile users. Using the Waterfall model of software development, we have implemented a three-level system that is able to track, locate and store mobile device information in a secure database (cloud) on an almost real-time basis. The outcome of the study showed that the developed system is efficient with regard to tracking and locating mobile devices. The system is also flexible, i.e., it can be used in any building with few adjustments. Finally, the system is accurate both indoors and outdoors in terms of locating and tracking mobile devices.
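The keywords name fingerprinting as the localization technique. A minimal sketch of Wi-Fi RSSI fingerprinting with a k-nearest-neighbours match is shown below; the access points, the tiny radio map, and the averaging rule are illustrative assumptions, not the implemented system (which runs on Android/PHP with MATLAB processing and cloud storage).

```python
import numpy as np

# Offline radio map: RSSI (dBm) from 3 access points at known (x, y) positions inside a building
fingerprints = np.array([[-40, -70, -80],
                         [-55, -60, -75],
                         [-70, -50, -65],
                         [-80, -45, -55]], dtype=float)
positions = np.array([[0.0, 0.0],
                      [5.0, 0.0],
                      [5.0, 5.0],
                      [0.0, 5.0]])

def locate(rssi, k=2):
    """Estimate position as the average of the k nearest fingerprints in signal space."""
    dists = np.linalg.norm(fingerprints - np.asarray(rssi, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    return positions[nearest].mean(axis=0)

# Online phase: a new RSSI reading from the tracked device
print(locate([-50, -62, -76]))   # estimated (x, y), which could then be pushed to cloud storage
```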

Keywords: indoor localization, digital forensics, fingerprinting, tracking and cloud

Procedia PDF Downloads 323
2161 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study deals with using Multiplicative Weight Update within artificial intelligence and machine learning to create models that can diagnose skin cancer using microscopic images of cancer samples. In this study, the multiplicative weight update method is used to combine the predictions of multiple models in an attempt to acquire more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to look for patterns and label unseen scans as either benign or malignant. These models are utilized in a multiplicative weight update algorithm, which takes into account the precision and accuracy of each model through each successive guess and applies weights to its guesses accordingly. These weighted guesses are then analyzed together to try to obtain the correct predictions. The research hypothesis for this study stated that there would be a significant difference in accuracy between the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%. The CNN model had an accuracy of 85.30%. The Logistic Regression model had an accuracy of 79.09%. Using Multiplicative Weight Update, the algorithm received an accuracy of 72.27%. The conclusion drawn was that there was a significant difference in accuracy between the three models and the Multiplicative Weight Update system. It was concluded that a CNN model would be a better option for this problem than a Multiplicative Weight Update system. This is due to the possibility that Multiplicative Weight Update is not effective in a binary setting where there are only two possible classifications. In a categorical setting with multiple classes and groupings, a Multiplicative Weight Update system might become more proficient, as it takes into account the strengths of multiple different models to classify images into multiple categories rather than only two categories, as shown in this study. This experimentation and computer science project can help to create better algorithms and models for the future of artificial intelligence in the medical imaging field.
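A minimal sketch of the multiplicative weight update rule described above, combining the predictions of three classifiers through a weighted vote and penalizing the models that guessed wrong. The predictions here are random stand-ins with roughly the accuracies reported above, and the learning rate is an assumption, since the study's exact weighting scheme is not given.

```python
import numpy as np

def mwu_ensemble(predictions, labels, eta=0.3):
    """Combine expert predictions with multiplicative weights.

    predictions: (n_models, n_samples) array of 0/1 guesses (e.g. SVM, CNN, logistic regression).
    labels:      (n_samples,) true 0/1 labels, revealed after each guess.
    """
    n_models, n_samples = predictions.shape
    w = np.ones(n_models)
    combined = np.zeros(n_samples, dtype=int)
    for t in range(n_samples):
        # Weighted majority vote of the experts
        vote = np.dot(w, predictions[:, t]) / w.sum()
        combined[t] = int(vote >= 0.5)
        # Multiplicatively penalize experts that guessed wrong on this sample
        wrong = predictions[:, t] != labels[t]
        w *= (1.0 - eta) ** wrong
    return combined, w

# Example with random stand-in predictions from three models
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
preds = np.vstack([np.where(rng.random(200) < p, labels, 1 - labels) for p in (0.78, 0.85, 0.79)])
combined, weights = mwu_ensemble(preds, labels)
print("ensemble accuracy:", (combined == labels).mean(), "final weights:", weights.round(3))
```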

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 71
2160 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training

Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto

Abstract:

In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer technology and network technology, to civil engineering and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts, etc., is a type of work that has been carried out by skilled workers based on their years of experience and is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures that are designed to protect coasts, beaches, and so on from erosion by reducing the energy of ocean waves. Wave-dissipating blocks usually weigh more than 1 t and are installed by being suspended by a crane, so it would be time-consuming and costly for inexperienced workers to train on-site. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity. Porosity is defined as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the ideal final structure. Using the evaluation of porosity, the simulator can determine how well the user is able to install the blocks. The voxelization technique is used to calculate the porosity of the structure, simplifying the calculations. Other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and compare the user-demonstrated block installation with the installation solved by the algorithm.
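A minimal sketch of the voxelization-based porosity evaluation described above: the ideal final structure and the placed blocks are rasterized onto the same voxel grid and the ratio of block volume to ideal structure volume is reported, following the definition of porosity given in the abstract. Block shapes are reduced to axis-aligned boxes purely for illustration; the simulator itself handles full 3D block geometry, raycasting and overlap tests in Unity.

```python
import numpy as np

def porosity_as_defined(ideal_mask, blocks, grid_shape, voxel=0.5):
    """Ratio of placed block volume inside the structure to the ideal structure's volume."""
    occupied = np.zeros(grid_shape, dtype=bool)
    for (x0, y0, z0, dx, dy, dz) in blocks:          # each block as an axis-aligned box, in metres
        i0, j0, k0 = int(x0 / voxel), int(y0 / voxel), int(z0 / voxel)
        i1, j1, k1 = int((x0 + dx) / voxel), int((y0 + dy) / voxel), int((z0 + dz) / voxel)
        occupied[i0:i1, j0:j1, k0:k1] = True
    inside = np.logical_and(occupied, ideal_mask).sum()
    return inside / ideal_mask.sum()

# Example: a 10 m x 4 m x 2 m ideal breakwater cross-section on a 0.5 m voxel grid
grid_shape = (20, 8, 4)
ideal = np.ones(grid_shape, dtype=bool)
blocks = [(0, 0, 0, 5, 4, 2), (5, 0, 0, 4, 4, 2)]    # two placed blocks, one small gap left
print("porosity (as defined above):", porosity_as_defined(ideal, blocks, grid_shape))
```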

Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks

Procedia PDF Downloads 86
2159 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular Diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and considerably delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated in ECG signals with different origins and features to test the model’s ability to generalize its outcomes. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise, even when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
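The model above is a deep network (IncResU-Net) trained on mixed databases. As a point of reference for what R-peak detection means computationally, the sketch below shows a much-simplified, classical threshold-based detector (in the spirit of Pan-Tompkins style pipelines) on a synthetic ECG-like signal. It is a baseline illustration only, not the paper's method; the sampling rate, smoothing window, threshold, and refractory period are assumptions.

```python
import numpy as np

def detect_r_peaks(ecg, fs=360, refractory_s=0.25):
    """Simplified R-peak detector: square the derivative, smooth, threshold, enforce a refractory period."""
    deriv = np.diff(ecg, prepend=ecg[0])
    energy = deriv ** 2
    win = int(0.15 * fs)                                   # ~150 ms moving-average integration window
    smoothed = np.convolve(energy, np.ones(win) / win, mode="same")
    threshold = 0.5 * smoothed.max()
    peaks, last = [], -np.inf
    for i in range(1, len(smoothed) - 1):
        if smoothed[i] > threshold and smoothed[i] >= smoothed[i - 1] and smoothed[i] >= smoothed[i + 1]:
            if i - last > refractory_s * fs:               # skip peaks closer than the refractory period
                peaks.append(i)
                last = i
    return np.array(peaks)

# Synthetic "ECG": one sharp spike per second plus noise, sampled at 360 Hz
fs, seconds = 360, 10
ecg = np.random.default_rng(0).normal(0, 0.05, fs * seconds)
ecg[::fs] += 1.0                                           # one R-like spike per second
print(detect_r_peaks(ecg, fs))                             # roughly 10 peak indices near multiples of 360
```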

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 169
2158 Revisiting the Surgical Approaches to Decompression in Quadrangular Space Syndrome: A Cadaveric Study

Authors: Sundip Charmode, Simmi Mehra, Sudhir Kushwaha, Shalom Philip, Pratik Amrutiya, Ranjna Jangal

Abstract:

Introduction: Quadrangular space syndrome involves compression of the axillary nerve and posterior circumflex humeral artery, and its management in a few cases requires surgical decompression. The current study reviews the surgical approaches used in the decompression of neurovascular structures and presents our reflections and recommendations. Methods: Four human cadavers in the Department of Anatomy were used for dissection of the axillae and the scapular region by the senior residents of the Department of Anatomy and the Department of Orthopedics, who dissected the quadrangular space in the eight upper limbs using anterior and posterior surgical approaches. Observations: The posterior approach to identifying the quadrangular space and securing its contents was recognized as the easier and much quicker method by both the Anatomy and Orthopedics residents, but it may result in increased postoperative morbidity, whereas the anterior (deltopectoral) approach requires more skill but reduces postoperative morbidity. Conclusions: The anterior (deltopectoral) approach with suggested modifications can prove to be an effective method in the surgical decompression of quadrangular space syndrome. The authors suggest more cadaveric studies to provide anatomists and surgeons with opportunities to practice and evaluate older and newer surgical approaches.

Keywords: surgical approach, anatomical approach, decompression, axillary nerve, quadrangular space

Procedia PDF Downloads 160
2157 Attachment Patterns in a Sample of South African Children at Risk in Middle Childhood

Authors: Renate Gericke, Carol Long

Abstract:

Despite the robust empirical support for attachment, the description and conceptualization of attachment have advanced slowly and have not moved significantly beyond the identification of attachment security or type (namely, secure, avoidant, ambivalent and disorganized). This has continued despite papers arguing for theoretical refinement in the classification of attachment presentations. For thinking and practice to advance, it is critically important that these categories and their assessment be interrogated in different contexts and across developmental age. To achieve this, a quantitative design was used with descriptive and inferential statistics, and general linear models were employed to analyze the data. The Attachment Story Completion Test (ASCT) was administered to 105 children between the ages of eight and twelve from socio-economically deprived contexts with high exposure to trauma. A staggering 93% of the children had insecure attachments (specifically, avoidant 37%, disorganized 34% and ambivalent 22%), and attachment was more complex than currently conceptualized in the attachment literature. Primary attachment did not only present as one of four discrete categories; 70% of the sample had a complex attachment with more than one type of maternal attachment style. Attachment intensity also varied along a continuum (between 1 and 5). The findings have implications for a) research that has not considered the potential complexity of attachment or attachment intensity, b) policy to more actively support mother-infant dyads, particularly in high-risk contexts, and c) the questionable applicability of a Western conceptualization of a primary maternal attachment figure in non-Western collectivist societies.

Keywords: attachment, children at risk, middle childhood, non-western context

Procedia PDF Downloads 185
2156 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally at the beginning of the 1980s. It has a number of important applications in such areas as chemical technologies, fire-explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of the combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and the mass conservation law for the relative concentration of the reacting component of the gas mixture. In this case, the homogenization of the model is performed using the two-temperature approach, in which at each point of the continuous medium we specify the solid and gas phases with a Newtonian heat exchange between them. The construction of a computational scheme is based on the principles of the mixed finite element method with the use of a regular mesh. The approximation in time is performed by an explicit–implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Straightforward computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term ‘front propagation velocity’ makes sense for settled motion, when certain analytical formulae linking velocity and equilibrium temperature hold. The numerical implementation of one such formula, leading to the stable computation of instantaneous front velocity, has been proposed. The algorithm obtained has been applied in subsequent numerical investigation of the FGC process. In this way, the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a sort of breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of such an interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows a semi-Lagrangian approach to the solution of the problem to be considered.

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 293
2155 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach

Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao

Abstract:

Nowadays, last-mile distribution plays an increasingly important role in the overall delivery chain and accounts for a large proportion of the total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete and heterogeneous nature and spatial distribution of customer demand, which lead to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the users’ experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. In this paper, we consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm by using instances generated based on benchmark instances. The results demonstrate that the hybrid adaptive large neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results and suggest helpful managerial insights for courier companies.
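A bare-bones skeleton of the adaptive large neighbourhood search loop referred to above, applied to a plain travelling-salesman-style routing cost for brevity rather than the full LRP-PFHD model. The destroy and repair operators, the weight update rule, and the accept-only-improving criterion are generic illustrative choices, not the authors' hybrid algorithm.

```python
import math
import random

def route_cost(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def alns(route, dist, n_iter=2000, seed=0):
    """Adaptive large neighbourhood search: pick a destroy size by weight, repair greedily, adapt weights."""
    rng = random.Random(seed)
    destroy_sizes = [1, 2, 3]
    weights = [1.0] * len(destroy_sizes)
    best = cur = route[:]
    for _ in range(n_iter):
        k = rng.choices(range(len(destroy_sizes)), weights)[0]
        removed = rng.sample(cur[1:-1], destroy_sizes[k])           # destroy: drop some customers
        partial = [c for c in cur if c not in removed]
        for c in removed:                                           # repair: cheapest insertion
            pos = min(range(1, len(partial)),
                      key=lambda p: route_cost(partial[:p] + [c] + partial[p:], dist))
            partial.insert(pos, c)
        if route_cost(partial, dist) < route_cost(cur, dist):       # accept improving solutions
            cur = partial
            weights[k] += 0.5                                       # reward the operator that helped
        if route_cost(cur, dist) < route_cost(best, dist):
            best = cur[:]
    return best

# Example: 8 random points, route starts and ends at depot 0
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(8)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
initial = [0] + list(range(1, 8)) + [0]
best = alns(initial, dist)
print(best, round(route_cost(best, dist), 3))
```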

Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search

Procedia PDF Downloads 68
2154 A Comparative Analysis of Classification Models with Wrapper-Based Feature Selection for Predicting Student Academic Performance

Authors: Abdullah Al Farwan, Ya Zhang

Abstract:

In today’s educational arena, it is critical to understand educational data and be able to evaluate important aspects, particularly data on student achievement. Educational Data Mining (EDM) is a research area that focuses on uncovering patterns and information in data from educational institutions. Teachers, if they are able to predict their students' class performance, can use this information to improve their teaching abilities. It has evolved into valuable knowledge that can be used for a wide range of objectives; for example, it can inform a strategic plan for generating high-quality education. Based on previous data, this paper recommends employing data mining techniques to forecast students' final grades. In this study, five data mining methods, Decision Tree, JRip, Naive Bayes, Multi-layer Perceptron, and Random Forest with wrapper feature selection, were used on two datasets relating to Portuguese language and mathematics classes. The results showed the effectiveness of using data mining methodologies in predicting student academic success. The classification accuracy achieved with the selected algorithms lies in the range of 80-94%. Among all the selected classification algorithms, the lowest accuracy is achieved by the Multi-layer Perceptron algorithm, which is close to 70.45%, and the highest accuracy is achieved by the Random Forest algorithm, which is close to 94.10%. This proposed work can assist educational administrators in identifying poor-performing students at an early stage and perhaps implementing motivational interventions to improve their academic success and prevent educational dropout.
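A minimal sketch of wrapper-based feature selection around one of the classifiers named above (Random Forest), using scikit-learn's SequentialFeatureSelector. The synthetic dataset is a stand-in for the student-performance data, and the number of selected features, estimator settings, and cross-validation setup are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the student-performance datasets (20 features, some informative)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

clf = RandomForestClassifier(n_estimators=50, random_state=0)

# Wrapper selection: greedily add features, evaluating each candidate subset with the classifier itself
selector = SequentialFeatureSelector(clf, n_features_to_select=5, direction="forward", cv=5)
selector.fit(X, y)
X_selected = selector.transform(X)

print("selected feature indices:", selector.get_support(indices=True))
print("CV accuracy on selected features:", cross_val_score(clf, X_selected, y, cv=5).mean())
```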

Keywords: classification algorithms, decision tree, feature selection, multi-layer perceptron, Naïve Bayes, random forest, students’ academic performance

Procedia PDF Downloads 155
2153 Securitization of Illegal Fishing Cases in Natuna Waters by Indonesian Government: Study Case of Chinese Vessels Shootouts 2016

Authors: Ray Maximillian, Idil Syawfi

Abstract:

Indonesia’s Exclusive Economic Zone and China’s infamous nine-dash line intersect in Natuna waters. Even though, from Indonesia’s perspective, that line does not possess any legal basis, China treats it as its national boundary, thereby allowing Chinese fishermen to fish in the area. Under President Joko Widodo’s leadership, Indonesia, which is now focusing on suppressing illegal fishing while emphasizing its maritime sovereignty, is facing an imminent threat from China’s presence in Natuna. Tension between these countries spiked after three incidents in 2016, especially after the Indonesian navy shot at a Chinese fishing vessel suspected of illegal fishing activity. This action is seen as an attempt to secure Indonesia’s law enforcement in its waters after a similar attempt several months earlier was interrupted by the Chinese coast guard. Indonesia tries to securitize this issue to justify the shooting of Chinese vessels. In the process of securitization, it is imperative to identify the existential threat that leads to the implementation of emergency measures, which are then responded to by the units involved in the case. The Chinese coast guard presence in Natuna is perceived as an existential threat to Indonesia and was therefore responded to by shooting at Chinese vessels at the next encounter. The Chinese government then responded by stating that there is an overlapping claim between China and Indonesia in Natuna.

Keywords: China, illegal fishing, Indonesia, Natuna, securitization

Procedia PDF Downloads 207
2152 Consortium Blockchain-based Model for Data Management Applications in the Healthcare Sector

Authors: Teo Hao Jing, Shane Ho Ken Wae, Lee Jin Yu, Burra Venkata Durga Kumar

Abstract:

Current distributed healthcare systems face the challenge of interoperability of health data. Storing electronic health records (EHR) in local databases causes them to be fragmented. This problem is aggravated as patients visit multiple healthcare providers in their lifetime. Existing solutions are unable to solve this issue and have caused burdens to healthcare specialists and patients alike. Blockchain technology was found to be able to increase the interoperability of health data by implementing digital access rules, enabling a unified patient identity, and providing data aggregation. Consortium blockchains were found to have high read throughput, to be more trustworthy and more secure against external disruptions, and to accommodate transactions without fees. Therefore, this paper proposes a blockchain-based model for data management applications. In this model, a consortium blockchain is implemented using delegated proof of stake (DPoS) as its consensus mechanism. This blockchain allows collaboration between users from different organizations such as hospitals and medical bureaus. Patients serve as the owners of their information, and users from other parties require authorization from the patient to view it. Hospitals upload the hash value of patients’ generated data to the blockchain, whereas the encrypted information is stored in distributed cloud storage.
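A minimal sketch of the on-chain/off-chain split described above: the encrypted record goes to distributed cloud storage, while only its hash is written to the consortium ledger so that any authorized party can later verify integrity. The toy in-memory "chain" and "cloud", the record format, and the placeholder XOR cipher (standing in for real encryption) are illustrative assumptions; DPoS consensus and patient-controlled access rules are not modelled here.

```python
import hashlib
import json
import time

cloud_storage = {}     # stand-in for distributed cloud storage (encrypted records)
chain = []             # stand-in for the consortium ledger (hashes only)

def store_record(patient_id, record, key):
    """Encrypt off-chain (toy XOR cipher only), store in the cloud, and put only the hash on-chain."""
    plaintext = json.dumps(record).encode()
    encrypted = bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))   # placeholder cipher
    record_hash = hashlib.sha256(encrypted).hexdigest()
    cloud_storage[record_hash] = encrypted
    chain.append({"patient": patient_id, "hash": record_hash, "timestamp": time.time()})
    return record_hash

def verify_record(record_hash):
    """Any consortium member can recompute the hash of the stored ciphertext and compare with the ledger."""
    encrypted = cloud_storage[record_hash]
    return (hashlib.sha256(encrypted).hexdigest() == record_hash
            and any(block["hash"] == record_hash for block in chain))

key = b"demo-key"
h = store_record("patient-42", {"visit": "2023-05-01", "diagnosis": "hypertension"}, key)
print("integrity verified:", verify_record(h))
```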

Keywords: blockchain technology, data management applications, healthcare, interoperability, delegated proof of stake

Procedia PDF Downloads 129
2151 Arbitration in Foreign Investment: The Need for Equitable Treatment between the Investor and the Host State

Authors: Maria João Mimoso, Bárbara Magalhães Bravo

Abstract:

This study aims to analyse the phenomenon of arbitration as a paradigm for solving emerging foreign investment controversies. We present its benefits and demonstrate its contribution to greater legal certainty in economic relations. This article explores the relevant legal concepts under a strictly conceptual methodology, preparing future research to be developed under more advanced comparative law methodologies. The review of national and international literature and jurisprudence will reveal the importance of arbitration in the field of international economic relations, presenting it as an alternative dispute resolution mechanism. Globalization imposes new forms of investment protection and appeals to other forms of dispute settlement, primarily to prevent, among other problems, the possible bias of the recipient country's investment tribunals. We characterize foreign investment, its regulatory sources and their characteristics, and the need for the intervention of an entity capable of resolving disputes between the parties involved: the host State receiving the investment; the investor (of a nationality other than that of the host State); the State of the investor's nationality; and sometimes a ‘subsidiary’ local foreign investor. ICSID (International Centre for Settlement of Investment Disputes) arbitration, as a means of resolving investment litigation covered by bilateral investment treaties (BITs) and investment contracts, calls for a delimitation of these two figures in order to clarify the scope of arbitration under the aegis of the World Bank and to make it more secure in view of the sovereign power of the States.

Keywords: arbitration, contract, foreign investment, disputes

Procedia PDF Downloads 260
2150 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts

Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira

Abstract:

In order to improve commute times for short-distance trips and relieve large-city traffic, a new transport category has been the subject of research and new designs worldwide. The air taxi travel market promises to change the way people live and commute by using the concept of vehicles with the ability to take off and land vertically and to provide passenger transport equivalent to a car, with mobility within large cities and between cities. Today’s civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low cost and flight time requirements in a sustainable way. Thus, the use of green power supplies, especially batteries, and fully electric power plants is the most common choice for these emerging aircraft. However, it is still a challenge to find a feasible way to handle the use of batteries rather than conventional petroleum-based fuels. Batteries are heavy and have an energy density still below that of gasoline, diesel or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational costs, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of the genetic optimization algorithm, while the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases. For a given trajectory, the best set of control variables is calculated to provide the time-history response of the aircraft's attitude, rotor RPM and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort and design constraints are assumed to give representativeness to the solution. Results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10% when varying initial airspeed, altitude, flight path angle, and attitude.
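A bare-bones genetic algorithm skeleton in the spirit of the optimization described above: a candidate is a discretized control schedule for the approach, and the fitness is a crude energy proxy with a penalty for missing the touchdown state. The toy point-mass dynamics, thrust bounds, penalty weights, and GA settings are all illustrative assumptions; the real study optimizes attitude, rotor RPM, and thrust direction against full equations of motion with safety and comfort constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STEPS, DT = 20, 1.0                 # discretized approach: 20 one-second segments
V0, H0 = 30.0, 40.0                   # initial airspeed (m/s) and altitude (m), illustrative

def fitness(thrust):
    """Lower is better: crude 'energy' = sum of thrust^2, plus penalties for not reaching v~0, h~0."""
    v, h, energy = V0, H0, 0.0
    for u in thrust:                                  # toy point-mass dynamics, not the paper's model
        v = max(v - 0.2 * u * DT, 0.0)
        h = max(h - 0.2 * v * DT, 0.0)
        energy += u ** 2 * DT
    return energy + 50.0 * (v ** 2 + h ** 2)          # penalty for missing the touchdown condition

def genetic_algorithm(pop_size=60, generations=200, mutation=0.3):
    pop = rng.uniform(0, 10, size=(pop_size, N_STEPS))            # thrust commands in [0, 10]
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]        # selection: keep the better half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_STEPS)                        # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0, mutation, N_STEPS)             # mutation
            children.append(np.clip(child, 0, 10))
        pop = np.vstack([parents] + children)
    return pop[np.argmin([fitness(ind) for ind in pop])]

best = genetic_algorithm()
print("best schedule fitness:", round(fitness(best), 1))
```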

Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design

Procedia PDF Downloads 106
2149 Aerodynamic Optimum Nose Shape Change of High-Speed Train by Design Variable Variation

Authors: Minho Kwak, Suhwan Yun, Choonsoo Park

Abstract:

Nose shape optimizations of a high-speed train are performed to improve its aerodynamic characteristics. Based on the commercial train KTX-Sancheon, multi-objective optimizations are conducted to improve side wind stability and the micro-pressure wave, following an optimization for the reduction of aerodynamic drag. 3D nose shapes are modelled by the Vehicle Modeling Function. Aerodynamic drag and side wind stability are calculated by a three-dimensional compressible Navier-Stokes solver, and the micro-pressure wave by an axisymmetric compressible Navier-Stokes solver. The maximin Latin Hypercube Sampling method is used to extract sampling points to construct the approximation model. A kriging model is constructed as the approximation model, and the NSGA-II algorithm is used as the multi-objective optimization algorithm. Nose length, nose tip height, and lower surface curvature are the design variables. Because nose length is a dominant variable for the aerodynamic characteristics of the train nose, two optimization processes are carried out, respectively with and without the design variable nose length. Each Pareto set is obtained, and each optimized nose shape is selected considering the Honam high-speed rail line infrastructure in South Korea. Through the optimization process with the nose length, when compared to KTX-Sancheon, aerodynamic drag was reduced by 9.0%, side wind stability was improved by 4.5%, and the micro-pressure wave was reduced by 5.4%, whereas without the nose length, aerodynamic drag was reduced by 7.3%, side wind stability improved by 3.9%, and the micro-pressure wave was reduced by 3.9%. A comparison between the two optimized shapes shows that they are similar apart from the effect of nose length.

Keywords: aerodynamic characteristics, design variable, multi-objective optimization, train nose shape

Procedia PDF Downloads 343
2148 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
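The sketch below illustrates the two-stage idea in a simplified, hedged form: a random-forest surrogate is trained on placeholder thickness-to-selectivity data for a SiC/W/SiO2/W stack, and a basic genetic algorithm then searches layer thicknesses that maximize the surrogate's prediction. The toy figure of merit stands in for the electromagnetic (e.g., transfer-matrix) simulations that would supply real training data.

```python
# Minimal sketch (not the authors' code): random-forest surrogate + genetic
# algorithm over layer thicknesses of a four-layer SiC/W/SiO2/W emitter.
# The figure-of-merit function below is an illustrative placeholder.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def toy_figure_of_merit(t):
    """Placeholder spectral-selectivity score for thicknesses t (nm), shape (n, 4)."""
    return np.exp(-((t - [300, 20, 90, 150])**2 / 5e3).sum(axis=1))

# 1) Train the random-forest surrogate on simulated designs
T_train = rng.uniform(10, 500, size=(800, 4))
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(T_train, toy_figure_of_merit(T_train))

# 2) Genetic algorithm over layer thicknesses, using the surrogate as fitness
pop = rng.uniform(10, 500, size=(80, 4))
for _ in range(100):
    fitness = rf.predict(pop)
    parents = pop[np.argsort(fitness)[-40:]]                    # keep the best half
    pairs = parents[rng.integers(0, 40, size=(80, 2))]
    w = rng.random((80, 1))
    pop = w * pairs[:, 0] + (1 - w) * pairs[:, 1]               # blend crossover
    pop = np.clip(pop + rng.normal(0, 10, pop.shape), 10, 500)  # mutate and bound

best = pop[np.argmax(rf.predict(pop))]
print("candidate layer thicknesses (nm) for SiC/W/SiO2/W:", np.round(best, 1))
```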

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 54
2147 Destination Port Detection for Vessels: An Analytic Tool for Optimizing Port Authorities' Resources

Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin

Abstract:

Port authorities face many challenges in congested ports when allocating their resources to provide a safe and secure loading/unloading procedure for cargo vessels. Selecting a destination port is the decision of a vessel master, based on many factors such as weather, wavelength, and changes of priorities. Having access to a tool that leverages AIS messages to monitor vessel movements and accurately predict the next destination port promotes an effective resource-allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring Automatic Identification System (AIS) messages. Our RRoT method creates reference routes from historical AIS messages and uses trajectory similarity measures to identify the destination of a vessel from its recent movement. We evaluated five similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area), and Curve Length (CL). Our experiments show that the method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
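To make the matching step concrete, the following sketch (an illustrative simplification, not the authors' implementation) compares a vessel's recent AIS track against per-port reference routes with a classic Dynamic Time Warping distance, the measure that performed best in the paper; the coordinates, route names, and structure of the reference routes are made-up placeholders.

```python
# Minimal sketch: predict a vessel's destination port by DTW similarity between
# its recent AIS track and one reference route per candidate port.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW between two (n, 2) lat/lon tracks."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def predict_port(recent_track, reference_routes):
    """reference_routes: dict mapping port name -> (n, 2) historical reference route."""
    scores = {port: dtw_distance(recent_track, route)
              for port, route in reference_routes.items()}
    return min(scores, key=scores.get)

# Hypothetical usage with two toy reference routes
routes = {
    "Port A": np.array([[44.60, -63.55], [44.40, -63.30], [44.20, -63.00]]),
    "Port B": np.array([[44.60, -63.55], [44.75, -63.80], [44.90, -64.10]]),
}
track = np.array([[44.58, -63.52], [44.45, -63.35]])
print("predicted destination:", predict_port(track, routes))
```

In the RRoT setting, the other evaluated measures (DFD, PCM, Area, CL) would simply replace the `dtw_distance` function in the same nearest-reference-route comparison.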

Keywords: spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization

Procedia PDF Downloads 112
2146 Designing the First Oil Tanker Shipyard Facility in Kuwait

Authors: Fatma Al Abdullah, Shahad Al Ameer, Ritaj Jaragh, Fatimah Khajah, Rawan Qambar, Amr Nounou

Abstract:

Kuwait currently has its tankers manufactured in foreign countries. Oil tankers play a key role in the supply chain of the oil industry; therefore, with its ample financial resources, Kuwait should secure this capability strategically in order to protect its oil industry and sustain economic development. The purpose of this report is to design an oil tanker shipyard facility. Basing the shipyard facility in Kuwait will bring great economic rewards: the shipbuilding industry directly enhances the industrial chain in terms of new job and business opportunities as well as educational fields. Heavy Engineering Industries & Shipbuilding Co. K.S.C. (HEISCO) was chosen as the host because its existing infrastructure and expertise reduce cost. The chosen facility design methodology was used because it covers all the aspects needed for the report. The oil tanker market is witnessing a shift from crude tankers to product tankers; therefore, the Panamax tanker (a product tanker) was selected to be manufactured in the facility. The departments needed in the shipyard were identified by studying different global shipyards, and the technologies needed to build ships guided the process design. It was noted that ships are engineered to order. Development of the proposed shipyard's new layout is currently in progress, and a feasibility study will be conducted after the layout is developed to ensure the success of the facility.

Keywords: oil tankers, shipbuilding, shipyard, facility design, Kuwait

Procedia PDF Downloads 459