Search results for: Fairclough’s approach

12213 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network

Authors: Jia Xin Low, Keng Wah Choo

Abstract:

This paper presents an automatic normal and abnormal heart sound classification model based on a deep learning algorithm. The MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to hand-craft classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training on the time series provided. The results show that the prediction model provides reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., remote monitoring applications of the PCG signal.
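
Below is a minimal sketch of the per-beat CNN classifier described above, assuming each PCG sub-segment has already been reshaped into a 64x64 intensity matrix; the input shape, layer sizes, and training settings are illustrative assumptions rather than the authors’ exact architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_pcg_cnn(input_shape=(64, 64, 1)):
    """Small CNN for normal/abnormal heart sound classification (sketch)."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # abnormal vs. normal beat
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Random stand-ins for segmented, matrix-encoded heart beats and labels.
X = np.random.rand(100, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=100)
model = build_pcg_cnn()
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```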

Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification

Procedia PDF Downloads 346
12212 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and their performance is evaluated to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
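
A minimal sketch of the clustering step discussed above: K-means applied to a gene expression matrix (rows as genes, columns as samples), with the silhouette score used to compare candidate cluster counts. The matrix is random stand-in data, not a real microarray set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
expression = rng.normal(size=(500, 40))  # 500 genes x 40 samples (stand-in)

# Compare candidate cluster counts with the silhouette score.
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expression)
    print(k, round(silhouette_score(expression, labels), 3))
```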

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 323
12211 A General Iterative Nonlinear Programming Method to Synthesize Heat Exchanger Network

Authors: Rupu Yang, Cong Toan Tran, Assaad Zoughaib

Abstract:

This work provides an iterative nonlinear programming (NLP) method to synthesize a heat exchanger network by manipulating the trade-offs between the heat loads of process heat exchangers (HEs) and utilities. We consider two cases of the synthesis problem: one without fixed costs for HEs and one with fixed costs. For the no-fixed-cost problem, the NLP model with all potential HEs is optimized to obtain the global optimum. For the case with fixed costs, the NLP model is iterated by adding and removing HEs. The method was applied in five case studies and demonstrated its effectiveness. Notably, the approach reaches the lowest total annualized cost (TAC, $2,904,026/year) compared with the best recorded result for the well-known aromatics plant problem. It also locates a slightly better design than those reported in the literature for a 10-stream case without fixed costs, with only one-ninth of the computational time. Moreover, compared to the traditional mixed-integer nonlinear programming approach, the iterative NLP method opens the possibility of considering constraints (such as controllability or dynamic performance) that can only be evaluated once the network structure is known.
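
As a toy illustration of the fixed-cost iteration described above, the sketch below optimizes the duties of candidate heat exchangers with an NLP solver and drops exchangers whose optimal duty collapses toward zero; the stream data, utility price, and drop threshold are invented, and the authors’ actual model is far richer.

```python
import numpy as np
from scipy.optimize import minimize

hot_avail, cold_demand = 800.0, 900.0      # kW of hot heat available / cold demand
q_max = np.array([400.0, 300.0, 250.0])    # duty bounds of candidate HEs, kW
util_price = 0.05                          # $ per kW of utility (invented)

def utility_cost(q):
    recovered = q.sum()
    # hot utility covers unmet cold demand, cold utility the unrecovered hot heat
    return util_price * ((cold_demand - recovered) + (hot_avail - recovered))

active = [0, 1, 2]
while True:
    bounds = [(0.0, q_max[i]) for i in active]
    cons = [{"type": "ineq", "fun": lambda q: hot_avail - q.sum()}]
    res = minimize(utility_cost, x0=np.zeros(len(active)),
                   bounds=bounds, constraints=cons)
    keep = [i for i, qi in zip(active, res.x) if qi > 1.0]  # drop near-zero HEs
    if keep == active:                     # structure stable: stop iterating
        break
    active = keep
print("active exchangers:", active, "duties (kW):", np.round(res.x, 1))
```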

Keywords: heat exchanger network, synthesis, NLP, optimization

Procedia PDF Downloads 159
12210 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years; several research groups have been working on such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results of this work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 total images, of which 8,491 are images of abnormal cells and 5,398 are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment that detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, features fused from specific abstraction layers can be treated as auxiliary features and lead to further improvement in classification accuracy. Features extracted from the lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and to mitigate vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented at the conference.
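
A minimal sketch of the hybrid fusion idea described above: ImageNet pre-trained VGG19 and ResNet50 act as frozen feature extractors, and their pooled features are concatenated before a small classification head. The input size, head width, and fusion point are illustrative assumptions, not the authors’ exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG19, ResNet50

inputs = keras.Input(shape=(224, 224, 3))
vgg = VGG19(include_top=False, weights="imagenet", pooling="avg")
res = ResNet50(include_top=False, weights="imagenet", pooling="avg")
vgg.trainable = False   # frozen feature extractors (transfer learning)
res.trainable = False

# Each backbone expects its own preprocessing of the shared input.
f1 = vgg(keras.applications.vgg19.preprocess_input(inputs))
f2 = res(keras.applications.resnet50.preprocess_input(inputs))
fused = layers.Concatenate()([f1, f2])              # fused auxiliary features
x = layers.Dense(256, activation="relu")(fused)
outputs = layers.Dense(1, activation="sigmoid")(x)  # leukemic vs. normal cell
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```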

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 186
12209 Sentiment Analysis: An Enhancement of Ontological-Based Features Extraction Techniques and Word Equations

Authors: Mohd Ridzwan Yaakub, Muhammad Iqbal Abu Latiffi

Abstract:

Online business has become popular recently due to the massive amount of information and media available on the Internet. This has resulted in a huge number of reviews in which consumers share their opinions, criticisms, and satisfaction regarding the products they have purchased on websites or social media such as Facebook and Twitter. Analyzing customer behavior has therefore become very important for organizations seeking new market trends and insights. Reviews from websites and social media consist of structured and unstructured data that require a sentiment analysis approach. In this article, the techniques used in sentiment analysis are defined, along with the ontology and a description of its possible usage in sentiment analysis. This leads to empirical research involving mobile phone reviews and the ontology used in the experiment. The researchers also explore the role of data preprocessing and feature selection methodology. As a result, an ontology-based approach to sentiment analysis can help achieve high accuracy in the classification task.

Keywords: feature selection, ontology, opinion, preprocessing data, sentiment analysis

Procedia PDF Downloads 199
12208 Global Pandemic of Chronic Diseases: Public Health Challenges to Reduce Their Development

Authors: Benjamin Poku

Abstract:

Purpose: The purpose of the research is to conduct systematic reviews and a synthesis of existing knowledge addressing the growing incidence and prevalence of chronic diseases across the world and their impact on public health in relation to communicable diseases. Principal results: A careful compilation and summary of 15-20 peer-reviewed publications from reputable databases such as PubMed, MEDLINE, and CINAHL, and other peer-reviewed journals, indicates that the global pandemic of chronic diseases (such as diabetes and high blood pressure) has become a proportionally greater public health burden than communicable diseases. Significant conclusions: Given the complexity of the situation, efforts and strategies to mitigate the negative effects of this pandemic within the global community must include not only an urgent and binding commitment from all stakeholders but also a multi-sectoral, long-term approach that scales up public health education to meet an increasing world population of over 8 billion people and an aging population, and so address the complex challenges of chronic diseases.

Keywords: pandemic, chronic disease, public health, health challenges

Procedia PDF Downloads 527
12207 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach

Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using a differenced normalized burnt ratio method and spectral unmixing methods. The study area has rugged terrain with sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. Fires in this region are mainly anthropogenic: human-induced fires are set to obtain fresh leaves, to scare wild animals away from agricultural crops, for grazing within reserved forests, and for cooking and other purposes. These fires affect a large area on the ground, necessitating precise estimation for further management and policy making. In the present study, two approaches were used for burnt area analysis. The first uses the differenced normalized burnt ratio (dNBR) index, computed from burn ratio values generated using the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of a Sentinel-2 image. The results of the dNBR were compared with the outputs of the spectral unmixing methods. The dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain where the landscape is shaped by topographical variation, vegetation type, and tree density, the results may be strongly affected by topography, complexity in tree composition, fuel load composition, and soil moisture. Hence, burnt area assessment under such variable conditions may not be effectively carried out using the dNBR approach commonly followed for large areas. The second approach attempted in this study therefore uses a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach on Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, which are further used for generating outputs using machine learning tools. The analysis indicates that fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve noise in the data and can classify each individual pixel into the precise burnt/unburnt class.
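
A minimal sketch of the dNBR step described above, with NBR = (NIR - SWIR) / (NIR + SWIR) and dNBR taken as pre-fire minus post-fire NBR. The arrays below are random stand-ins for aligned Sentinel-2 bands, and the burn threshold is one commonly quoted moderate-severity value, not the study’s calibrated cutoff.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized burn ratio from NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir + 1e-9)

rng = np.random.default_rng(1)
shape = (100, 100)  # stand-ins for aligned Sentinel-2 reflectance tiles
pre_nir, pre_swir = rng.uniform(0.2, 0.5, shape), rng.uniform(0.1, 0.3, shape)
post_nir, post_swir = rng.uniform(0.1, 0.4, shape), rng.uniform(0.1, 0.4, shape)

dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
burnt_mask = dnbr > 0.27   # a commonly quoted moderate-severity threshold
print("burnt fraction:", round(float(burnt_mask.mean()), 3))
```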

Keywords: categorical data, log linear modeling, neural network, shifting cultivation

Procedia PDF Downloads 52
12206 A Low-Cost Experimental Approach for Teaching Energy Quantization: Determining the Planck Constant with Arduino and LEDs

Authors: Gastão Soares Ximenes de Oliveira, Richar Nicolás Durán, Romeo Micah Szmoski, Eloiza Aparecida Avila de Matos, Elano Gustavo Rein

Abstract:

This article presents an experimental method to determine Planck’s constant by measuring the threshold (cutoff) potential V₀ of LEDs with different wavelengths. The experiment uses an Arduino as its central tool in order to make the experimental activity more engaging and attractive for students through digital technologies. From the characteristic curve of each LED, graphical analysis was used to obtain the threshold potential, and knowing the corresponding wavelength, it was possible to calculate Planck’s constant. The constant was also obtained from a linear fit of the threshold potential against the frequency of each LED. Given the relevance of Planck’s constant in physics, it is believed that this experiment can offer teachers the opportunity to approach concepts from modern physics, such as the quantization of energy, in a more accessible and applied way in the classroom. This will not only enrich students’ understanding of the fundamental nature of matter but also encourage deeper engagement with the principles of quantum physics.
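
A minimal sketch of the linear-fit step described above: with eV₀ = hf - W, the slope of threshold voltage against LED frequency gives h/e. The wavelengths and voltages below are plausible illustrative readings, not the authors’ measured data.

```python
import numpy as np

e = 1.602e-19                                   # elementary charge, C
c = 3.0e8                                       # speed of light, m/s
wavelengths = np.array([630e-9, 590e-9, 525e-9, 470e-9, 405e-9])  # m
v0 = np.array([1.76, 1.90, 2.15, 2.40, 2.78])   # threshold voltages, V (illustrative)
freq = c / wavelengths

slope, _ = np.polyfit(freq, v0, 1)              # V0 = (h/e) f + const
print(f"estimated Planck constant: {slope * e:.3e} J*s")  # literature: 6.626e-34
```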

Keywords: physics teaching, educational technology, modern physics, Planck constant, Arduino

Procedia PDF Downloads 74
12205 Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring

Authors: Dongyeob Han, Jungwon Huh, Quang Huy Tran, Choonghyun Kang

Abstract:

Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach for the automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures. A two-step process is applied: 1) image matching based on SURF (Speeded Up Robust Features) and RANSAC (Random Sample Consensus); 2) bundle adjustment of the multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively, and the RANSAC algorithm both when finding matching points between images and in the bundle adjustment process. Experimental results on UAV images showed that our approach is accurate enough to be applied to facility change detection.
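
A minimal sketch of the matching-plus-RANSAC step described above. SURF ships only with opencv-contrib builds because of patent restrictions, so ORB is used here as a freely available stand-in; the file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("uav_epoch1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("uav_epoch2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # outlier rejection

registered = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
print("inlier ratio:", float(inliers.mean()))
```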

Keywords: building, image matching, temperature, unmanned aerial vehicle

Procedia PDF Downloads 290
12204 Exploring the Situational Approach to Decision Making: User eConsent on a Health Social Network

Authors: W. Rowan, Y. O’Connor, L. Lynch, C. Heavin

Abstract:

Situation awareness can offer the potential for conscious dynamic reflection. In an era of online health data sharing, it is becoming increasingly important that users of health social networks (HSNs) have the information necessary to make informed decisions as part of the registration process and in the provision of eConsent. This research leverages an adapted Situation Awareness (SA) model to explore users’ decision-making processes in the provision of eConsent. An HSN platform was used to investigate these behaviours through a mixed-methods approach: observation of registration behaviours followed by a questionnaire and focus groups. Early results suggest that users are apt to accept eConsent automatically, and only later consider the long-term implications of sharing their personal health information. Further steps are required to continue developing knowledge and understanding of this important eConsent process. The next step in this research will be to develop a set of guidelines for the improved presentation of eConsent on the HSN platform.

Keywords: eConsent, health social network, mixed methods, situation awareness

Procedia PDF Downloads 291
12203 Digital Content Strategy (DCS): Detailed Review of the Key Content Components

Authors: Oksana Razina, Shakeel Ahmad, Jessie Qun Ren, Olufemi Isiaq

Abstract:

Modern businesses are categorically reliant on their established online position, where digital (and particularly website) content plays a significant role as the first point of information. Digital content, therefore, becomes essential, from making the first impression to building and developing client relationships. Despite a number of valuable papers suggesting a strategic approach when dealing with digital data, other sources often do not view the approach to digital content as a holistic or continuous process; associations are frequently made with merely a one-off marketing campaign or similar. The challenge is to establish an agreed definition for the notion of Digital Content Strategy (DCS), which currently does not exist, as DCS is viewed from an excessive number of different angles. A strategic approach to content is nonetheless required, both practically and contextually. The researchers therefore attempted to identify the key content components of a digital content strategy to ensure all aspects are covered and strategically applied, from the company’s understanding of the content value to the ability to accommodate flexibility of content and advances in technology. This conceptual project evaluated existing literature on the topic of DCS and related aspects, using the PRISMA systematic review method, document analysis, inclusion and exclusion criteria, scoping review, the snowballing technique, and thematic analysis. The data were collected from academic and statistical sources, and from government and relevant trade publications. Based on suggestions from academic and trade sources related to the issues discussed, the researchers identified the key actions for content creation and attempted to define the notion of DCS. The major finding of the study is a set of key content components of Digital Content Strategy that can be considered for implementation in a business retail setting.

Keywords: digital content strategy, key content components, websites, digital marketing strategy

Procedia PDF Downloads 145
12202 CRISPR Technology: A Tool in the Potential Cure for COVID-19 Virus

Authors: Chijindu Okpalaoka, Charles Chinedu Onuselogu

Abstract:

COVID-19, the coronavirus disease caused by SARS-CoV-2, was first detected in late 2019 in Wuhan, China. COVID-19 lacked an established conventional pharmaceutical therapy, and as a result, the outbreak quickly grew to affect the entire world. Only the qPCR assay is reliable for diagnosing COVID-19. Clustered regularly interspaced short palindromic repeats (CRISPR) technology is being researched for speedy and specific identification of COVID-19, among other therapeutic techniques. Apart from its diagnostic capabilities, the CRISPR technique is being evaluated for the development of antiviral therapies; nevertheless, no CRISPR-based medication has been approved for human use to date. A prophylactic antiviral CRISPR in human cells, a Cas13-based approach against coronavirus, has been developed. While this method can be evolved into a treatment approach, it may face substantial obstacles in human clinical trials for licensure. This study discusses the potential applications of CRISPR-based techniques for developing a speedy and accurate treatment alternative for the COVID-19 virus.

Keywords: COVID-19, CRISPR technique, Cas13, SARS-CoV-2, prophylactic antiviral

Procedia PDF Downloads 123
12201 Effectiveness of Blended Learning in Public Schools during COVID-19: A Way Forward

Authors: Sumaira Taj

Abstract:

Blended learning has emerged as a necessary approach to teaching in all schools after the outbreak of the COVID-19 pandemic. However, how ready public elementary and secondary schools in Pakistan are to adopt this approach, and what should be done to prepare schools and students for blended learning, are the questions this paper attempts to answer. A mixed-methods research methodology was used to collect data from 40 teachers, 500 students, and 10 mothers, and descriptive statistics were used to analyze the quantitative data. As far as readiness is concerned, schools lack the resources for blended/virtual/online classes, from infrastructure to skills; parents’ literacy levels hinder students’ learning; and gaps in teachers’ skills challenge a smooth and swift shift from face-to-face to blended learning. It is recommended to establish a conducive environment in schools by providing all the required resources and skills, to organize special training for parents with low literacy levels, and to adopt multiple delivery modes to benefit all students.

Keywords: blended learning, challenges in online classes, education in COVID-19, public schools in Pakistan

Procedia PDF Downloads 166
12200 Spatial Data Mining: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies. This information is often presented through geographic information systems (GIS) and stored in spatial databases. Classical data mining has revealed a weakness in knowledge extraction from such enormous amounts of data due to the particularity of spatial entities, which are characterized by their interdependence (the first law of geography). This gave rise to spatial data mining: the process of analyzing geographic data to extract knowledge and spatial relationships from geospatial data. Among the methods of this process, we distinguish monothematic and multithematic approaches; geo-clustering, one of the main tasks of spatial data mining, belongs to the monothematic methods. It groups similar geospatial entities into the same class and assigns more dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, while taking account of the particularity of geospatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which applies algorithms designed to treat spatial data directly, and an approach based on pre-processing the spatial data, which applies classical clustering algorithms to pre-processed data (integrating the spatial relationships). The pre-processing approach is quite complex in various cases, so the search for approximate solutions involves approximation algorithms, including the dedicated approaches we are interested in (partitioning and density-based clustering methods) and bee-inspired algorithms (a biomimetic approach). Our study proposes a design for this problem that uses different algorithms for automatically detecting geospatial neighborhoods in order to implement geo-clustering by pre-processing, and applies the bees algorithm to this problem for the first time in the geospatial field.
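
A minimal sketch of the density-based side of the geo-clustering discussed above: DBSCAN groups points by neighborhood density, so the number of clusters need not be fixed in advance. The coordinates are synthetic stand-ins for geospatial entities, and eps and min_samples are illustrative settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Two dense spatial groups plus scattered noise points (synthetic coordinates).
pts = np.vstack([rng.normal((0.0, 0.0), 0.3, (80, 2)),
                 rng.normal((5.0, 5.0), 0.3, (80, 2)),
                 rng.uniform(-2.0, 7.0, (20, 2))])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(pts)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found:", n_clusters, "| noise points:", int((labels == -1).sum()))
```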

Keywords: spatial data mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 374
12199 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated high ability in discriminating various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA and are stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most such methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome’s influence on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
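
A minimal sketch of steps (i) and (ii) above: reads are tokenized into overlapping k-mers, a word2vec model learns k-mer embeddings, and a read embedding is taken as the mean of its k-mer vectors. The reads are random stand-ins, and k, the vector size, and the mean-pooling rule are illustrative choices, not the exact metagenome2vec settings.

```python
import numpy as np
from gensim.models import Word2Vec

def kmers(read, k=4):
    """Split a read into overlapping k-mer 'words'."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

rng = np.random.default_rng(3)
reads = ["".join(rng.choice(list("ACGT"), 60)) for _ in range(200)]  # stand-ins
corpus = [kmers(r) for r in reads]

w2v = Word2Vec(corpus, vector_size=32, window=5, min_count=1, epochs=5)
read_vec = np.mean([w2v.wv[km] for km in corpus[0]], axis=0)  # read embedding
print(read_vec.shape)  # (32,), fed to the downstream MIL classifier
```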

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 124
12198 Acetic Acid Assisted Phytoextraction of Chromium (Cr) by Energy Crop (Arundo donax L.) in Cr Contaminated Soils

Authors: Muhammad Iqbal, Hafiz Muhammad Tauqeer, Hamza Rafaqat, Muhammad Naveed, Muhammad Awais Irshad

Abstract:

Soil pollution with chromium (Cr) has become one of the most important concerns due to its toxicity for humans. To date, various approaches have been employed for the remediation and management of Cr-contaminated soils. Phytoextraction is an eco-friendly and emerging remediation approach that has gained attention due to several advantages over conventional approaches. The use of energy crops for phytoremediation is an emerging trend worldwide: their high tolerance to various environmental stresses, their potential to grow in diverse ecosystems, and their high biomass production make them suitable candidates for the phytoremediation of contaminated soils. The removal efficiency of plants in phytoextraction depends upon several soil and plant factors, including solubility, bioavailability, and metal speciation in soils. A pot-scale experiment was conducted to evaluate the phytoextraction potential of Arundo donax L. with the application of acetic acid (A.A) in Cr-contaminated soils. Plants were grown in pots filled with 5 kg of soil for 90 days. After 30 days of acclimatization in pot conditions, plants were treated with various levels of Cr (2.5 mM, 5 mM, 7.5 mM, 10 mM) and A.A (Cr 2.5 mM + A.A 2.5 mM, Cr 5 mM + A.A 2.5 mM, Cr 7.5 mM + A.A 2.5 mM, Cr 10 mM + A.A 2.5 mM). The application of A.A significantly increased metal uptake in the roots and shoots of A. donax. This increase was observed at Cr 7.5 mM + A.A 2.5 mM, but at higher concentrations, visual symptoms of Cr toxicity appeared on the leaves. A.A applications also affected the activities of key enzymes, including catalase (CAT), superoxide dismutase (SOD), and ascorbate peroxidase (APX), in the leaves of A. donax. Based on these results, it is concluded that the application of acetic acid for phytoextraction is an alternative approach for the management of Cr-affected soils, and that synthetic chelators should be replaced with organic acids.

Keywords: acetic acid, A. donax, chromium, energy crop, phytoextraction

Procedia PDF Downloads 387
12197 A Statistical Approach to Rationalise the Number of Working Load Test for Quality Control of Pile Installation in Singapore Jurong Formation

Authors: Nuo Xu, Kok Hun Goh, Jeyatharan Kumarasamy

Abstract:

Pile load testing is significant during foundation construction due to its traditional role in design validation and routine quality control of piling works. In order to verify whether piles can take loading at specified settlements, piles have to undergo a working load test in which the test load should normally be up to 150% of the working load of a pile. Selection or sampling of piles for the working load test is done subject to the number specified in the Singapore National Annex to Eurocode 7, SS EN 1997-1:2010. This paper presents an innovative way to rationalise the number of pile load tests by adopting a statistical analysis approach and examining the coefficient of variation of the pile elastic modulus, using a case study at Singapore Tuas depot. Results are very promising and show that it is possible to reduce the number of working load tests without influencing the reliability of, and confidence in, pile quality. Moving forward, it is suggested that more load test data from other geological formations be examined for comparison with the findings of this paper.
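
A minimal sketch of the statistical idea above: estimate the coefficient of variation of the pile elastic modulus from test results and relate a target confidence margin to an indicative number of tests. The modulus values are invented, and the normal-approximation rule is a simplification of the paper’s analysis.

```python
import numpy as np
from scipy import stats

E = np.array([31.2, 29.8, 33.0, 30.5, 32.1, 28.9, 31.7, 30.9])  # GPa, invented
cov = E.std(ddof=1) / E.mean()
print(f"CoV of pile elastic modulus: {cov:.3f}")

# Tests needed so the sample mean sits within +/-5% at 95% confidence.
z = stats.norm.ppf(0.975)
n_required = int(np.ceil((z * cov / 0.05) ** 2))
print("indicative number of load tests:", n_required)
```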

Keywords: elastic modulus of pile under soil interaction, Jurong Formation, Kentledge test, pile load test

Procedia PDF Downloads 384
12196 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trends. The study aims to identify the significant delay factors in construction projects in Iraq that affect time, cost, and quality, and to find the best solutions to address these delays by setting parameters to restore balance. Design/methodology/approach: This study discusses reconstruction strategies and the delays in time and cost caused by different factors in selected projects in Iraq (Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Participants from the case projects provided data through a data collection instrument distributed as a survey. A mixed-methods approach was applied: mathematical data analysis was used to construct models that predict delay in the time and cost of projects before they start, with artificial neural network (ANN) analysis selected as the mathematical approach. These models are mainly intended to help decision makers in construction projects find solutions to delays before they cause any inefficiency in the project being implemented, and to remove obstacles thoroughly so as to develop this industry in Iraq. Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries and regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: We selected ANN analysis because ANNs have rarely been used in project management and had never been used in Iraq to find solutions to problems in the construction industry. This methodology can also be used for complicated problems with no ready interpretation or solution: in some cases statistical analysis was conducted, and where the problem did not follow a linear equation or the correlation was weak, we suggest using ANNs, because they handle nonlinear problems by finding the relationship between input and output data, which proved very supportive.
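
A minimal sketch of the ANN idea above: a small feed-forward network maps scored delay factors to predicted time and cost overruns before a project starts. The feature count, network size, and data are invented stand-ins for the 30-project survey data.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (30, 6))   # 6 scored delay factors per project (invented)
y = rng.uniform(0, 1, (30, 2))   # normalized [time overrun, cost overrun]

model = keras.Sequential([
    keras.Input(shape=(6,)),
    layers.Dense(12, activation="relu"),
    layers.Dense(2, activation="linear"),  # time and cost overrun estimates
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=100, verbose=0)
print(model.predict(X[:1], verbose=0))     # forecast for a project before start
```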

Keywords: construction projects, delay factors, emergency reconstruction, innovation, ANN, post disasters, project management

Procedia PDF Downloads 165
12195 An Inviscid Compressible Flow Solver Based on Unstructured OpenFOAM Mesh Format

Authors: Utkan Caliskan

Abstract:

Two numerical codes based on the finite volume method are developed in order to solve the compressible Euler equations and simulate the flow through a forward-facing step channel. Both algorithms use the AUSM+-up (Advection Upstream Splitting Method) scheme for flux splitting and a two-stage Runge-Kutta scheme for time stepping. In this study, the flux calculations differ between the algorithm based on the OpenFOAM mesh format, called the 'face-based' algorithm, and the basic algorithm, called the 'element-based' algorithm. The face-based algorithm avoids redundant flux computations and is also more flexible with hybrid grids; moreover, some of OpenFOAM’s preprocessing utilities can be used on the mesh. Parallelization of the face-based algorithm, for which atomic operations are needed due to the shared-memory model, is also presented. For several mesh sizes, a 2.13x speedup is obtained with the face-based approach over the element-based approach.
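
A minimal sketch, in plain Python, of the face-based flux loop described above: each interior face flux is computed once and scattered to its owner and neighbour cells (the OpenFOAM owner/neighbour format), whereas an element-based loop would recompute shared fluxes once per adjacent cell. The flux function is a placeholder, not the AUSM+-up scheme itself.

```python
import numpy as np

n_cells = 5
U = np.linspace(1.0, 2.0, n_cells)      # one conserved variable per cell
owner = np.array([0, 1, 2, 3])          # interior faces: owner cell indices
neigh = np.array([1, 2, 3, 4])          # interior faces: neighbour cell indices

def face_flux(u_left, u_right):
    return 0.5 * (u_left + u_right)     # placeholder flux, not AUSM+-up

residual = np.zeros(n_cells)
for f in range(len(owner)):             # one pass over faces, no recomputation
    flux = face_flux(U[owner[f]], U[neigh[f]])
    # In a shared-memory parallel version, these two scatter updates are the
    # statements that need atomic operations.
    residual[owner[f]] -= flux          # flux leaves the owner cell
    residual[neigh[f]] += flux          # and enters the neighbour cell
print(residual)
```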

Keywords: cell centered finite volume method, compressible Euler equations, OpenFOAM mesh format, OpenMP

Procedia PDF Downloads 318
12194 A Phase Field Approach to Model Crack Interface Interaction in Ceramic Matrix Composites

Authors: Dhaladhuli Pranavi, Amirtham Rajagopal

Abstract:

There are various failure modes in ceramic matrix composites; notable ones are fiber breakage, matrix cracking, and fiber-matrix debonding. Crack nucleation and propagation in the microstructure of such composites require an understanding of the interaction of a crack with the multiple-inclusion heterogeneous system and its interfaces. In order to assess structural integrity, the material parameters, especially those of the interface that govern crack growth, should be determined. In the present work, a nonlocal phase field approach is proposed to model the crack-interface interaction in such composites. Nonlocal approaches help in understanding the complex mechanisms of delamination growth and mitigation, and operate at a material length scale. The performance of the proposed formulation is illustrated through representative numerical examples. The model is implemented in the framework of the finite element method, and several parametric studies on interface crack interaction are conducted. The proposed model is easy and simple to implement and works very well for modeling fracture in composite systems.
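
For orientation, a widely used regularized energy functional in phase-field fracture modeling reads as below; this is a typical Bourdin-type form from the literature, not necessarily the exact nonlocal functional proposed by the authors.

```latex
% phi is the phase field (0 = intact, 1 = fully cracked), psi the elastic
% energy density, G_c the critical energy release rate, l_0 the length scale,
% and k a small residual stiffness.
\begin{equation}
E(\mathbf{u},\phi) = \int_{\Omega} \left[ (1-\phi)^2 + k \right]
\psi\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\, \mathrm{d}\Omega
+ G_c \int_{\Omega} \left( \frac{\phi^2}{2 l_0}
+ \frac{l_0}{2} \lvert \nabla \phi \rvert^2 \right) \mathrm{d}\Omega
\end{equation}
```

Minimizing E with respect to the displacement u and the phase field φ yields the coupled equations solved within the finite element framework; interface behaviour is typically introduced through additional interface energy terms or degraded interface properties.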

Keywords: composite, interface, nonlocal, phase field

Procedia PDF Downloads 141
12193 Eco-Literacy and Pedagogical Praxis in the Multidisciplinary University Greenhouse toward the Strengthening of Food Security

Authors: Citlali Aguilera Lira, David Lynch Steinicke, Andrea León García

Abstract:

One of the challenges that higher education faces is finding how to approach sustainability in a way that includes students from all the different academic areas, and how to move sustainable development from the abstract field to the operational field. This research draws on eco-literacy and pedagogical praxis as tools for rebuilding the teaching processes inside universities. The purpose is to determine and describe the factors involved in the learning process, particularly in the Greenhouse-School Siembra UV. In the Greenhouse-School Siembra UV of the University of Veracruz, vegetables, medicinal plants, and small cornfields are cultivated using eco-technologies such as hydroponics, wicking beds, and Hugelkultur, whose main purpose is to save space, labor, and natural resources, and to serve as agricultural production alternatives in urban and periurban zones. The sample was formed of students from different academic areas who are actively involved in the greenhouse, as well as institutes of the University of Veracruz and governmental and non-governmental departments. The project takes a pedagogical praxis approach, addressing the needs of the different professional profiles of the university students, with the purpose of generating a pragmatic dialogue with sustainability, and from the necessity of understanding the factors that intervene in the students’ praxis. In this manner, students are the fundamental unit in the sphere of sustainability. As a result, it is observed that the University of Veracruz students involved in the Greenhouse-School Siembra UV have, at different levels, enriched their sense of urban and periurban agriculture because of the diverse academic approaches they bring and the interaction between them. It is concluded that eco-technologies act as fundamental tools for eco-literacy in society, strengthening nutritional and food security from a sustainable development approach.

Keywords: farming eco-technologies, food security, multidisciplinary, pedagogical praxis

Procedia PDF Downloads 316
12192 Machine Learning Based Gender Identification of Authors of Entry Programs

Authors: Go Woon Kwak, Siyoung Jun, Soyun Maeng, Haeyoung Lee

Abstract:

Entry is an education platform used in South Korea, created to help students learn to program while playing. Using the online version of Entry, teachers can easily assign programming homework, and students can make programs simply by linking programming blocks. However, the programs may be made by others, so the authors of the programs should be identified. In this paper, as a first step toward author identification of Entry programs, we present an artificial neural network based classification approach to identify the gender of the author of a program written in Entry. A neural network has been trained from labeled training data that we have collected. Our results, although preliminary, show that the proposed approach could feasibly be applied to the online version of Entry for gender identification of authors. As future work, we will use a machine learning technique for age identification of Entry programs, the second step toward author identification.
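
A minimal sketch of the classification idea above: each Entry program is represented as a bag of its block types, and a small neural network predicts the author’s gender label. The block names and labels below are hypothetical stand-ins for the collected, labeled training data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

# Hypothetical block-sequence representations of Entry programs and labels.
programs = ["move repeat if_else sound", "repeat repeat pen move",
            "if_else variable sound move", "pen pen move repeat"]
genders = ["F", "M", "F", "M"]

vec = CountVectorizer()
X = vec.fit_transform(programs)           # bag-of-blocks features
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, genders)
print(clf.predict(vec.transform(["move sound repeat"])))
```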

Keywords: artificial intelligence, author identification, deep neural network, gender identification, machine learning

Procedia PDF Downloads 320
12191 Rights-Based Approach to Artificial Intelligence Design: Addressing Harm through Participatory ex ante Impact Assessment

Authors: Vanja Skoric

Abstract:

The paper examines whether the impacts of artificial intelligence (AI) can be meaningfully addressed through a rights-based approach to AI design, investigating in particular how an inclusive, participatory process of assessing AI impact would make this viable. There is a significant gap between envisioning rights-based AI systems and their practical application. Plausibly, internalizing a human rights approach within the AI design process might be achieved by identifying and assessing the implications of AI features for human rights, especially for vulnerable individuals and communities. However, there is no clarity or consensus on how such an instrument should be operationalised to usefully identify impact, mitigate harms, and meaningfully ensure relevant stakeholders’ participation. In practice, the meaningful inclusion of those individuals, groups, or entire communities affected by the use of an AI system is a prerequisite for a process seeking to assess human rights impacts and risks. Engagement in the entire impact assessment process should enable those affected and interested to access information and better understand the technology, product, or service and its resulting impacts, but also to learn about their rights and the respective obligations and responsibilities of developers and deployers to protect and/or respect these rights. This paper provides an overview of the study and practice of the participatory design process for AI, including inclusive impact assessment and its main elements, proposes a framework, and discusses the lessons learned from existing theory. In addition, it explores pathways for enhancing and promoting individual and group rights through such engagement by discussing when, how, and whom to include, at which stage of the process, and what the prerequisites for meaningful engagement are. The overall aim is to ensure that the technology works for the benefit of society, individuals, and particular (historically marginalised) groups.

Keywords: rights-based design, AI impact assessment, inclusion, harm mitigation

Procedia PDF Downloads 150
12190 How Much for a Dancer? Culture Policy in Japan and the Czech Republic towards Dance

Authors: Lucie Hayashi

Abstract:

This paper offers a view of the different approaches towards a dancer’s career in two very dissimilar countries: on the one hand, Japan, an economic predator at the end of the last century that has suffered under economic crisis since the beginning of the new one; on the other, the Czech Republic, a post-communist country caught up in capitalist fever since the 1990s. The governments’ approaches towards culture and dance in these two countries not only have different histories and natures but also present different takes on the ideal future development of their respective dance scenes. The level of support from the state budget echoes through every field of a professional dance career, dance art, and the education of the public in dance. The message of the statistical data is clear: Japan produces an enormous number of well-trained and expensively educated dancers with no jobs for them, while the Czech Republic lacks good dancers ready to fill its state-supported theatre companies (which gladly employ Japanese dancers). The paradigm puts a big exclamation mark over the huge influence policy has on dance in society, and a question mark over the ideal situation.

Keywords: culture policy, dance, education, employment, Czech Republic, Japan

Procedia PDF Downloads 163
12189 Investigating Learners’ Online Learning Experiences in a Blended-Learning School Environment

Authors: Abraham Ampong

Abstract:

BACKGROUND AND SIGNIFICANCE OF THE STUDY: The development of information technology and its influence are now inevitable in the world of education. The development of information and communication technology (ICT) has an impact on the use of teaching aids such as computers and the Internet, for example, e-learning. E-learning is a learning process attained through electronic means, but learning is not merely technology, because learning is essentially about the process of interaction between teacher, student, and learning sources. The main purpose of the study is to investigate learners’ online learning experiences in a blended learning approach, evaluate how learners’ experience of an online learning environment affects the blended learning approach, and examine the future of online learning in a blended learning environment. Blended learning pedagogies have been recognized as a path to improving teachers’ instructional strategies for teaching with technology. Blended learning is perceived to have many advantages for teachers and students, including any-time learning, anywhere access, self-paced learning, inquiry-led learning, and collaborative learning; this helps institutions develop desired instructional skills such as critical thinking in the process of learning. Blended learning as an approach has gained momentum because of its widespread integration into educational organizations. METHODOLOGY: Based on the research objectives and questions, the study makes use of a qualitative research approach. The rationale behind this selection is that participants are able to make sense of their situations, and their construction of knowledge and understanding can be appreciated, because the methods focus on how people understand and interpret their experiences. A case study research design is adopted to explore the situation under investigation. The target population consists of selected students from selected universities, chosen through a simple random sampling technique. The data collection instrument is an interview guide, a set of questions that an interviewer asks when interviewing respondents. Responses from the in-depth interviews will be transcribed and analyzed under themes. Ethical issues catered for in this study include the right to privacy, voluntary participation, no harm to participants, and confidentiality. INDICATORS OF THE MAJOR FINDINGS: Anticipated findings are that online learning encourages timely feedback from teachers and that online learning tools can be used without issues; most communication with the teacher can be done through emails and text messages. Sampled respondents may prefer online learning because there are few or no distractions, learners can have access to technology to do other activities to support their learning, and enough enhanced learning materials are available online. CONCLUSION: Unlike previous research focusing on the strengths and weaknesses of blended learning, the present study aims at the respective roles of its two modalities, as well as their interdependencies.

Keywords: online learning, blended learning, technologies, teaching methods

Procedia PDF Downloads 85
12188 Navigating Creditors' Interests in the Context of Business Rescue

Authors: Hermanus J. Moolman

Abstract:

The COVID-19 pandemic had a severe impact on society and companies in South Africa. This raises questions about the position of creditors of companies facing financial distress and the actions that directors should take to cater to the interests of creditors. The extent to which directors owe their duties and consideration to creditors has been the subject of debate. The directors of a solvent company owe their duties to the company in favour of its shareholders; when the company becomes insolvent, creditors become the beneficiaries of the directors’ duties. However, the intermittent phase between solvency and insolvency, otherwise referred to as the realm of insolvency, is not accounted for. The purpose of this paper is to determine whether South African company law appropriately addresses the duties that directors owe to creditors, and the extent of consideration given to creditors’ interests, when the company is in the realm of insolvency and has started business rescue proceedings. A comparative study of South Africa, the United States of America, the United Kingdom, and international instruments was employed to achieve this purpose. In the United States of America and the United Kingdom, the focus shifts from shareholders to the best interests of creditors when business rescue proceedings commence. Such an approach is not aligned with the purpose of the Companies Act of 2008, which calls for a balance of the interests of all persons affected by a company’s financial distress, and it will not be suitable for the South African context. Business rescue in South Africa is relatively new compared to the practices of the United States of America and the United Kingdom, and the entrepreneurial landscape in South Africa is still evolving. The interests of creditors are not the only interests at risk when a company is financially distressed. It is recommended that an enlightened creditor value approach be adopted for South Africa, in which the interests of creditors, albeit paramount, are balanced with those of other stakeholders. This approach allows a gradual shift in the duties of directors from shareholders to creditors without disregarding the interests of shareholders.

Keywords: business rescue, shareholders, creditors, financial distress, balance of interests, alternative remedies, company law

Procedia PDF Downloads 43
12187 A Neural Network Modelling Approach for Predicting Permeability from Well Logs Data

Authors: Chico Horacio Jose Sambo

Abstract:

Neural networks have recently gained popularity for solving complex nonlinear problems. Permeability is a fundamental reservoir characteristic that is anisotropic and distributed in a nonlinear manner. For this reason, permeability prediction from well log data is well suited to neural networks and other computer-based techniques. The main goal of this paper is to predict reservoir permeability from well log data using a neural network approach. A multi-layered perceptron trained by a backpropagation algorithm was used to build the predictive model, and its performance was measured by the correlation coefficient, evaluated on the testing, training, and validation sets as well as on all data combined. The results show that the neural network was capable of reproducing permeability with accuracy in all cases: the calculated correlation coefficients for training, testing, and validation were 0.96273, 0.89991, and 0.87858, respectively. Generalization of the results to other fields can be made after examining new data, and a regional study might make it possible to study reservoir properties with cheap and very quickly constructed models.
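
A minimal sketch of the workflow above: a multi-layer perceptron maps well log responses to (log-)permeability, and performance is reported as a correlation coefficient. The four log curves and the target below are synthetic stand-ins for real well data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
logs = rng.normal(size=(300, 4))   # e.g. gamma ray, density, neutron, resistivity
perm = np.exp(logs @ np.array([0.8, -0.5, 0.6, 0.3]))  # synthetic permeability, mD
y = np.log10(perm)                 # train on log-permeability, a common practice

X_tr, X_te, y_tr, y_te = train_test_split(logs, y, test_size=0.25, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

r = np.corrcoef(y_te, mlp.predict(X_te))[0, 1]  # correlation coefficient
print(f"test correlation coefficient: {r:.3f}")
```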

Keywords: neural network, permeability, multilayer perceptron, well log

Procedia PDF Downloads 402
12186 Modeling of Glycine Transporters in Mammals Using the Probability Approach

Authors: K. S. Zaytsev, Y. R. Nartsissov

Abstract:

Glycine is one of the key inhibitory neurotransmitters in the central nervous system (CNS), and glycinergic transmission is highly dependent on appropriate glycine reuptake from the synaptic cleft. Glycine transporters (GlyT) of types 1 and 2 are the enzymes providing glycine transport back into neuronal and glial cells along with Na⁺ and Cl⁻ co-transport. The distribution and stoichiometry of GlyT1 and GlyT2 differ in their details, and GlyT2 is the more interesting for this research, as it takes glycine back up into neurons, whereas GlyT1 is located in glial cells. During GlyT2 activity, translocation of the amino acid is accompanied by the sequential binding of one chloride and three sodium ions (two sodium ions for GlyT1). In the present study, we developed a computer simulator of GlyT2 and GlyT1 activity, based on known experimental data, for quantitative estimation of membrane glycine transport. The functioning of a single protein was described using a probability approach in which each enzyme state is considered separately. The scheme of transporter functioning, realized as a sequence of elementary steps, makes it possible to account for each substrate association and dissociation event. Computer experiments using up-to-date kinetic parameters yielded the number of glycine molecules and Na⁺ and Cl⁻ ions translocated per time period. The flexibility of the developed software makes it possible to evaluate the pattern of glycine reuptake over time under different internal characteristics of enzyme conformational transitions. We investigated the behavior of the system over a wide range of the equilibrium constant (from 0.2 to 100), which has not been determined experimentally, and show that the constant significantly influences the glycine transfer process in the range from 0.2 to 10. Environmental conditions such as ion and glycine concentrations become decisive when the constant lies outside this range.
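
A minimal sketch of the probability approach above: the transporter is modeled as a chain of discrete states, simulated as a Markov jump process with Gillespie-style waiting times. The state names and rate constants are illustrative placeholders, not the study’s fitted kinetic parameters.

```python
import numpy as np

states = ["empty", "ions_bound", "gly_bound", "translocated"]
rates = {("empty", "ions_bound"): 50.0,       ("ions_bound", "empty"): 10.0,
         ("ions_bound", "gly_bound"): 30.0,   ("gly_bound", "ions_bound"): 5.0,
         ("gly_bound", "translocated"): 20.0, ("translocated", "empty"): 40.0}

rng = np.random.default_rng(6)
state, t, transported = "empty", 0.0, 0
while t < 10.0:                            # simulate 10 s of single-protein time
    moves = [(dst, k) for (src, dst), k in rates.items() if src == state]
    total = sum(k for _, k in moves)
    t += rng.exponential(1.0 / total)      # Gillespie-style waiting time
    probs = [k / total for _, k in moves]
    state = moves[rng.choice(len(moves), p=probs)][0]
    if state == "translocated":
        transported += 1                   # one glycine molecule moved inward
print("glycine molecules transported in 10 s:", transported)
```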

Keywords: glycine, inhibitory neurotransmitters, probability approach, single protein functioning

Procedia PDF Downloads 117
12185 A Sustainable Approach for Waste Management: Automotive Waste Transformation into High Value Titanium Nitride Ceramic

Authors: Mohannad Mayyas, Farshid Pahlevani, Veena Sahajwalla

Abstract:

Automotive shredder residue (ASR) is an industrial waste generated during the recycling of end-of-life vehicles. The large and increasing production volumes of ASR and its hazardous content have raised concerns worldwide, leading some countries to impose more restrictions on ASR disposal and encouraging researchers to find efficient solutions for ASR processing. Although a great deal of research work has been carried out, all proposed solutions, to our knowledge, remain commercially and technically unproven. While the volume of waste materials continues to increase, the production of materials from new sustainable sources has become of great importance. Advanced ceramic materials such as nitrides, carbides, and borides are widely used in a variety of applications; among these ceramics, a great deal of attention has recently been paid to titanium nitride (TiN) owing to its unique characteristics. In our study, we propose a new sustainable approach to ASR management in which TiN nanoparticles with an ideal particle size ranging from 200 to 315 nm can be synthesized as a by-product. In this approach, TiN is thermally synthesized by nitriding a pressed mixture of ASR incorporated with titanium oxide (TiO2). Results indicated that TiO2 influences and catalyses the degradation reactions of ASR and helps to achieve fast and full decomposition. In addition, the process resulted in TiN ceramic with several unique structures (porous nanostructured, polycrystalline, micro-spherical, and nano-sized structures) that were obtained simply by tuning the ratio of TiO2 to ASR, and a product with an appreciable TiN content of around 85% was achieved after only one hour of nitridation at 1550 °C.

Keywords: automotive shredder residue, nano-ceramics, waste treatment, titanium nitride, thermal conversion

Procedia PDF Downloads 295
12184 A Supervised Approach for Word Sense Disambiguation Based on Arabic Diacritics

Authors: Alaa Alrakaf, Sk. Md. Mizanur Rahman

Abstract:

Over the last two decades, Arabic natural language processing (ANLP) has become increasingly important. One of the key issues in ANLP is ambiguity: in Arabic, different pronunciations of the same written word may have different meanings. Ambiguity also has an impact on the effectiveness and efficiency of machine translation (MT), and it has limited the usefulness and accuracy of translation from Arabic to English. The lack of Arabic resources makes the ambiguity problem more complicated, and the orthographic level of representation cannot specify the exact meaning of a word. This paper looks at the diacritics of the Arabic language and uses them to disambiguate words. The proposed word sense disambiguation approach uses a diacritizer application to diacritize Arabic text and then finds the most accurate sense of an ambiguous word using a Naïve Bayes classifier. Our experimental study shows that using Arabic diacritics with a Naïve Bayes classifier enhances the accuracy of choosing the appropriate sense by 23% and also decreases ambiguity in machine translation.
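
A minimal sketch of the disambiguation step above: diacritized context strings feed a multinomial Naïve Bayes classifier that picks the sense. The tiny training set uses the classic عِلْم ('science') versus عَلَم ('flag') pair and is a hypothetical illustration, not the paper’s corpus; the whitespace tokenizer keeps the diacritics attached to each word.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: diacritized contexts labeled with senses.
contexts = ["دَرَسَ عِلْم الحاسوب",        # 'ilm: science
            "رَفَعَ عَلَم البلاد",          # 'alam: flag
            "تَخَصَّصَ في عِلْم الفيزياء",  # 'ilm: science
            "عَلَم أَخْضَر يرفرف"]          # 'alam: flag
senses = ["science", "flag", "science", "flag"]

# Whitespace tokenization keeps diacritics attached to each word form.
model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), MultinomialNB())
model.fit(contexts, senses)
print(model.predict(["أَحَبَّ عِلْم الرياضيات"]))  # expected: ['science']
```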

Keywords: Arabic natural language processing, machine learning, machine translation, Naïve Bayes classifier, word sense disambiguation

Procedia PDF Downloads 356