Search results for: search algorithms
676 Dynamic Reliability for a Complex System and Process: Application on Offshore Platform in Mozambique
Authors: Raed KOUTA, José-Alcebiades-Ernesto HLUNGUANE, Eric Châtele
Abstract:
The search for and exploitation of new fossil energy resources is taking place in the context of the gradual depletion of existing deposits. Despite the adoption of international targets to combat global warming, the demand for fuels continues to grow, contradicting the movement towards an energy-efficient society. The increase in the share of offshore in global hydrocarbon production tends to compensate for the depletion of terrestrial reserves, thus constituting a major challenge for the players in the sector. Through the economic potential it represents, and the energy independence it provides, offshore exploitation is also a challenge for States such as Mozambique, which have large maritime areas and whose environmental wealth must be considered. The exploitation of new reserves on economically viable terms depends on available technologies. The development of deep and ultra-deep offshore requires significant research and development efforts. Progress has also been made in managing the multiple risks inherent in this activity. Our study proposes a reliability approach to develop products and processes designed to live at sea. Indeed, the context of an offshore platform requires highly reliable solutions to overcome the difficulties of access to the system for regular maintenance and quick repairs, and these solutions must resist deterioration and degradation processes. One of the characteristics of failures that we consider is the actual conditions of use, which are considered 'extreme.' These conditions depend on time and on the interactions between the different causes. These are the two factors that give the degradation process its dynamic character, hence the need to develop dynamic reliability models. Our work highlights mathematical models that can explicitly manage interactions between components and process variables. These models are accompanied by numerical resolution methods that help to structure a dynamic reliability approach in a physical and probabilistic context. The application developed makes it possible to evaluate the reliability, availability, and maintainability of a floating storage and unloading platform for liquefied natural gas production.
Keywords: dynamic reliability, offshore platform, stochastic process, uncertainties
Procedia PDF Downloads 120
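As an aside for readers, the sketch below illustrates the kind of dynamic reliability computation the abstract describes: a Monte Carlo estimate of reliability R(t), the probability that accumulated degradation stays below a failure threshold. The gamma-process model, parameter values, and threshold here are assumptions for demonstration only, not the authors' models.

```python
# Illustrative Monte Carlo estimate of dynamic reliability for a degrading
# component: R(t) = P(accumulated degradation at time t < failure threshold).
# The gamma-process model and all parameter values are toy assumptions.
import numpy as np

rng = np.random.default_rng(42)

def simulate_reliability(n_paths=10_000, horizon_years=20.0, dt=0.1,
                         shape_rate=1.2, scale=0.05, threshold=1.0):
    """Estimate R(t) for a stationary gamma degradation process."""
    n_steps = int(horizon_years / dt)
    # Independent gamma-distributed degradation increments per time step.
    increments = rng.gamma(shape=shape_rate * dt, scale=scale,
                           size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)       # degradation trajectories
    times = np.arange(1, n_steps + 1) * dt
    reliability = (paths < threshold).mean(axis=0)  # surviving fraction
    return times, reliability

times, R = simulate_reliability()
print(f"R(10 years) ~ {R[np.searchsorted(times, 10.0)]:.3f}")
```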
675 Infrared Spectroscopy in Tandem with Machine Learning for Simultaneous Rapid Identification of Bacteria Isolated Directly from Patients' Urine Samples and Determination of Their Susceptibility to Antibiotics
Authors: Mahmoud Huleihel, George Abu-Aqil, Manal Suleiman, Klaris Riesenberg, Itshak Lapidot, Ahmad Salman
Abstract:
Urinary tract infections (UTIs) are considered to be the most common bacterial infections worldwide, caused mainly by Escherichia (E.) coli (about 80%), Klebsiella pneumoniae (about 10%), and Pseudomonas aeruginosa (about 6%). Although antibiotics are considered the most effective treatment for bacterial infectious diseases, unfortunately, many bacteria have already developed resistance to the majority of the commonly available antibiotics. Therefore, it is crucial to identify the infecting bacteria and to determine their susceptibility to antibiotics in order to prescribe effective treatment. Classical methods are time-consuming, requiring ~48 hours to determine bacterial susceptibility. Thus, it is highly urgent to develop a new method that can significantly reduce the time required for determining both the infecting bacterium at the species level and its susceptibility to antibiotics. Fourier-Transform Infrared (FTIR) spectroscopy is well known as a sensitive and rapid method, which can detect minor molecular changes in the bacterial genome associated with the development of resistance to antibiotics. The main goal of this study is to examine the potential of FTIR spectroscopy, in tandem with machine learning algorithms, to identify the infecting bacteria at the species level and to determine E. coli susceptibility to different antibiotics directly from patients' urine in about 30 minutes. For this goal, 1600 different E. coli isolates were obtained from different patients' urine samples, measured by FTIR, and analyzed using different machine learning algorithms such as Random Forest, XGBoost, and CNN. We achieved 98% success in isolate-level identification and 89% accuracy in susceptibility determination.
Keywords: urinary tract infections (UTIs), E. coli, Klebsiella pneumoniae, Pseudomonas aeruginosa, bacterial susceptibility to antibiotics, infrared microscopy, machine learning
Procedia PDF Downloads 170
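A minimal sketch of the kind of spectra-classification pipeline described, assuming stand-in FTIR absorbance vectors; the study's real preprocessing, feature set, and hyperparameters may differ.

```python
# Train a Random Forest on per-isolate FTIR spectra and cross-validate.
# The synthetic spectra and labels below are placeholders for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_isolates, n_wavenumbers = 400, 900            # assumed spectral grid size
X = rng.normal(size=(n_isolates, n_wavenumbers))  # stand-in FTIR spectra
y = rng.integers(0, 2, size=n_isolates)           # 0 = susceptible, 1 = resistant

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2%}")
```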
674 The Role and Effects of Communication on Occupational Safety: A Review
Authors: Pieter A. Cornelissen, Joris J. Van Hoof
Abstract:
The interest in improving occupational safety started almost simultaneously with the beginning of the Industrial Revolution. Yet, it was not until the late 1970s that the role of communication was considered in scientific research regarding occupational safety. In recent years the importance of communication as a means to improve occupational safety has increased, not only because communication might have a direct effect on safety performance and safety outcomes, but also because it can be viewed as a major component of other important safety-related elements (e.g., training, safety meetings, leadership). And while safety communication is an increasingly important topic in research, its operationalization is often vague and differs among studies. This is problematic not only when comparing results, but also when applying these results to practice and the work floor. By means of an in-depth analysis, building on an existing dataset, this review aims to overcome these problems. The initial database search yielded 25,527 articles, which was reduced to a research corpus of 176 articles. Focusing on the 37 articles of this corpus that addressed communication (related to safety outcomes and safety performance), the current study provides a comprehensive overview of the role and effects of safety communication and outlines the conditions under which communication contributes to a safer work environment. The study shows that in the literature a distinction is commonly made between safety communication (i.e., the exchange or dissemination of safety-related information) and feedback (i.e., a reactive form of communication). And although there is a consensus among researchers that both communication and feedback positively affect safety performance, there is a debate about the directness of this relationship. Whereas some researchers assume a direct relationship between safety communication and safety performance, others state that this relationship is mediated by safety climate. One of the key findings is that despite the strongly present view that safety communication is a formal and top-down safety management tool, researchers stress the importance of open communication that encourages and allows employees to express their worries, experiences, and views, and to share information. This raises questions with regard to other directions (e.g., bottom-up, horizontal) and forms of communication (e.g., informal). The current review proposes a framework to overcome the often vague and differing operationalizations of safety communication. The proposed framework can be used to characterize safety communication in terms of stakeholders, direction, and characteristics of communication (e.g., medium usage).
Keywords: communication, feedback, occupational safety, review
Procedia PDF Downloads 302
673 Climate Species Lists: A Combination of Methods for Urban Areas
Authors: Andrea Gion Saluz, Tal Hertig, Axel Heinrich, Stefan Stevanovic
Abstract:
Higher temperatures, seasonal changes in precipitation, and extreme weather events are increasingly affecting trees. To counteract the increasing challenges facing urban trees, strategies are being sought both to preserve existing tree populations and to prepare for the coming years. One such strategy is targeted climate tree species selection: the search is on for species or varieties that can cope with the new climatic conditions. Many efforts in German-speaking countries deal with this in detail, such as the tree lists of the German Conference of Garden Authorities (GALK), the project Stadtgrün 2021, or the Climate Species Matrix of Prof. Dr. Roloff. In this context, different methods for a sound species selection are offered. One possibility is to select certain physiological attributes that indicate the climate resilience of a species. To calculate the dissimilarity of the present climate of different geographic regions in relation to the future climate of any city, a weighted (standardized) Euclidean distance (SED) over seasonal climate values is calculated for each region of the Earth. The calculation was performed in the QGIS geographic information system, using global raster datasets of monthly climate values for the 1981-2010 standard period. Data from a European forest inventory were used to identify tree species growing in the calculated analogue climate regions. The inventory used is a compilation of georeferenced point data at a 1 km grid resolution on the occurrence of tree species in 21 European countries. In this project, the results of the methodological application are shown for the city of Zurich for the year 2060. In the first step, analogue climate regions were identified based on projected climate values for the Kirche Fluntern (ZH) measuring station. In a further step, the methods mentioned above were applied to generate tree species lists for the city of Zurich. These lists were then qualitatively evaluated with respect to the suitability of the different tree species for the Zurich area, to generate a cleaned and thus usable list of possible future tree species.
Keywords: climate change, climate region, climate tree, urban tree
Procedia PDF Downloads 106
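The weighted (standardized) Euclidean distance is simple to express in code. Below is a minimal sketch, assuming toy seasonal climate vectors and weights; the actual study computes this per-pixel over global raster datasets in QGIS.

```python
# Weighted standardized Euclidean distance between a city's projected
# seasonal climate and the present-day climate of candidate regions.
# Variable names, weights, and the toy data are illustrative assumptions.
import numpy as np

def weighted_sed(target, candidates, std, weights):
    """SED between target vector and each row of candidates."""
    diff = (candidates - target) / std          # standardize each variable
    return np.sqrt((weights * diff**2).sum(axis=1))

# Toy seasonal climate vectors: [T_summer, T_winter, P_summer, P_winter]
target = np.array([24.5, 2.1, 310.0, 220.0])    # hypothetical Zurich 2060
candidates = np.array([[25.0, 3.0, 300.0, 200.0],
                       [18.0, -4.0, 400.0, 350.0]])
std = candidates.std(axis=0, ddof=1)            # per-variable spread
weights = np.array([1.0, 1.0, 0.5, 0.5])        # assumed weighting

print(weighted_sed(target, candidates, std, weights))  # smaller = more analogue
```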
672 Determining the Presence of Brucella abortus Antibodies by the Indirect ELISA Method in Bovine Bulk Milk and Risk Factors in the Peri-Urban Zones of Bamenda, Cameroon
Authors: Cha-ah C. N., Awah N. J., Mouiche M. M. M.
Abstract:
Brucellosis is a neglected zoonotic disease of animals and man caused by bacteria of the genus Brucella. Though eradicated in some parts of the world, it remains endemic in sub-Saharan Africa, including Cameroon. The aim of this study was to contribute to the epidemiology of brucellosis in the North-West region of Cameroon by detecting the presence of anti-Brucella antibodies in bovine bulk milk, as this serves as a route of transmission from animals to man. A cross-sectional study was conducted to determine the prevalence of Brucella abortus antibodies in bovine bulk milk in the peri-urban zones of Bamenda. One hundred bulk milk samples were collected from 100 herds and tested by the milk I-ELISA test. The study revealed the presence of anti-Brucella abortus antibodies in bovine bulk milk and showed that bovine brucellosis is widespread in animal production systems in this area. The animal infection pressure in these systems has remained strong due to movement of livestock in search of pasture, co-existence of animal husbandry, communal sharing of grazing land, concentration of animals around water points, abortions in production systems, the locality of production systems, and failure to quarantine newly introduced animals. The circulation of Brucella abortus antibodies in cattle farms recorded in the study reveals potential public health implications and suggests the economic importance of brucellosis to the cattle industry in the Northwest region of Cameroon. The risk of re-emergence and transmission of brucellosis is evident as a result of the co-existence of animal husbandry activities and socio-cultural activities that promote brucellosis transmission. Well-designed, countrywide, evidence-based studies of brucellosis are needed. These could help to generate reliable frequency and potential impact estimates, to identify Brucella reservoirs, and to propose control strategies of proven efficacy.
Keywords: brucellosis, bulk milk, northwest region Cameroon, prevalence
Procedia PDF Downloads 147
671 Contribution of Artificial Intelligence in the Studies of Natural Compounds Against SARS-CoV-2
Authors: Salah Belaidi
Abstract:
We have carried out extensive and in-depth research into bioactive compounds from Algerian plants, selecting 50 ligands from Algerian medicinal plants. Several compounds used in herbal medicine were drawn using the MarvinSketch software. We determined the three-dimensional structures of the ligands with the MMFF94 force field in order to prepare them for molecular docking. The 3D protein structure of the SARS-CoV-2 main protease was taken from the Protein Data Bank. We used the AutoDock Vina software to perform molecular docking. Hydrogen atoms were added during the molecular docking process, and all the torsional bonds of the ligands were set using the ligand module in the AutoDock software. The COVID-19 main protease (Mpro) is a key enzyme that plays a vital role in mediating viral transcription and replication, so it is a very attractive drug target for SARS-CoV-2. In this work, the biologically active compounds present in the selected medicinal plants were evaluated as effective inhibitors of the COVID-19 protease enzyme, with in-depth computational molecular docking calculations using the AutoDock Vina software. The top 7 ligands, Phloroglucinol, Afzelin, Myricetin-3-O-rutinoside, Tricin 7-neohesperidoside, Silybin, Silychristin, and Kaempferol, were selected from among the 50 molecules studied from Algerian medicinal plants; their selection is based on binding energies that are low relative to the reference molecule, with binding affinities of -9.3, -9.3, -9, -8.9, -8.5, -8.3, and -8.3 kcal mol-1, respectively. Then, we analyzed the ADME properties of the best 7 ligands using the SwissADME web server. Two ligands (Silybin, Silychristin) were found to be potential candidates for the discovery and design of novel drug inhibitors of the SARS-CoV-2 protease enzyme. The stability of the two ligands in complex with the Mpro protease was validated by molecular dynamics simulation; both revealed stable trajectories in the RMSD and RMSF analyses, showing coherent molecular interactions throughout the simulations. Finally, we conclude that the Silybin ligand forms a more stable complex with the Mpro protease than the Silychristin ligand.
Keywords: COVID-19, medicinal plants, molecular docking, ADME properties, molecular dynamics
Procedia PDF Downloads 35
670 A Review of Lexical Retrieval Intervention in Primary Progressive Aphasia and Alzheimer's Disease: Mechanisms of Change, Cognition, and Generalisation
Authors: Ashleigh Beales, Anne Whitworth, Jade Cartwright
Abstract:
Background: While significant benefits of lexical retrieval intervention are evident within the Primary Progressive Aphasia (PPA) and Alzheimer's disease (AD) literature, an understanding of the mechanisms that underlie change or improvement is limited. Change mechanisms have been explored in the non-progressive post-stroke literature, which may offer insight into how interventions effect change in progressive language disorders. The potential influences of cognitive factors may also play a role here, interacting with the aims of intervention. Exploring how such processes have been applied is likely to grow our understanding of how interventions have, or have not, been effective, and how and why generalisation is likely, or not, to occur. Aims: This review of the literature aimed to (1) investigate the proposed mechanisms of change which underpin lexical interventions, mapping the PPA and AD lexical retrieval literature to theoretical accounts of mechanisms that underlie change within the broader intervention literature, (2) identify whether and which nonlinguistic cognitive functions have been engaged in intervention with these populations and any proposed influence, and (3) explore evidence of linguistic generalisation, with particular reference to change mechanisms employed in interventions. Main contribution: A search of Medline, PsycINFO, and CINAHL identified 36 articles that reported data for individuals with PPA or AD following lexical retrieval intervention. A review of the mechanisms of change identified 10 studies that used stimulation, 21 studies that utilised relearning, three studies that drew on reorganisation, and two studies that used cognitive-relay. Significant treatment gains, predominantly based on linguistic performance measures, were reported for all client groups for each of the proposed mechanisms. Reorganisation and cognitive-relay change mechanisms were only targeted in PPA. Eighteen studies incorporated nonlinguistic cognitive functions in intervention; these were limited to autobiographical memory (16 studies), episodic memory (three studies), or both (one study). Linguistic generalisation outcomes were inconsistently reported in PPA and AD studies. Conclusion: This review highlights that individuals with PPA and AD may benefit from lexical retrieval intervention, irrespective of the mechanism of change. Thorough application of a theory of intervention is required to gain a greater understanding of the change mechanisms, as well as the interplay of nonlinguistic cognitive functions.
Keywords: Alzheimer's disease, lexical retrieval, mechanisms of change, primary progressive aphasia
Procedia PDF Downloads 203
669 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm
Authors: Lydia Novozhilova, Vladimir Urazhdin
Abstract:
An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessels and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space; every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined on this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by implementing the Support Vector Machine (SVM) classification algorithm of machine learning for identification of the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating admissible and inadmissible sets of design parameters with maximal margins. Then optimization of the vessel parameters within the feasibility domain is performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. As a functional constraint, the von Mises stress criterion is used, but any other stability constraint admitting a mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier
Procedia PDF Downloads 327
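A minimal sketch of the two-step approach, assuming a toy feasibility rule and a mass-like objective: (1) an SVM learns the feasibility boundary from labeled simulated designs, (2) a constrained optimizer minimizes the objective while keeping the SVM decision function positive. The feasibility rule, bounds, and objective are illustrative assumptions, not the paper's model.

```python
# Step 1: learn the feasibility domain; Step 2: optimize inside it.
import numpy as np
from sklearn.svm import SVC
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated designs: e.g. wall thickness t and stiffener spacing s.
X = rng.uniform([1.0, 50.0], [10.0, 500.0], size=(500, 2))
feasible = (X[:, 0] / X[:, 1] > 0.012).astype(int)   # stand-in stability rule
svm = SVC(kernel="rbf", C=10.0).fit(X, feasible)

# Minimize a material-use proxy subject to staying in the learned domain
# (SVM decision function > 0 means "predicted feasible").
objective = lambda x: x[0] * x[1]
constraint = {"type": "ineq",
              "fun": lambda x: svm.decision_function(x.reshape(1, -1))[0]}
res = minimize(objective, x0=[5.0, 200.0], constraints=[constraint],
               bounds=[(1.0, 10.0), (50.0, 500.0)], method="SLSQP")
print("optimal parameters:", res.x, "objective:", res.fun)
```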
668 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification
Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran
Abstract:
The brain is an important organ in our body since it is responsible for the majority of actions, such as vision and memory. However, different diseases, such as Alzheimer's and tumors, can affect the brain and lead to partial or full disorder. Regular diagnoses are necessary as a preventive measure and can help doctors detect a possible problem early and apply the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are proposed for the diagnosis of brain tumors. The most powerful and widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate any tumor in the brain and decide on the appropriate and needed treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze the tumor. In fact, a wide range of Computer Aided Diagnosis (CAD) tools incorporating image processing algorithms have been proposed and are used by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification, and feature extraction. Our proposed CAD consists of three main parts. Firstly, we load the brain MRI. Secondly, a robust technique for brain tumor extraction is proposed, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property, which is why it was applied to MRI images at different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback since it requires huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the different extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface.
Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM
Procedia PDF Downloads 249
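A minimal sketch of the DWT-then-PCA-then-SVM pipeline outlined above, on stand-in image data. The wavelet choice, decomposition level, and PCA size are assumptions; the authors' MATLAB implementation may differ in its details.

```python
# Extract wavelet features from each image, compress with PCA, classify with SVM.
# The random "MRI slices" and labels below are placeholders for illustration.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
images = rng.random((60, 128, 128))            # stand-in MRI slices
labels = rng.integers(0, 2, size=60)           # 0 = benign, 1 = malignant

def dwt_features(img, wavelet="db4", level=3):
    """Flatten the level-3 approximation coefficients as a feature vector."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    return coeffs[0].ravel()                   # cA3: low-frequency content

X = np.stack([dwt_features(img) for img in images])
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
model.fit(X, labels)
print("training accuracy:", model.score(X, labels))
```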
667 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models
Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti
Abstract:
In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
Keywords: Indian Premier League (IPL), cricket, score prediction, machine learning, support vector machines (SVM), XGBoost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics
Procedia PDF Downloads 53
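A minimal sketch of the model-comparison setup described, on synthetic match data, including the +/- 10 runs accuracy criterion; the study's real feature set and model roster are richer than this.

```python
# Fit several regressors and report the fraction of predictions within 10 runs.
# The features and the toy score formula are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
# Stand-in features: runs so far, wickets down, overs bowled, current run rate
X = rng.random((1000, 4)) * [120, 10, 20, 12]
y = X[:, 0] + (20 - X[:, 2]) * X[:, 3] + rng.normal(0, 15, 1000)  # toy final score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {"linear": LinearRegression(),
          "knn": KNeighborsRegressor(n_neighbors=7),
          "random_forest": RandomForestRegressor(random_state=0)}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    pred = m.predict(X_te)
    within10 = np.mean(np.abs(pred - y_te) <= 10)   # +/- 10 runs criterion
    print(f"{name}: {within10:.2%} within 10 runs")
```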
666 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification, and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Compared to UAVs, flying birds show similar velocities, radar cross-sections, and, in general, similar characteristics. Given the absence of a single feature able to distinguish UAVs from birds, this paper uses a multiple-feature approach, in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field, related to different UAVs and birds performing various trajectories, were used to extract specifically designed target movement-related features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with suitable classification accuracy (higher than 95%).
Keywords: birds, classification, machine learning, UAVs
Procedia PDF Downloads 222
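A minimal sketch of genetic-algorithm feature selection in the spirit of the approach described, with a bitmask population, truncation selection, one-point crossover, and bit-flip mutation. The fitness function, GA parameters, and synthetic track features are assumptions.

```python
# Evolve boolean feature masks; fitness = cross-validated classifier accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 12))        # stand-in track features (velocity, ...)
y = rng.integers(0, 2, size=300)      # 0 = bird, 1 = UAV

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, 12)).astype(bool)      # random bitmasks
for gen in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]               # truncation selection
    cut = rng.integers(1, 11)                             # one-point crossover
    children = np.concatenate([parents[:5, :cut], parents[5:, cut:]], axis=1)
    children ^= rng.random(children.shape) < 0.05          # bit-flip mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```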
665 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images
Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu
Abstract:
The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images using image processing algorithms is an effective way to monitor these changes. However, producing optimal water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three different level set models, including the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model, for extracting lake contours. Experiments indicate that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF, and that it produces desirable contour lines even when there are 'holes' in water regions, such as islands in a lake. Therefore, the RSF model is applied to extract lake contours from Landsat satellite images. Four temporal Landsat satellite images, from the years 2000, 2005, 2010, and 2014, are used in our study. All of them were acquired in May, with the same path/row (121/036), covering Xuzhou City, Jiangsu Province, China. Firstly, the near infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information update, and linear stretching is also applied in order to distinguish water from other land cover types. Then, for the first temporal image, acquired in 2000, lake contours are extracted via the RSF model initialized with user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as initial values, lake contours are updated for the current temporal image by means of the RSF model. Meanwhile, the changed and unchanged lakes are also detected. The results show that great changes have taken place in two lakes, i.e., Dalong Lake and Panan Lake, and that RSF can effectively extract and update lake contours using multi-temporal satellite images.
Keywords: level set model, multi-temporal image, lake contour extraction, contour update
Procedia PDF Downloads 366
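The RSF model is not available in common Python libraries, but scikit-image ships a morphological variant of the Chan-Vese model that the paper uses as a comparison baseline; the sketch below shows level-set water extraction on a NIR band using that substitute. The file name and parameters are assumptions.

```python
# Level-set water extraction on a (hypothetical) Landsat NIR band using the
# morphological Chan-Vese model from scikit-image as a stand-in for RSF.
import numpy as np
from skimage import exposure, io
from skimage.segmentation import morphological_chan_vese

nir = io.imread("landsat_nir_band.tif").astype(float)        # hypothetical input
nir = exposure.rescale_intensity(nir, out_range=(0.0, 1.0))  # linear stretch

# Evolve the level set for a fixed number of iterations from a checkerboard
# initialization; the result is a binary water/non-water mask.
mask = morphological_chan_vese(nir, 200, init_level_set="checkerboard",
                               smoothing=2)
print("water fraction:", mask.mean())
```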
664 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual Layer Spectral Computed Tomography
Authors: O’Day Luke
Abstract:
Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat, and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. The aortic attenuation, however, was somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.
Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison
Procedia PDF Downloads 141
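A minimal sketch of the ROI statistics reported (mean TNC-VNC difference and the share of comparisons within 15 HU), on synthetic HU values standing in for the 693 measured ROIs.

```python
# Compute the TNC-vs-VNC attenuation statistics on simulated ROI data.
# The tissue list and simulated bias/noise are assumptions for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
df = pd.DataFrame({
    "tissue": np.repeat(["aorta", "liver", "kidney", "muscle", "fat"], 60),
    "tnc_hu": rng.normal(50, 20, 300),
})
df["vnc_hu"] = df["tnc_hu"] + rng.normal(5.5, 9.1, 300)   # simulated VNC bias

diff = df["tnc_hu"] - df["vnc_hu"]
print(f"mean difference: {diff.mean():.1f} +/- {diff.std():.1f} HU")
print(f"within 15 HU: {(diff.abs() < 15).mean():.1%}")
# Mean absolute difference per tissue type:
print(df.assign(absdiff=diff.abs()).groupby("tissue")["absdiff"].mean())
```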
663 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance
Authors: Sokkhey Phauk, Takeo Okazaki
Abstract:
The challenging task in educational institutions is to maximize the number of high-performing students and minimize the failure rate of poor-performing students. An effective way to approach this task is to understand student learning patterns and their highly influential factors, and to obtain early predictions of student learning outcomes in time to set up policies for improvement. Educational data mining (EDM) is an emerging disciplinary field of data mining, statistics, and machine learning concerned with extracting useful knowledge and information for the sake of improvement and development in the education environment. The aim of this work is to propose EDM techniques and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and high-performing models are subsequently developed to achieve better performance. The hybrid random forest (Hybrid RF) produces the most successful classification. For the purpose of intervention and improving learning outcomes, a feature selection method called MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance; the obtained dominant set is then used to inform intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is subsequently developed for educational stakeholders to obtain early predictions of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system. The system is used to help educational stakeholders and related individuals intervene and improve student performance.
Keywords: academic performance prediction system, educational data mining, dominant factors, feature selection method, prediction model, student performance
Procedia PDF Downloads 106
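A minimal sketch of a MICHI-style selection: mutual information and chi-square scores are each ranked, the ranks combined, and the top features kept. The summed-rank combination here is an assumption; the paper defines its own combination of the ranked scores.

```python
# Rank features by MI and by chi-square, combine the ranks, keep the best.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)
X_pos = MinMaxScaler().fit_transform(X)       # chi2 requires non-negative X

# Double argsort turns scores into ranks (0 = best score).
mi_rank = np.argsort(np.argsort(-mutual_info_classif(X_pos, y, random_state=0)))
chi_rank = np.argsort(np.argsort(-chi2(X_pos, y)[0]))
combined = mi_rank + chi_rank                 # lower = better on both criteria

dominant = np.argsort(combined)[:8]           # keep the 8 dominant features
print("dominant feature indices:", sorted(dominant))
```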
662 Improved Intracellular Protein Degradation System for Rapid Screening and Quantitative Study of Essential Fungal Proteins in Biopharmaceutical Development
Authors: Patarasuda Chaisupa, R. Clay Wright
Abstract:
The selection of appropriate biomolecular targets is a crucial aspect of biopharmaceutical development. Auxin-inducible degron (AID) degradation technology has demonstrated remarkable potential for efficiently and rapidly degrading target proteins, thereby enabling the identification and acquisition of drug targets. The AID system also offers a viable method to deplete specific proteins, particularly in cases where the degradation pathway has not been exploited, or when proteins or the cell environment adapt to compensate for a mutation or gene knockout. In this study, we have engineered an improved AID system tailored to deplete proteins of interest. This AID construct combines the auxin-responsive E3 ubiquitin ligase binding domain, AFB2, and the substrate degron, IAA17, fused to the target genes. Essential genes of fungi with the lowest percent amino acid similarity to human and plant orthologs, according to the Basic Local Alignment Search Tool (BLAST), were cloned into the AID construct in S. cerevisiae (AID-tagged strains) using a modular yeast cloning toolkit for multipart assembly and direct genetic modification. Each E3 ubiquitin ligase and IAA17 degron was fused to a fluorescent protein, allowing for real-time monitoring of protein levels in response to different auxin doses via cytometry. Our AID system exhibited high sensitivity, with an EC50 value of 0.040 µM (SE = 0.016) for AFB2, enabling the specific promotion of IAA17::target protein degradation. Furthermore, we demonstrate how this improved AID system enhances quantitative functional studies of various proteins in fungi. The advancements made in auxin-inducible protein degradation in this study offer a powerful approach to investigating critical target protein viability in fungi, screening protein targets for drugs, and regulating intracellular protein abundance, thus revolutionizing the study of protein function underlying a diverse range of biological processes.
Keywords: synthetic biology, bioengineering, molecular biology, biotechnology
Procedia PDF Downloads 92
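A minimal sketch of how an EC50 like the one reported can be estimated: fitting a four-parameter log-logistic (Hill) curve to fluorescence readings across auxin doses. The synthetic dose-response values below are assumptions, not the study's data.

```python
# Fit a Hill curve to dose-response data and report EC50 with its standard error.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ec50, slope):
    """Four-parameter dose-response curve for a degradation readout."""
    return bottom + (top - bottom) / (1.0 + (dose / ec50) ** slope)

doses = np.array([0.001, 0.004, 0.016, 0.063, 0.25, 1.0, 4.0])   # µM auxin
signal = np.array([0.98, 0.95, 0.80, 0.45, 0.20, 0.08, 0.05])    # rel. fluorescence

params, cov = curve_fit(hill, doses, signal, p0=[0.0, 1.0, 0.05, 1.0])
ec50, ec50_se = params[2], np.sqrt(np.diag(cov))[2]
print(f"EC50 ~ {ec50:.3f} µM (SE ~ {ec50_se:.3f})")
```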
661 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms
Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano
Abstract:
In this article, we deal with a variant of the classical course timetabling problem that has practical application in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies; a student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time, and resource constraints. Moreover, there are some prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, solving it with a general-purpose solver is feasible for small instances only, while solving real-life-sized instances of the model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases allows achieving an improvement on the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
Keywords: heuristic, MIP model, remedial course, school, timetabling
Procedia PDF Downloads 605
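A toy MIP in the spirit of the offer-selection part of the problem, assuming a budget and prerequisite constraints; the paper's full model also covers timetabling and many further constraints. The data and the PuLP formulation are illustrative assumptions.

```python
# Choose which teaching units to activate under a budget, respecting
# prerequisites, to maximize the demand of under-prepared students served.
import pulp

units = ["A1", "A2", "B1"]
cost = {"A1": 3, "A2": 2, "B1": 4}            # e.g. teacher-hours needed
demand = {"A1": 30, "A2": 12, "B1": 25}       # students each unit would serve
prereq = [("A2", "A1")]                       # A2 can run only if A1 runs
budget = 7

prob = pulp.LpProblem("training_offer", pulp.LpMaximize)
x = pulp.LpVariable.dicts("activate", units, cat="Binary")
prob += pulp.lpSum(demand[u] * x[u] for u in units)           # objective
prob += pulp.lpSum(cost[u] * x[u] for u in units) <= budget   # budget limit
for later, earlier in prereq:
    prob += x[later] <= x[earlier]                            # prerequisite

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({u: int(x[u].value()) for u in units},
      "served:", pulp.value(prob.objective))
```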
660 ROSgeoregistration: Aerial Multi-Spectral Image Simulator for the Robot Operating System
Authors: Andrew R. Willis, Kevin Brink, Kathleen Dipple
Abstract:
This article describes a software package called ROSgeoregistration, intended for use with the Robot Operating System (ROS) and the Gazebo 3D simulation environment. ROSgeoregistration provides tools for the simulation, testing, and deployment of aerial georegistration algorithms and is available at github.com/uncc-visionlab/rosgeoregistration. A model creation package is provided which downloads multi-spectral images from the Google Earth Engine database and, if necessary, incorporates these images into a single, possibly very large, reference image. Additionally, a Gazebo plugin which uses the real-time sensor pose and image formation model to generate simulated imagery using the specified reference image is provided, along with related plugins for UAV-relevant data. The novelty of this work is threefold: (1) this is the first system to link the massive multi-spectral imaging database of Google's Earth Engine to the Gazebo simulator, (2) this is the first example of a system that can simulate geospatially and radiometrically accurate imagery from multiple sensor views of the same terrain region, and (3) integration with other UAS tools creates a new holistic UAS simulation environment to support UAS system and subsystem development where real-world testing would generally be prohibitive. Sensed imagery and ground-truth registration information are published to client applications, which can receive imagery synchronously with telemetry from other payload sensors, e.g., IMU, GPS/GNSS, barometer, and windspeed sensor data. To highlight functionality, we demonstrate ROSgeoregistration for simulating Electro-Optical (EO) and Synthetic Aperture Radar (SAR) image sensors and an example use case for developing and evaluating image-based UAS position feedback, i.e., pose for image-based Guidance Navigation and Control (GNC) applications.
Keywords: EO-to-EO, EO-to-SAR, flight simulation, georegistration, image generation, robot operating system, vision-based navigation
Procedia PDF Downloads 104
659 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification
Authors: Hung-Sheng Lin, Cheng-Hsuan Li
Abstract:
Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique: for each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term 'nearest proportion' used here considers both the local information and other, more global information. With these settings, the effect of overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations. Hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Within the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction
Procedia PDF Downloads 344
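KDNP itself is the authors' method and is not sketched here, but the kernel-extension pattern the abstract builds on is easy to illustrate with one of the named baselines, kernel PCA, on toy data where a linear projection cannot separate the classes.

```python
# Contrast linear PCA with kernel PCA on concentric-circle data: the RBF
# kernel "unfolds" the nonlinear structure that a linear method cannot.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear = PCA(n_components=2).fit_transform(X)
kernel = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# Class separation along the first extracted component, in units of its spread.
for name, Z in [("PCA", linear), ("KPCA", kernel)]:
    sep = abs(Z[y == 0, 0].mean() - Z[y == 1, 0].mean()) / Z[:, 0].std()
    print(f"{name}: class separation along 1st component = {sep:.2f}")
```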
658 Children of Quarantine: A Post COVID-19 Mental Health Dilemma
Authors: Salman Abdul Majeed, Vidur Solanki, Ruqiya Shama Tareen
Abstract:
BACKGROUND: The COVID-19 pandemic has affected the way of living as we have known it for all strata of society. While disease containment measures imposed by governmental agencies have been instrumental in controlling the spread of the virus, they have had profound collateral impacts on all populations. However, the disruption caused in the lives of one segment of the population has been far more damaging than most others: the emotional wellbeing of our child and adolescent (C&A) populations. This impact was even more pronounced in children who already suffered from neurodevelopmental or psychiatric disorders. In particular, school closures have not only led to profound social isolation, but also to negative impacts on normal developmental opportunities and interruptions in mental health services obtained through school systems. It is too soon to understand the full impact of quarantine, isolation, the stress of social detachment, and fear of the pandemic, but we have already started to see the devastating impact on C&A. This review intends to shed light on the current understanding of the psychiatric wellbeing of C&A during the COVID-19 pandemic. METHOD: A literature search was carried out utilizing the key words COVID-19 and children, quarantine and children, social isolation, loneliness, pandemic stress and children, mental health of children, and disease containment measures. Over 200 articles were identified, of which 81 were included in this review. RESULTS: The disruption caused by COVID-19 in the lives of C&A is much more damaging, and its impact far reaching. C&A emergency department visits for possible suicide attempts jumped to 22.3% in 2020 and 39.1% during 2021. One study utilizing T1-weighted structural images computed the thickness of cortical and subcortical structures, including the amygdala, hippocampus, and nucleus accumbens; the peri-COVID group showed reduced cortical and subcortical thickness and more advanced brain aging compared to pre-pandemic studies. CONCLUSION: Mental health resources for C&A remain underfunded, neglected, and inaccessible to the population that needs them most. Children with ongoing mental health disorders were impacted worst, along with those with predisposing biopsychosocial risk factors.
Keywords: COVID-19 and children, quarantine and children, social isolation, loneliness, pandemic stress and children, disease containment measures, mental health of children
Procedia PDF Downloads 75
657 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units
Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro
Abstract:
In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days. If a relocation or a change of period is required, the consumer must be notified in writing, in advance of a billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee's workload and the geographical position of the consuming units. This planning is currently done manually by several experts with experience in the geographic formation of the region; it takes a large number of days to complete, and, being a human activity, there is no guarantee of finding the best plan. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of workload and 11.97% in the compactness of the groups.
Keywords: capacitated clustering, k-means, genetic algorithm, districting problems
Procedia PDF Downloads 198
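A minimal sketch of capacitated clustering in the spirit of GBKMeans: k-means-style Lloyd iterations with a per-group workload cap. The genetic-algorithm layer of the original method is omitted; the data and capacity are assumptions.

```python
# Capacity-constrained k-means: points claim slots at their nearest center
# that still has room, then centers are recomputed.
import numpy as np

rng = np.random.default_rng(11)
points = rng.random((200, 2))          # consumer unit coordinates
k, cap = 5, 40                         # reading groups and units per group
centers = points[rng.choice(len(points), k, replace=False)]

for _ in range(20):                    # Lloyd iterations with a capacity cap
    counts = np.zeros(k, dtype=int)
    labels = np.empty(len(points), dtype=int)
    dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
    # Assign points in order of best distance so near points claim slots first.
    for i in np.argsort(dists.min(axis=1)):
        for c in np.argsort(dists[i]):         # nearest center with room
            if counts[c] < cap:
                labels[i], counts[c] = c, counts[c] + 1
                break
    centers = np.array([points[labels == c].mean(axis=0) for c in range(k)])

print("group sizes:", np.bincount(labels, minlength=k))
```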
656 Clinical Advice Services: Using Lean Chassis to Optimize Nurse-Driven Telephonic Triage of After-Hour Calls from Patients
Authors: Eric Lee G. Escobedo-Wu, Nidhi Rohatgi, Fouzel Dhebar
Abstract:
It is challenging for patients to navigate healthcare systems after hours. This leads to delays in care, patient/provider dissatisfaction, inappropriate resource utilization, readmissions, and higher costs. It is important to provide patients and providers with effective clinical decision-making tools to allow seamless connectivity and coordinated care. In August 2015, patient-centric Stanford Health Care established Clinical Advice Services (CAS) to provide clinical decision support after hours. CAS is founded on key Lean principles: value stream mapping, empathy mapping, waste walks, takt time calculations, standard work, plan-do-check-act cycles, and active daily management. At CAS, Clinical Assistants take the initial call and manage all non-clinical calls (e.g., appointments, directions, general information). If the patient has a clinical symptom, the CAS nurses take the call and utilize standardized clinical algorithms to triage the patient to home, clinic, urgent care, emergency department, or 911. Nurses may also contact the on-call physician, based on the clinical algorithm, for further direction and consultation. Since August 2015, CAS has managed 228,990 calls from 26 clinical specialties. Reporting is built into the electronic health record for analysis and data collection. 65.3% of the after-hours calls are clinically related. The average clinical algorithm adherence rate has been 92%. An average of 9% of calls were escalated by CAS nurses to the physician on call, and an average of 5% of patients were triaged to the emergency department by CAS. Key learnings indicate that a seamless connectivity vision, cascading, multidisciplinary ownership of the problem, and synergistic enterprise improvements have contributed to this success while striving for continuous improvement.
Keywords: after-hours phone calls, clinical advice services, nurse triage, Stanford Health Care
Procedia PDF Downloads 174
655 Applying Multiple Kinect on the Development of a Rapid 3D Mannequin Scan Platform
Authors: Shih-Wen Hsiao, Yi-Cheng Tsao
Abstract:
In the fields of reverse engineering and the creative industries, applying a 3D scanning process to obtain the geometric forms of objects is a mature and common technique. For instance, organic objects such as faces and non-organic objects such as products can be scanned to acquire geometric information for further application. However, although the data resolution of 3D scanning devices is increasing and complementary applications are more and more abundant, the penetration rate of 3D scanning among the public is still limited by the relatively high price of the devices. On the other hand, Kinect, released by Microsoft, is known for its powerful functions, considerably low price, and complete technology and database support. Therefore, related studies can be carried out with Kinect at acceptable cost and data precision. Because Kinect uses an optical mechanism to extract depth information, it is limited by the straight-line path of light. Thus, when using a single Kinect for 3D scanning, multiple angles are required sequentially to obtain complete 3D information about the object, and an integration process that combines the 3D data from the different angles by certain algorithms is also required. This sequential scanning costs much time, and the complex integration process often encounters technical problems. Therefore, this paper applies multiple Kinects simultaneously to the development of a rapid 3D mannequin scan platform and proposes suggestions on the number and angles of the Kinects. A method of establishing the coordinate system based on the relation between the mannequin and the specifications of the Kinect is proposed, and a suggestion on the angles and number of Kinects is also described. An experiment applying multiple Kinects to the scanning of a 3D mannequin is constructed with the Microsoft API, and the results show that the time required for scanning and the technical threshold can be reduced in the fashion and garment design industries.
Keywords: 3D scan, depth sensor, fashion and garment design, mannequin, multiple Kinect sensor
Procedia PDF Downloads 366
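A minimal sketch of the integration step described: merging point clouds captured by Kinects at known calibrated poses into one model by applying each sensor's rigid transform. The poses and points below are toy assumptions.

```python
# Merge per-sensor point clouds into a common world frame via 4x4 transforms.
import numpy as np

def pose(theta_deg, tx, ty):
    """4x4 rigid transform: rotation about z plus translation (toy rig)."""
    t = np.radians(theta_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:2, 3] = [tx, ty]
    return T

rng = np.random.default_rng(2)
clouds = [rng.random((500, 3)) for _ in range(3)]     # per-Kinect point clouds
poses = [pose(0, 0, 0), pose(120, 1.5, 0), pose(240, 0.75, 1.3)]  # calibration

merged = []
for cloud, T in zip(clouds, poses):
    homog = np.c_[cloud, np.ones(len(cloud))]         # to homogeneous coords
    merged.append((homog @ T.T)[:, :3])               # into the world frame
merged = np.vstack(merged)
print("merged cloud:", merged.shape)
```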
654 An Unusual Cause of Electrocardiographic Artefact: Patient's Warming Blanket
Authors: Sanjay Dhiraaj, Puneet Goyal, Aditya Kapoor, Gaurav Misra
Abstract:
In electrocardiography, the term artefact indicates a signal that is not generated by the heart. Although technological advancements have produced monitors with the potential to provide accurate information and reliable heart rate alarms, interference in the displayed electrocardiogram still occurs. This interference can come from the various electrical gadgets present in the operating room or from electrical signals from other parts of the body. Artefacts may also occur due to poor electrode contact with the body or due to machine malfunction. Knowing these artefacts is of utmost importance so as to avoid unnecessary and unwarranted diagnostic as well as interventional procedures. We report a case of ECG artefacts occurring due to a patient warming blanket and its consequences. A 20-year-old male with a preoperative diagnosis of exstrophy-epispadias complex was posted for surgery under epidural and general anaesthesia. Just after endotracheal intubation, we observed nonspecific ECG changes on the monitor. At first glance, the monitor strip revealed broad QRS complexes suggesting a ventricular bigeminal rhythm. Closer analysis revealed these to be artefacts: although the complexes looked broad at first glance, normal sinus complexes were clearly present, each immediately followed by a 'broad complex' or artefact produced by some device or connection. These broad complexes were labeled as artefacts because they originated in the absolute refractory period of the previous normal sinus beat; it would be physiologically impossible for the myocardium to depolarize so rapidly as to produce a second QRS complex. A search for the possible cause of the artefacts was made. After deepening the plane of anaesthesia, ruling out possible electrolyte abnormalities, checking the ECG leads and their connections, changing monitors, checking all other monitoring connections, and checking for proper grounding of the anaesthesia machine and OT table, we found that after switching off the patient's warming apparatus, the rhythm returned to normal sinus and the 'broad complexes' or artefacts disappeared. As misdiagnosis of ECG artefacts may subject patients to unnecessary diagnostic and therapeutic interventions, a thorough knowledge of the patient and monitors allows for quick interpretation and resolution of the problem.
Keywords: ECG artefacts, patient warming blanket, peri-operative arrhythmias, mobile messaging services
Procedia PDF Downloads 272
653 A Comparative Analysis of Clustering Approaches for Understanding Patterns in Health Insurance Uptake: Evidence from Sociodemographic Kenyan Data
Authors: Nelson Kimeli Kemboi Yego, Juma Kasozi, Joseph Nkruzinza, Francis Kipkogei
Abstract:
The study investigated the low uptake of health insurance in Kenya despite efforts to achieve universal health coverage through various health insurance schemes. Unsupervised machine learning techniques were employed to identify patterns in health insurance uptake based on sociodemographic factors among Kenyan households. The aim was to identify key demographic groups that are underinsured and to provide insights for the development of effective policies and outreach programs. Using the 2021 FinAccess Survey, the study clustered Kenyan households based on their health insurance uptake and sociodemographic features to reveal patterns in health insurance uptake across the country. The effectiveness of k-prototypes clustering, hierarchical clustering, and agglomerative hierarchical clustering on the sociodemographic factors was compared. The k-prototypes approach was found to be the most effective at uncovering distinct and well-separated clusters in the Kenyan sociodemographic data related to health insurance uptake, based on the silhouette, Calinski-Harabasz, Davies-Bouldin, and Rand indices; hence, it was utilized to uncover the patterns in uptake. The results of the analysis indicate that inclusivity in health insurance is strongly related to affordability. The findings suggest that targeted policy interventions and outreach programs are necessary to increase health insurance uptake in Kenya, with the ultimate goal of achieving universal health coverage. The study provides important insights for policymakers and stakeholders in the health insurance sector to address the low uptake of health insurance and to ensure that healthcare services are accessible and affordable to all Kenyans, regardless of their sociodemographic status. The study highlights the potential of unsupervised machine learning techniques to provide insights into complex health policy issues and improve decision-making in the health sector.
Keywords: health insurance, unsupervised learning, clustering algorithms, machine learning
Procedia PDF Downloads 138
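A minimal sketch of k-prototypes clustering on mixed numeric/categorical sociodemographic data, using the kmodes package; the toy features and number of clusters are assumptions, not the survey's schema.

```python
# Cluster households with mixed-type features: k-prototypes combines k-means
# distances on numeric columns with k-modes matching on categorical columns.
import numpy as np
from kmodes.kprototypes import KPrototypes

rng = np.random.default_rng(4)
n = 300
age = rng.integers(18, 80, n).astype(float)
income = rng.lognormal(10, 0.5, n)
region = rng.choice(["urban", "rural"], n)
insured = rng.choice(["yes", "no"], n, p=[0.25, 0.75])

X = np.empty((n, 4), dtype=object)            # mixed-type feature matrix
X[:, 0], X[:, 1], X[:, 2], X[:, 3] = age, income, region, insured

kp = KPrototypes(n_clusters=4, init="Cao", random_state=0)
labels = kp.fit_predict(X, categorical=[2, 3])  # columns 2-3 are categorical
print("cluster sizes:", np.bincount(labels))
```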
652 The Impact of Adopting Cross Breed Dairy Cows on Households' Income and Food Security in the Case of Dejen Woreda, Amhara Region, Ethiopia
Authors: Misganaw Chere Siferih
Abstract:
This study assessed the impact of crossbreed dairy cows on household income and food security. The study area is found in Dejen Woreda, East Gojam Zone, Amhara region of Ethiopia. A random sampling technique was used to obtain a sample of 80 crossbreed dairy cow owners and 176 indigenous dairy cow owners. The study employed the food consumption score analytical framework to measure the food security status of households; no statistically significant mean difference was found between crossbreed owners and indigenous owners. Logistic regression was employed to investigate the determinants of crossbreed dairy cow adoption. The results indicate that gender, education, labor number, cultivated land size, dairy cooperative membership, net income, and the food security status of the household are statistically significant independent variables explaining the binary dependent variable, crossbreed dairy cow adoption. Propensity score matching (PSM) was employed to analyze the impact of crossbreed dairy cow ownership on farmers' income and food security. The average net income of crossbreed dairy cow owners was found to be significantly higher than that of indigenous dairy cow owners. Estimates of the average treatment effect on the treated (ATT) indicated that crossbreed dairy cows raise households' net income by 42%, 38.5%, 30.8%, and 44.5% under the kernel, radius, nearest neighbor, and stratification matching algorithms, respectively, as compared to indigenous dairy cow owners. However, the ATT estimates suggest that owning a crossbreed dairy cow does not affect food security significantly. Thus, crossbreed dairy cows enable farmers to increase income, but not their food security, in the study area. Finally, the study recommends establishing dairy cooperatives and advising farmers to join them, promoting awareness of the impact of crossbreed dairy cows, and promoting nutrition-focused projects.Keywords: crossbreed dairy cow, net income, food security, propensity score matching
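The ATT logic reported above can be sketched as follows, assuming a logistic propensity model and one-to-one nearest-neighbor matching on the score; the covariates, outcome, and sample are synthetic placeholders rather than the Dejen Woreda data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Synthetic stand-ins: X = covariates, treat = 1 for crossbreed owners,
# income = outcome. None of these are the study's actual data.
rng = np.random.default_rng(1)
n = 256
X = rng.normal(size=(n, 4))                  # e.g. education, land size, ...
treat = (X[:, 0] + rng.normal(size=n) > 0.5).astype(int)
income = 2.0 * treat + X[:, 1] + rng.normal(size=n)

# 1) estimate propensity scores with a logistic model
ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

# 2) match each treated unit to the nearest control on the propensity score
treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = control[idx.ravel()]

# 3) ATT = mean outcome gap between treated units and their matches
att = (income[treated] - income[matched]).mean()
print(f"ATT estimate: {att:.2f}")
```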
Procedia PDF Downloads 65651 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach
Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic
Abstract:
The epigenome is a compelling candidate for mediating long-term responses to environmental effects modifying disease risk. The main goal of this research is to develop a machine learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool for validating self-reports of nonsmoking and for adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptually. Each child's mother and father fell into one of three class labels in the accompanying questionnaires: never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing our model, the best-performing algorithm for classifying the 'father smoker, mother never smoker' group was selected based on Cohen's κ. Error in the model was identified and optimized, and the final DNA methylation score was further tested and validated in an independent data set. The result is a linear combination, via a logistic link function, of the methylation values of the selected probes that contributed most to classification and accurately classified each group. This is a unique, robust DNA methylation score which combines information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy and which may be used to examine the paternal contribution to offspring health outcomes.Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning
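A minimal sketch of the scoring idea follows, assuming an L1-penalised logistic model over CpG beta values so that the score is a logistic-link transform of a sparse linear combination of probes; the matrix shapes, labels, and probe count are invented for illustration and do not reflect the ARIES/ALSPAC data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score

# Invented placeholder data: 300 cord-blood samples x 50 CpG beta values.
rng = np.random.default_rng(7)
n_samples, n_probes = 300, 50
betas = rng.uniform(0, 1, size=(n_samples, n_probes))
# synthetic exposure label driven by the first five probes
exposed = (betas[:, :5].sum(axis=1)
           + rng.normal(scale=0.5, size=n_samples) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(betas, exposed, random_state=0)

# L1 penalty keeps only the probes contributing most to classification
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)

# the "methylation score" is the logistic-link output on held-out samples
score = clf.predict_proba(X_te)[:, 1]
print("Cohen's kappa:", round(cohen_kappa_score(y_te, clf.predict(X_te)), 3))
print("selected probes:", np.flatnonzero(clf.coef_))
```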
Procedia PDF Downloads 185650 Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model
Authors: M. Varin, A. M. Dubois, R. Gadbois-Langevin, B. Chalghaf
Abstract:
Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs gives the capacity to generate high-precision 3D models at scales adapted to harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their lower acquisition cost. The objective of the study was to assess the quality of stereo-derived canopy height models (CHM) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHMs. Multispectral images were also acquired from an unmanned aerial vehicle (UAV) over a smaller part of the broadleaf study site. Different algorithms using two software packages (PCI Geomatica and Correlator3D), with various spatial resolutions and band selections, were tested to select the 3D modeling technique that offered the best performance compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50 cm resolution panchromatic band had the smallest root-mean-square error (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m; compared against the UAV, the tri-stereo model yielded an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to the satellite, LiDAR, and UAV CHMs, RMSEs were 2.8, 2.0, and 2.0 m, respectively. Further analysis of all these cases showed that RMSE decreases when canopy cover is higher, when shadow and slope are lower, and when clouds are farther from the analyzed site.Keywords: very high spatial resolution, satellite imagery, WorldView-3, canopy height models, CHM, LiDAR, unmanned aerial vehicle, UAV
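The RMSE comparisons above reduce to a simple raster difference. The sketch below, assuming two co-registered GeoTIFF CHMs read with rasterio, shows one way to compute it; the file names and nodata value are assumptions.

```python
import numpy as np
import rasterio

def chm_rmse(stereo_path, lidar_path, nodata=-9999.0):
    """RMSE between a stereo-derived CHM and a LiDAR reference CHM,
    assuming both rasters share the same grid and extent."""
    with rasterio.open(stereo_path) as a, rasterio.open(lidar_path) as b:
        stereo = a.read(1).astype(float)
        lidar = b.read(1).astype(float)
    valid = (stereo != nodata) & (lidar != nodata)   # mask nodata cells
    diff = stereo[valid] - lidar[valid]
    return float(np.sqrt(np.mean(diff ** 2)))

# hypothetical file names for illustration:
# print(chm_rmse("chm_worldview3.tif", "chm_lidar.tif"))
```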
Procedia PDF Downloads 126649 Probing Environmental Sustainability via Brownfield Remediation: A Framework to Manage Brownfields in Ethiopia Lesson to Africa
Authors: Mikiale Gebreslase Gebremariam, Chai Huaqi, Tesfay Gebretsdkan Gebremichael, Dawit Nega Bekele
Abstract:
In recent years, brownfield redevelopment projects (BRPs) have contributed to the overarching paradigm of the United Nations 2030 Agenda. At present, most developed nations have adopted BRPs as an efficacious urban policy tool. However, in developing and even some advanced countries, BRPs are lacking owing to limited awareness, policy tools, and financial capacity for cleaning up brownfield sites. For example, the growth and development of Ethiopian cities have come at the cost of poor urban planning, including an absence of community consultation and excessive urbanization. The demand for land resources grows ever more urgent as a result of migration to major cities and towns for socio-economic reasons and of population growth. In the past, major cities expanded horizontally, stretching outwards in search of more land; while the outer cities grew, the inner cities became environmentally polluted. It is noteworthy that this rapid urban development has not brought an increase in people's happiness. The proposed framework for managing brownfields in Ethiopia, offered as a lesson to developing nations facing similar challenges and growth, will therefore add immense value in solving these problems and give insights into brownfield land utilization. Considering multiple stakeholders under tight environmental and economic constraints, the framework integrates economic, social, environmental, technical, and risk criteria into a grey incidence decision-making model and gives useful guidance for managing brownfields in Ethiopia. Furthermore, it will contribute to the future development of the social economy and the missions of the 2030 UN Sustainable Development Goals.Keywords: brownfields, environmental sustainability, Ethiopia, grey-incidence decision-making, sustainable urban development
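As a rough illustration of the grey incidence idea, the sketch below computes grey relational grades for candidate sites against an ideal reference sequence, using the standard coefficient formula with distinguishing coefficient ρ = 0.5; the criteria matrix is a made-up placeholder, not data from the study.

```python
import numpy as np

def grey_relational_grades(matrix, rho=0.5):
    """Grey relational grade per alternative, larger-is-better criteria assumed."""
    m = np.asarray(matrix, dtype=float)
    # normalise each criterion to [0, 1]
    m = (m - m.min(axis=0)) / (m.max(axis=0) - m.min(axis=0))
    ref = m.max(axis=0)                      # ideal reference sequence
    delta = np.abs(m - ref)                  # deviation from the reference
    # grey relational coefficient: (Δmin + ρΔmax) / (Δ + ρΔmax)
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean(axis=1)                # one grade per alternative

# Placeholder rows = candidate sites; columns = economic, social,
# environmental, technical, and risk scores (already normalised).
sites = [[0.7, 0.4, 0.9, 0.5, 0.3],
         [0.5, 0.8, 0.6, 0.7, 0.6],
         [0.9, 0.5, 0.4, 0.6, 0.8]]
grades = grey_relational_grades(sites)
print("ranking (best first):", np.argsort(grades)[::-1])
```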
Procedia PDF Downloads 91648 Direct Oxidation Synthesis for a Dual-Layer Silver/Silver Orthophosphate with Controllable Tetrahedral Structure as an Active Photoanode for Solar-Driven Photoelectrochemical Water Splitting
Authors: Wen Cai Ng, Saman Ilankoon, Meng Nan Chong
Abstract:
The vast increase in global energy demand, coupled with growing concerns about environmental issues, has triggered the search for cleaner alternative energy sources. In view of this, photoelectrochemical (PEC) water splitting offers a sustainable hydrogen (H2) production route that requires only solar energy, water, and a PEC system operating in an ambient environment. However, current PEC water-splitting technologies are still far from the commercialization benchmark of a solar-to-H2 (STH) efficiency of at least 10%. This is largely due to the shortcomings of the photoelectrodes used in PEC systems, such as rapid recombination of photogenerated charge carriers and limited photo-responsiveness in the visible-light spectrum. Silver orthophosphate (Ag3PO4) possesses many intrinsic properties desirable for fabrication into photoanodes for PEC systems, such as a narrow bandgap of 2.4 eV and a low valence band (VB) position. Hence, in this study, a highly efficient Ag3PO4-based photoanode was synthesized and characterized. The surface of an Ag foil substrate was directly oxidized to fabricate a porous top layer of Ag3PO4 tetrahedrons bounded by {111} facets, forming the dual-layer Ag/Ag3PO4 photoanode. Furthermore, the key synthesis parameters were systematically investigated by varying the concentration ratio of capping agent to precursor (R), the volume ratio of hydrogen peroxide (H2O2) to water, and the reaction period. Results showed that the optimized dual-layer Ag/Ag3PO4 photoanode achieved a photocurrent density as high as 4.19 mA/cm2 at 1 V vs. Ag/AgCl for an R-value of 4, an H2O2-to-water volume ratio of 3:5, and a 20 h reaction period. The current work provides a solid foundation for further nanoarchitecture modification strategies on Ag3PO4-based photoanodes for more efficient PEC water-splitting applications.Keywords: solar-to-hydrogen fuel, photoelectrochemical water splitting, photoelectrode, silver orthophosphate
Procedia PDF Downloads 121647 Role of Internal and External Factors in Preventing Risky Sexual Behavior, Drug and Alcohol Abuse
Authors: Veronika Sharok
Abstract:
The relevance of research on the psychological determinants of risky behaviors stems from the high prevalence of such behaviors, particularly among youth. Risky sexual behavior, including unprotected and casual sex and frequent change of sexual partners, together with drug and alcohol use, leads to negative social consequences and contributes to the spread of HIV infection and other sexually transmitted diseases. Data were obtained from 302 respondents aged 15-35, who were divided into three empirical groups: persons prone to risky sexual behavior, drug users, and alcohol users; and three control groups: individuals not prone to risky sexual behavior, persons who do not use drugs, and respondents who do not use alcohol. For processing, we used a qualitative method for nominal data (the chi-squared test) and quantitative methods for metric data (Student's t-test, Fisher's F-test, and Pearson's r correlation). Statistical processing was performed using Statistica 6.0 software. The study identifies two groups of factors that prevent risky behaviors. Internal factors include: moral and value attitudes; the significance of existential values: love, life, self-actualization, and the search for the meaning of life; an understanding of independence as responsibility for one's freedom, with the ability to become attached to someone or something only up to the point at which the relationship starts restricting that freedom and becomes vital; awareness of risky behaviors as dangerous to oneself and others; and self-acknowledgement. External factors (which prevent risky behaviors only in the absence of the internal ones) include: the absence of risky behaviors among friends and relatives; socio-demographic characteristics (middle class, marital status); awareness of the negative consequences of risky behaviors; and inaccessibility of psychoactive substances. These factors are common to proneness to each type of risky behavior, because such proneness is usually caused by the same reasons. It should be noted that prevention based only on eliminating external factors is less effective than prevention that also addresses internal ones. The results obtained in the study can be used to develop training programs and activities for the prevention of risky behaviors, drawing on the values that prevent such behaviors and promoting a healthy lifestyle.Keywords: existential values, prevention, psychological features, risky behavior
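For readers unfamiliar with the test battery mentioned above, a minimal SciPy sketch follows; the contingency table and samples are invented placeholders, not the study's data.

```python
from scipy import stats

# chi-squared test for a nominal factor vs. group membership
table = [[40, 25],   # factor present: risk group / control group (invented counts)
         [60, 75]]   # factor absent
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Student's t-test for a metric variable between the two groups
group_a = [3.1, 2.8, 3.5, 2.9, 3.3]
group_b = [2.2, 2.6, 2.4, 2.7, 2.1]
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# Pearson's r between two metric variables
r, p_r = stats.pearsonr(group_a, group_b)
print(p_chi, p_t, r)
```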
Procedia PDF Downloads 256