Search results for: image encryption algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4683

513 The Anti-Angiogenic Effect of Tectorigenin in a Mouse Model of Retinopathy of Prematurity

Authors: KuiDong Kang, Hye Bin Yim, Su Ah Kim

Abstract:

Purpose: Tectorigenin is an isoflavone derived from the rhizome of Belamacanda chinensis. In this study, oxygen-induced retinopathy was used to characterize the anti-angiogenic properties of tectorigenin in mice. Methods: ICR neonatal mice were exposed to 75% oxygen from postnatal day P7 until P12 and returned to room air (21% oxygen) for five days (P12 to P17). Mice were subjected to daily intraperitoneal injection of tectorigenin (1 mg/kg, 10 mg/kg) or vehicle from P12 to P17. Retro-orbital injection of FITC-dextran was performed, and retinal flat mounts were viewed by fluorescence microscopy. The central avascular area was quantified from the digital images in a masked fashion using image analysis software (NIH ImageJ). Neovascular tufts were quantified using SWIFT_NV, and neovascular lumens were quantified from histologic sections in a masked fashion. Immunohistochemistry and Western blot analysis were also performed to demonstrate the anti-angiogenic activity of this compound in vivo. Results: In the retina of tectorigenin-injected mice (10 mg/kg), the central non-perfusion area was significantly decreased compared to the vehicle-injected group (1.76±0.5 mm2 vs 2.85±0.6 mm2, P<0.05). In the vehicle-injected group, 33.45±5.51% of the total retinal area was avascular, whereas the retinas of pups treated with high-dose (10 mg/kg) tectorigenin showed avascular retinal areas of 21.25±4.34% (P<0.05). The high dose of tectorigenin also significantly reduced the number of vascular lumens in the histologic sections. Tectorigenin (10 mg/kg) significantly reduced the expression of vascular endothelial growth factor (VEGF), matrix metalloproteinase-2 (MMP-2), MMP-9, and angiotensin II compared to the vehicle-injected group. Tectorigenin did not affect CD31 abundance at any tested dose. Conclusions: Our results show that tectorigenin possesses powerful anti-angiogenic properties and can attenuate new vessel formation in the retina after systemic administration. These results imply that this compound can be considered a candidate substance for therapeutic inhibition of retinal angiogenesis.

Keywords: tectorigenin, anti-angiogenic, retinopathy, Belamacanda chinensis

Procedia PDF Downloads 267
512 Detecting Elderly Abuse in US Nursing Homes Using Machine Learning and Text Analytics

Authors: Minh Huynh, Aaron Heuser, Luke Patterson, Chris Zhang, Mason Miller, Daniel Wang, Sandeep Shetty, Mike Trinh, Abigail Miller, Adaeze Enekwechi, Tenille Daniels, Lu Huynh

Abstract:

Machine learning and text analytics have been used to analyze child abuse, cyberbullying, domestic abuse and domestic violence, and hate speech. However, to the authors’ knowledge, no research to date has used these methods to study elder abuse in nursing homes or skilled nursing facilities from field inspection reports. We used machine learning and text analytics methods to analyze 356,000 inspection reports, which were extracted from CMS Form-2567 field inspections of US nursing homes and skilled nursing facilities between 2016 and 2021. Our algorithm detected occurrences of the various types of abuse, including physical abuse, psychological abuse, verbal abuse, sexual abuse, and passive and active neglect. For example, to detect physical abuse, our algorithms search for combinations of phrases and words suggesting willful infliction of damage (hitting, pinching or burning, tethering, tying) or consciously ignoring an emergency. To detect occurrences of elder neglect, our algorithm looks for combinations of phrases and words suggesting both passive neglect (neglecting vital needs, allowing malnutrition and dehydration, allowing decubiti, deprivation of information, limitation of freedom, negligence toward safety precautions) and active neglect (intimidation and name-calling, tying the victim up to prevent falls without consent, consciously ignoring an emergency, not calling a physician in spite of indication, stopping important treatments, failure to provide essential care, deprivation of nourishment, leaving a person alone for an inappropriate amount of time, excessive demands in a situation of care). We further compare the prevalence of abuse before and after Covid-19 related restrictions on nursing home visits. We also identified the facilities with the highest number of abuse cases that have no other facilities with abuse within a 25-mile radius as the most likely candidates for additional inspections. We also built an interactive display to visualize the location of these facilities.
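
The abuse-type detection described in this abstract is, at its core, a phrase- and keyword-matching pass over inspection report text. Below is a minimal sketch of that idea; the category lexicons and the sample report are illustrative placeholders, not the authors' actual rule set.

```python
import re

# Hypothetical, heavily abbreviated lexicons; the study's phrase lists are far larger.
ABUSE_LEXICON = {
    "physical_abuse": ["hitting", "pinching", "burning", "tethering", "tying"],
    "passive_neglect": ["malnutrition", "dehydration", "decubiti"],
    "active_neglect": ["intimidation", "name-calling", "left alone"],
}

def detect_abuse_types(report_text: str) -> dict:
    """Return, per category, the lexicon phrases found in an inspection report."""
    text = report_text.lower()
    hits = {}
    for category, phrases in ABUSE_LEXICON.items():
        found = [p for p in phrases if re.search(r"\b" + re.escape(p) + r"\b", text)]
        if found:
            hits[category] = found
    return hits

# Example usage on a made-up report snippet.
print(detect_abuse_types("Resident was left alone for hours and showed signs of dehydration."))
```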

Keywords: machine learning, text analytics, elder abuse, elder neglect, nursing home abuse

Procedia PDF Downloads 146
511 “CheckPrivate”: Artificial Intelligence Powered Mobile Application to Enhance the Well-Being of Sexually Transmitted Diseases Patients in Sri Lanka under Cultural Barriers

Authors: Warnakulasuriya Arachichige Malisha Ann Rosary Fernando, Udalamatta Gamage Omila Chalanka Jinadasa, Bihini Pabasara Amandi Amarasinghe, Manul Thisuraka Mandalawatta, Uthpala Samarakoon, Manori Gamage

Abstract:

The surge in sexually transmitted diseases (STDs) has become a critical public health crisis demanding urgent attention and action. Like many other nations, Sri Lanka is grappling with a significant increase in STDs due to a lack of education and awareness regarding their dangers. Presently, the available applications for tracking and managing STDs cover only a limited number of easily detectable infections, resulting in a significant gap in effectively controlling their spread. To address this gap and combat the rising STD rates, it is essential to leverage technology and data. Employing technology to enhance the tracking and management of STDs is vital to prevent their further propagation and to enable early intervention and treatment. This requires adopting a comprehensive approach that involves raising public awareness about the perils of STDs, improving access to affordable healthcare services for early detection and treatment, and utilizing advanced technology and data analysis. The proposed mobile application aims to cater to a broad range of users, including STD patients, recovered individuals, and those unaware of their STD status. By harnessing cutting-edge technologies like image detection, symptom-based identification, prevention methods, doctor and clinic recommendations, and virtual counselor chat, the application offers a holistic approach to STD management. In conclusion, the escalating STD rates in Sri Lanka and across the globe require immediate action. The integration of technology-driven solutions, along with comprehensive education and healthcare accessibility, is the key to curbing the spread of STDs and promoting better overall public health.

Keywords: STD, machine learning, NLP, artificial intelligence

Procedia PDF Downloads 81
510 Measuring Fragmentation Index of Urban Landscape: A Case Study on Kuala Lumpur City

Authors: Shagufta Tazin Shathy, Mohammad Imam Hasan Reza

Abstract:

Fragmentation due to urbanization and agricultural expansion has become the main reason for the destruction of forest area and loss of biodiversity, particularly in the developing world. At present, the world is experiencing the largest wave of urban growth in human history, and it is estimated that this influx will mainly take place in the developing world. Therefore, the study of urban fragmentation is vital for sustainable urban development. Landscape fragmentation has been one of the most important conservation issues of the last few decades. Habitat fragmentation due to landscape alteration has caused habitat isolation and the destruction of ecosystem patterns and processes. Thus, this research analyses the spatial and temporal extent of urban fragmentation using landscape indices in Kuala Lumpur (KL), the capital and most populous city of Malaysia. The objective of this study is to examine the urban fragmentation index in KL city. The fragmentation metrics used in the study are: a) urban landscape ratio (the ratio of urban landscape area to built-up area), b) infill (development that occurred within urbanized open space), and c) extension (development of exterior open space). After all three metrics are analyzed, they are combined into the urban fragmentation index (UFI), in which each metric is given an equal weight. Land cover/land use maps of the years 1996 and 2005 have been developed from Landsat TM 30 m resolution satellite images. The year 1996 is taken as the reference year to analyze the changes. The UFI calculated for the years 1996 and 2005 shows that KL city has undergone rapid landscape changes that have adversely affected the forest ecosystem. The increase in UFI between 1996 and 2005 indicates that developmental activities have been occupying open spaces and fragmenting natural lands and forest. This index can be implemented in other unplanned and rapidly urbanizing Asian cities, for example Dhaka and Delhi, to calculate the urban fragmentation rate. The findings from the study will help stakeholders and urban planners towards sustainable urban management planning in this region.
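
The abstract states that the three fragmentation metrics are combined into a single index with equal weights. A minimal sketch of that combination is given below; the simple-average form and the sample values are assumptions for illustration, since the abstract does not give the exact formula or measured inputs.

```python
def urban_fragmentation_index(urban_landscape_ratio, infill, extension):
    """Combine the three metrics with equal weights (assumed simple average)."""
    return (urban_landscape_ratio + infill + extension) / 3.0

# Illustrative (not measured) values for the two reference years.
ufi_1996 = urban_fragmentation_index(0.42, 0.10, 0.15)
ufi_2005 = urban_fragmentation_index(0.58, 0.18, 0.27)
print(ufi_1996, ufi_2005)  # an increase indicates growing fragmentation
```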

Keywords: GIS, index, sustainable urban management, urbanization

Procedia PDF Downloads 365
509 A Study on the Effect of Different Climate Conditions on Time of Balance of Bleeding and Evaporation in Plastic Shrinkage Cracking of Concrete Pavements

Authors: Hasan Ziari, Hassan Fazaeli, Seyed Javad Vaziri Kang Olyaei, Asma Sadat Dabiri

Abstract:

The presence of cracks in concrete pavements provides a path for the ingression of corrosive substances, acids, oils, and water into the pavement and reduces its long-term durability and level of service. One of the causes of early cracks in concrete pavements is plastic shrinkage. This shrinkage occurs due to the formation of negative capillary pressures after the bleeding and evaporation rates at the pavement surface reach equilibrium. These cracks form if the tensile stresses caused by the restrained shrinkage exceed the tensile strength of the concrete. Different climate conditions change the rate of evaporation and thus change the balance time of bleeding and evaporation, which changes the severity of cracking in the concrete. The present study examined the relationship between the balance time of bleeding and evaporation and the area of cracking in concrete slabs using the standard method ASTM C1579 under 27 different environmental conditions, using continuous video recording and digital image analysis. The results showed that as the evaporation rate increased and the balance time decreased, the crack severity significantly increased, so that by reducing the balance time from its maximum to its minimum value, the cracking area increased more than four times. It was also observed that the cracking area–balance time curve could be interpreted in three sections. An examination of these three parts showed that the combination of climate conditions has a significant effect on increasing or decreasing these two variables. A single critical factor alone cannot cause the critical conditions of plastic cracking. By combining two mild environmental factors with one severe climate factor (in terms of surface evaporation rate), a considerable reduction in balance time and a sharp increase in cracking severity can be prevented. The results of this study showed that balance time can be an essential factor in controlling and predicting plastic shrinkage cracking in concrete pavements. It is necessary to control this factor when constructing concrete pavements in different climate conditions.

Keywords: bleeding and cracking severity, concrete pavements, climate conditions, plastic shrinkage

Procedia PDF Downloads 146
508 Transient Freshwater-Saltwater Transition-Zone Dynamics in Heterogeneous Coastal Aquifers

Authors: Antoifi Abdoulhalik, Ashraf Ahmed

Abstract:

The ever-growing threat of saltwater intrusion (SWI) has prompted the need to further advance the understanding of the underlying processes related to SWI for effective water resource management. While research efforts have mainly been focused on steady-state analysis, studies on the transience of the saltwater intrusion mechanism remain very scarce, and studies considering transient SWI in heterogeneous media are, to our knowledge, nonexistent. This study provides for the first time a quantitative analysis of the effect of both inland and coastal water level changes on the transition zone under transient conditions in layered coastal aquifers. In all, two sets of four experiments were completed, including a homogeneous case and four layered cases: cases LH and HL were two bi-layered scenarios where a low-K layer was set at the top and the bottom, respectively; cases HLH and LHL were two stratified aquifers with a high K–low K–high K and a low K–high K–low K pattern, respectively. An automated image analysis technique was used to quantify the main SWI parameters at high spatial and temporal resolution. The findings of this study provide invaluable insight into the underlying processes responsible for transition-zone dynamics in coastal aquifers. The results show that in all the investigated cases, the width of the transition zone remains almost unchanged throughout the saltwater intrusion process regardless of where the boundary change occurs. However, the results demonstrate that the width of the transition zone considerably increases during the retreat, with the largest amplitude observed in cases LH and LHL, where a low-K layer was set at the top of the system. In all the scenarios, the amplitude of widening was slightly smaller when the retreat was prompted by an instantaneous drop of the saltwater level than when it was caused by an inland freshwater rise, despite equivalent absolute head change magnitudes. The magnitude of head change caused significantly larger widening during the saltwater wedge retreat, while having no impact during the intrusion phase.

Keywords: freshwater-saltwater transition-zone dynamics, heterogeneous coastal aquifers, laboratory experiments, transient seawater intrusion

Procedia PDF Downloads 241
507 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act

Authors: Maria Jędrzejczak, Patryk Pieniążek

Abstract:

The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, which is driven by massive data resources linked to powerful algorithms and powerful computing capacity. The above is closely linked to technological developments in the area of artificial intelligence, which has prompted an analysis covering both the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious yet widely held at both European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years. It would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology allows for an increase in the speed and efficiency of the actions taken, but also creates security risks for the data processed of an unprecedented magnitude. The proposed regulation in the field of artificial intelligence requires analysis in terms of its impact on the regulation on personal data protection. It is necessary to determine what the mutual relationship between these regulations is and what areas are particularly important in the personal data protection regulation for processing personal data in artificial intelligence systems. The adopted axis of considerations is a preliminary assessment of two issues: 1) what principles of data protection should be applied in particular during processing personal data in artificial intelligence systems, 2) what regulation on liability for personal data breaches is in such systems. The need to change the regulations regarding the rights and obligations of data subjects and entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions regarding the assignment of liability for a breach of personal data protection processed in artificial intelligence systems. The research process in this case concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation regarding artificial intelligence. The main question that the authors want to answer is how the European Union regulation against data protection breaches in artificial intelligence systems is shaping up. The answer to this question will include examples to illustrate the practical implications of these legal regulations.

Keywords: data protection law, personal data, AI law, personal data breach

Procedia PDF Downloads 65
506 Microchip-Integrated Computational Models for Studying Gait and Motor Control Deficits in Autism

Authors: Noah Odion, Honest Jimu, Blessing Atinuke Afuape

Abstract:

Introduction: Motor control and gait abnormalities are commonly observed in individuals with autism spectrum disorder (ASD), affecting their mobility and coordination. Understanding the underlying neurological and biomechanical factors is essential for designing effective interventions. This study focuses on developing microchip-integrated wearable devices to capture real-time movement data from individuals with autism. By applying computational models to the collected data, we aim to analyze motor control patterns and gait abnormalities, bridging a crucial knowledge gap in autism-related motor dysfunction. Methods: We designed microchip-enabled wearable devices capable of capturing precise kinematic data, including joint angles, acceleration, and velocity during movement. A cross-sectional study was conducted on individuals with ASD and a control group to collect comparative data. Computational modelling was applied using machine learning algorithms to analyse motor control patterns, focusing on gait variability, balance, and coordination. Finite element models were also used to simulate muscle and joint dynamics. The study employed descriptive and analytical methods to interpret the motor data. Results: The wearable devices effectively captured detailed movement data, revealing significant gait variability in the ASD group. For example, gait cycle time was 25% longer, and stride length was reduced by 15% compared to the control group. Motor control analysis showed a 30% reduction in balance stability in individuals with autism. Computational models successfully predicted movement irregularities and helped identify motor control deficits, particularly in the lower limbs. Conclusions: The integration of microchip-based wearable devices with computational models offers a powerful tool for diagnosing and treating motor control deficits in autism. These results have significant implications for patient care, providing objective data to guide personalized therapeutic interventions. The findings also contribute to the broader field of neuroscience by improving our understanding of the motor dysfunctions associated with ASD and other neurodevelopmental disorders.
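
As a rough illustration of the kind of gait metrics reported above (gait cycle time and its variability), the sketch below derives cycle times from heel-strike-like events in a wearable acceleration trace. The synthetic signal, sampling rate, and peak-detection threshold are assumptions, not the study's actual processing pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0  # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic vertical acceleration with roughly one "heel strike" per second plus noise.
accel = np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(t.size)

# Heel strikes approximated as prominent peaks; threshold and spacing are illustrative.
peaks, _ = find_peaks(accel, height=0.5, distance=int(0.5 * fs))
cycle_times = np.diff(peaks) / fs

print("mean gait cycle time (s):", cycle_times.mean())
print("gait variability (CV %):", 100 * cycle_times.std() / cycle_times.mean())
```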

Keywords: motor control, gait abnormalities, autism, wearable devices, microchips, computational modeling, kinematic analysis, neurodevelopmental disorders

Procedia PDF Downloads 24
505 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and propagate the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, these models might not be applicable for the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models that overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for every system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables and not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. In this way, system changes due to aging, wear or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications and with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of the correlations was able to discover so far unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
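
The described method is a multivariate regression that also includes products of variables, motivated by a series expansion. Below is a minimal sketch of that idea using polynomial feature terms; it is a generic illustration under assumed synthetic data, not the authors' identification algorithm or an interface to WORHP.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Synthetic sensor data: y depends on x1, x2 and on their product x1*x2.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 1.5 * X[:, 0] * X[:, 1] + 0.01 * rng.standard_normal(200)

# A degree-2 expansion adds squared terms and the cross product x1*x2 as regressors.
features = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(features.fit_transform(X), y)
print(dict(zip(features.get_feature_names_out(["x1", "x2"]), model.coef_.round(3))))
```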

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 409
504 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to training intelligent agents rely on prolonged training sessions, high amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently design intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for the initial demonstration of a given task or desired behavior. The trajectories collected are used to train a behavior cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between the actions suggested and those taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case, based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a cluttered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
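
The core idea above is an intrinsic reward computed from the similarity between the action suggested by the behavior-cloning network and the action actually taken by the reinforcement learner. A minimal sketch of one plausible similarity-based reward is shown below; the negative-exponential form and the scale parameter are assumptions, since the abstract does not give the exact formula.

```python
import numpy as np

def intrinsic_reward(suggested_action: np.ndarray, taken_action: np.ndarray, scale: float = 1.0) -> float:
    """Higher reward the closer the taken action is to the human-informed suggestion."""
    distance = np.linalg.norm(suggested_action - taken_action)
    return float(np.exp(-scale * distance))  # 1.0 when identical, decaying with disagreement

# Example: roll, pitch, yaw reference commands (illustrative values).
print(intrinsic_reward(np.array([0.1, 0.0, 0.2]), np.array([0.12, -0.01, 0.25])))
```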

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 259
503 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
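
A minimal sketch of the two model families named above, trained on made-up activity-level features, is given below. The feature set and the synthetic overrun target are illustrative assumptions, not the case-study dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical activity-level features: planned cost, duration, scope changes, material delay days.
X = rng.uniform(0, 1, size=(500, 4))
y = 0.3 * X[:, 2] + 0.2 * X[:, 3] + 0.05 * rng.standard_normal(500)  # synthetic overrun ratio

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("RF R^2:", rf.score(X_te, y_te), "NN R^2:", nn.score(X_te, y_te))
print("RF feature importances:", rf.feature_importances_.round(2))  # highlights key cost drivers
```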

Keywords: cost prediction, machine learning, project management, random forest, neural networks

Procedia PDF Downloads 56
502 Methodologies for Deriving Semantic Technical Information Using an Unstructured Patent Text Data

Authors: Jaehyung An, Sungjoo Lee

Abstract:

Patent documents constitute an up-to-date and reliable source of knowledge reflecting technological advances, so patent analysis has been widely used for the identification of technological trends and the formulation of technology strategies. However, identifying technological information from patent data entails some limitations, such as high cost, complexity, and inconsistency, because it relies on expert knowledge. To overcome these limitations, researchers have applied quantitative analysis based on keyword techniques. By using this method, one can capture technological implications in patent documents or extract keywords that indicate the important contents. However, it only uses a simple counting of keyword frequency, so it cannot take into account the semantic relationships among keywords or semantic information such as how the technologies are used in their technology area and how the technologies affect other technologies. To automatically analyze unstructured technological information in patents and extract semantic information, the text should be transformed into an abstracted form that includes the technological key concepts. The specific sentence structure 'SAO' (subject, action, object) has emerged as a representation of such 'key concepts' and can be extracted by NLP (natural language processing). An SAO structure can be organized in a problem-solution format if the action-object (AO) pair states the problem and the subject (S) forms the solution. In this paper, we propose a new methodology that can extract the SAO structure through technical-element extraction rules. Although sentence structures in patent texts have a unique format, prior studies have depended on general NLP tools applied to common documents such as newspapers, research papers, and Twitter mentions, so they cannot take into account the specific sentence structure types of patent documents. To overcome this limitation, we identified the unique forms of patent sentences and defined the SAO structures in the patent text data. There are four types of technical elements: technology adoption purpose, application area, tool for technology, and technical components. These four types of sentence structures in patents have their own specific word structure, defined by the location or sequence of the parts of speech in each sentence. Finally, we developed algorithms for extracting SAOs, and this result offers insight into the technology innovation process by providing different perspectives on technology.
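
One way to make the SAO idea concrete is a dependency-parse pass that collects (subject, verb, object) triples per sentence. The sketch below uses spaCy as a stand-in general-purpose parser; it illustrates the SAO notion only and does not implement the patent-specific extraction rules proposed in the paper.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def extract_sao(text: str):
    """Collect rough (subject, action, object) triples from dependency parses."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
                for s in subjects:
                    for o in objects:
                        triples.append((s, token.lemma_, o))
    return triples

print(extract_sao("The sensor measures temperature. The controller adjusts the valve."))
```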

Keywords: NLP, patent analysis, SAO, semantic-analysis

Procedia PDF Downloads 262
501 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, odor is the most reminiscent and the least understood. Odor testing has been mysterious, and odor data fabled, to most practitioners. The problem of recognition and classification of odor is important to solve. The ability to smell and predict whether an artifact is of further use or has become undesirable for consumption, and the translation of this problem into a model, is therefore of interest. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly useful. For cataloging the odor of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are incapable of making effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are integrated into machine learning either for modeling data directly or as an intermediate step in forming a probability density function. The models Linear Discriminant Analysis and the Naive Bayes classifier have been used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of using generative models is that they make stronger assumptions about the data, specifically about the distribution of predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is an electronic nose. This device is designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in terms of the performance measures accuracy, precision, and recall. The results show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
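
A minimal sketch of the two classifiers compared above, applied to a synthetic stand-in for electronic-nose sensor readings, is given below; the data generation and channel count are assumptions made purely so the example runs.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Synthetic e-nose data: 8 sensor channels, two odor classes (e.g. acceptable vs spoiled cashew).
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)), rng.normal(0.8, 1.0, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("Naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(name, "accuracy: %.2f" % scores.mean())
```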

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 387
500 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) as well as support better human social interactions with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies the human expression into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. This system requires intensive research to address issues with human diversity, various unique human expressions, and the variety of human facial features due to age differences. These issues generally affect the ability of the FER system to detect human emotions with high accuracy. Early-stage FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy due to their inefficiency in extracting significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, like convolutional neural networks (CNN), are proposed. These methods can find more complex features in the human face by means of the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy results. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively. To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient and real-time FER system.
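
As a rough illustration of the optimization direction described (a lighter Xception-based seven-class head plus quantization for cheaper inference), the sketch below builds such a model and applies post-training quantization with TensorFlow Lite. It is a generic example of these techniques under assumed input sizes, not the exact pruning and compiler pipeline used in the study.

```python
import tensorflow as tf

# Xception backbone with a small 7-class emotion head (angry, disgust, fear, happy, sad, surprise, neutral).
base = tf.keras.applications.Xception(weights=None, include_top=False,
                                      input_shape=(96, 96, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(7, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# ... training on a FER dataset would go here ...

# Post-training quantization to shrink the model and speed up inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("fer_xception_quantized.tflite", "wb").write(tflite_model)
```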

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 118
499 Media, Politics and Power in the Representation of the Refugee and Migration Crisis in Europe

Authors: Evangelia-Matroni Tomara

Abstract:

This thesis answers the question whether the media representations and reporting in 2015-2016 - especially, after the image of the drowned three-year-old Syrian boy in the Mediterranean Sea which made global headlines in the beginning of September 2015 -, the European Commission regulatory sources material and related reporting, have the power to challenge the conceptualization of humanitarianism or even redefine it. The theoretical foundations of the thesis are based on humanitarianism and its core definitions, the power of media representations and the relative portrayal of migrants, refugees and/or asylum seekers, as well as the dominant migration discourse and EU migration governance. Using content analysis for the media portrayal of migrants (436 newspaper articles) and qualitative content analysis for the European Commission Communication documents from May 2015 until June 2016 that required various depths of interpretation, this thesis allowed us to revise the concept of humanitarianism, realizing that the current crisis may seem to be a turning point for Europe but is not enough to overcome the past hostile media discourses and suppress the historical perspective of security and control-oriented EU migration policies. In particular, the crisis helped to shift the intensity of hostility and the persistence in the state-centric, border-oriented securitization in Europe into a narration of victimization rather than threat where mercy and charity dynamics are dominated and into operational mechanisms, noting the emergency of immediate management of the massive migrations flows, respectively. Although, the understanding of a rights-based response to the ongoing migration crisis, is being followed discursively in both political and media stage, the nexus described, points out that the binary between ‘us’ and ‘them’ still exists, with only difference that the ‘invaders’ are now ‘pathetic’ but still ‘invaders’. In this context, the migration crisis challenges the concept of humanitarianism because rights dignify migrants as individuals only in a discursive or secondary level while the humanitarian work is mostly related with the geopolitical and economic interests of the ‘savior’ states.

Keywords: European Union politics, humanitarianism, immigration, media representation, policy-making, refugees, security studies

Procedia PDF Downloads 293
498 Development of Multi-Leaf Collimator-Based Isocenter Verification Tool Using Electrical Portal Imaging Device for Stereotactic Radiosurgery

Authors: Panatda Intanin, Sangutid Thongsawad, Chirapha Tannanonta, Todsaporn Fuangrod

Abstract:

Stereotactic radiosurgery (SRS) is a high-precision delivery technique that requires comprehensive quality assurance (QA) tests prior to treatment delivery. The isocenter of the delivery beam plays a critical role that affects treatment accuracy. The uncertainty of the isocenter is traditionally assessed using circular cone equipment, a Winston-Lutz (WL) phantom and film. This technique is considered time-consuming and highly dependent on the observer. In this work, the development of a multileaf collimator (MLC)-based isocenter verification tool using an electronic portal imaging device (EPID) was proposed and evaluated. In the conventional WL test method, a ball bearing of 5 mm diameter was aligned to the mechanical isocenter and a circular cone of 10 mm diameter fixed to the gantry head defined the radiation field. The conventional setup was compared to the proposed setup, in which the MLC (10 x 10 mm) defines the radiation field instead of the cone. This represents a more realistic delivery field than the circular cone equipment. Acquisitions with the electronic portal imaging device (EPID) and radiographic film were performed in both experiments. The gantry angles were set as follows: 0°, 90°, 180° and 270°. A software tool was developed in-house using MATLAB/SIMULINK programming to determine the centroids of the radiation field and of the WL phantom shadow automatically. This provides higher accuracy than manual measurement. The deviations between the centroids of the cone-based and MLC-based WL tests were quantified. Comparing film and EPID images, the deviation over all gantry angles was 0.26 ± 0.19 mm for the cone-based and 0.43 ± 0.30 mm for the MLC-based WL test. The absolute deviation between the cone-based and MLC-based WL tests was 0.59 ± 0.28 mm on EPID images and 0.14 ± 0.13 mm on film images. Therefore, MLC-based isocenter verification using the EPID presents a highly sensitive tool for SRS QA.
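
The software step described, finding the centroid of the radiation field and of the ball-bearing shadow and taking their offset, can be sketched as below using intensity-weighted centroids. The synthetic image, thresholds, and pixel pitch are illustrative assumptions, not the in-house MATLAB/SIMULINK tool or real EPID data.

```python
import numpy as np
from scipy import ndimage

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid (row, col) of a binary mask."""
    return np.array(ndimage.center_of_mass(mask))

# Synthetic EPID-like image: a bright 10x10 field with a darker ball-bearing shadow inside.
img = np.zeros((100, 100))
img[45:55, 45:55] = 1.0   # radiation field
img[48:52, 49:53] = 0.3   # BB shadow (slightly offset on purpose)

field_c = centroid(img > 0.1)                    # whole irradiated field
bb_c = centroid((img > 0.1) & (img < 0.5))       # ball-bearing shadow
pixel_pitch_mm = 0.4                             # assumed detector pixel spacing
deviation_mm = np.linalg.norm(field_c - bb_c) * pixel_pitch_mm
print("isocenter deviation: %.2f mm" % deviation_mm)
```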

Keywords: isocenter verification, quality assurance, EPID, SRS

Procedia PDF Downloads 152
497 Classification of Forest Types Using Remote Sensing and Self-Organizing Maps

Authors: Wanderson Goncalves e Goncalves, José Alberto Silva de Sá

Abstract:

Human actions are a threat to the balance and conservation of the Amazon forest. Therefore, environmental monitoring services play an important role in the preservation and maintenance of this environment. This study classified forest types using data from a forest inventory provided by the 'Florestal e da Biodiversidade do Estado do Pará' (IDEFLOR-BIO), located between the municipalities of Santarém, Juruti and Aveiro, in the state of Pará, Brazil, covering an area of approximately 600,000 hectares; bands 3, 4 and 5 of the Landsat TM satellite image; and Self-Organizing Maps. The information from the satellite images was extracted using QGIS software 2.8.1 Wien and was used as a database for training the neural network. The midpoints of each forest inventory sample were linked to the images, and the digital numbers of the pixels were then extracted, composing the database that fed the training and testing of the classifier. The neural network was trained to classify two forest types: Rain Forest of Lowland Emerging Canopy (Dbe) and Rain Forest of Lowland Emerging Canopy plus Open with palm trees (Dbe + Abp) in the Mamuru Arapiuns glebes of Pará State. The number of examples in the training data set was 400, with 200 examples for each class (Dbe and Dbe + Abp), and the size of the test data set was 100, with 50 examples for each class (Dbe and Dbe + Abp). Therefore, the total mass of data consisted of 500 examples. The classifier was compiled in Orange Data Mining 2.7 software and was evaluated in terms of the confusion matrix indicators. The results of the classifier were considered satisfactory, with a global accuracy of 89%, a Kappa coefficient of 78% and an F1 score of 0.88. The efficiency of the classifier was also evaluated with the ROC plot (receiver operating characteristic), obtaining results close to ideal and showing it to be a very good classifier, and demonstrating the potential of this methodology to provide ecosystem services, particularly in anthropogenic areas in the Amazon.
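
The reported evaluation measures (global accuracy, Kappa coefficient, F1 score) can be reproduced from predicted versus true labels as sketched below; the predictions are invented for illustration and are not the study's outputs.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

# Invented predictions for a 100-example test set (50 Dbe = 0, 50 Dbe+Abp = 1).
y_true = np.array([0] * 50 + [1] * 50)
y_pred = y_true.copy()
y_pred[:6] = 1      # a few Dbe examples misclassified
y_pred[50:55] = 0   # a few Dbe+Abp examples misclassified

print("global accuracy:", accuracy_score(y_true, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))
print("F1 score:", f1_score(y_true, y_pred))
```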

Keywords: artificial neural network, computational intelligence, pattern recognition, unsupervised learning

Procedia PDF Downloads 361
496 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 40
495 Research on Teachers’ Perceptions on the Usability of Classroom Space: Analysis of a Nation-Wide Questionnaire Survey in Japan

Authors: Masayuki Mori

Abstract:

This study investigates the relationship between teachers’ perceptions of the usability of classroom space and various elements, including both physical and non-physical, of classroom environments. With the introduction of the GIGA School funding program in Japan in 2019, understanding its impact on learning in classroom space is crucial. The program enabled local educational authorities (LEA) to make it possible to provide one PC/tablet for each student of both elementary and junior high schools. Moreover, at the same time, the program also supported LEA to purchase other electronic devices for educational purposes such as electronic whiteboards, large displays, and real image projectors. A nationwide survey was conducted using random sampling methodology among 100 junior high schools to collect data on classroom space. Of those, 60 schools responded to the survey. The survey covered approximately fifty items, including classroom space size, class size, and educational electronic devices owned. After the data compilation, statistical analysis was used to identify correlations between the variables and to explore the extent to which classroom environment elements influenced teachers’ perceptions. Furthermore, decision tree analysis was applied to visualize the causal relationships between the variables. The findings indicate a significant negative correlation between class size and teachers’ evaluation of usability. In addition to the class size, the way students stored their belongings also influenced teachers’ perceptions. As for the placement of educational electronic devices, the installation of a projector produced a small negative correlation with teachers’ perceptions. The study suggests that while the GIGA School funding program is not significantly influential, traditional educational conditions such as class size have a greater impact on teachers’ perceptions of the usability of classroom space. These results highlight the need for awareness and strategies to integrate various elements in designing the learning environment of the classroom for teachers and students to improve their learning experience.

Keywords: classroom space, GIGA School, questionnaire survey, teachers’ perceptions

Procedia PDF Downloads 21
494 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute Type A aortic dissection is well known for its extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that the patient can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Requirements for tedious pre-processing and demanding calibration procedures further compound the issue, thus hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is amongst the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network, with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the following biomarkers: aortic blood pressure, WSS, and OSI, which are used to predict potential Type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS, and OSI results matching the patient's expected health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic with more clinical samples to further improve the model's clinical applicability.
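
A minimal sketch of a residual network of the kind named above, mapping displacement-vector inputs to the three biomarkers (pressure, WSS, OSI), is given below. The input size, layer widths, and plain MSE loss are assumptions; the physics-informed term in the actual study would come from the governing flow equations, which are only mentioned here as a comment.

```python
import tensorflow as tf

def residual_block(x, units):
    """Fully connected residual block: relu(x + MLP(x))."""
    h = tf.keras.layers.Dense(units, activation="relu")(x)
    h = tf.keras.layers.Dense(units)(h)
    return tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([x, h]))

# Inputs: flattened 4D-MRI displacement vectors (dimension is an assumption).
inp = tf.keras.Input(shape=(256,))
x = tf.keras.layers.Dense(128, activation="relu")(inp)
for _ in range(4):
    x = residual_block(x, 128)
out = tf.keras.layers.Dense(3, name="pressure_wss_osi")(x)  # the three biomarkers

model = tf.keras.Model(inp, out)
# A physics-informed variant would add a residual of the governing equations to this loss.
model.compile(optimizer="adam", loss="mse")
model.summary()
```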

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence.

Procedia PDF Downloads 89
493 Integrating Data Mining with Case-Based Reasoning for Diagnosing Sorghum Anthracnose

Authors: Mariamawit T. Belete

Abstract:

Cereal production and marketing are the means of livelihood for millions of households in Ethiopia. However, cereal production is constrained by technical and socio-economic factors. Among the technical factors, cereal crop diseases are major contributors to the low yield. The aim of this research is to develop an integration of data mining and a knowledge-based system for sorghum anthracnose disease diagnosis that assists agriculture experts and development agents in making timely decisions. The anthracnose diagnosis system gathers information from the Melkassa agricultural research center and attempts to score the anthracnose severity scale. Empirical research was designed for data exploration, modeling, and confirmatory procedures for testing hypotheses and prediction to draw a sound conclusion. WEKA (Waikato Environment for Knowledge Analysis) was employed for the modeling. Knowledge-based systems encompass a variety of approaches based on the knowledge representation method; case-based reasoning (CBR) is one of the popular approaches used in knowledge-based systems. CBR is a problem-solving strategy that uses previous cases to solve new problems. The system utilizes hidden knowledge extracted by employing clustering algorithms, specifically K-means clustering, from a sampled anthracnose dataset. Clustered cases with centroid values are mapped to jCOLIBRI, and then the integrator application is created using NetBeans with JDK 8.0.2. The important parts of a case-based reasoning model include retrieval, the similarity-measuring stage; reuse, which allows the domain expert to adapt the retrieved case solution to suit the current case; revision, to test the solution; and retention, to store the confirmed solution in the case base for future use. Evaluation of the system was done for both system performance and user acceptance. For testing the prototype, seven test cases were used. Experimental results show that the system achieves average precision and recall values of 70% and 83%, respectively. User acceptance testing was also performed involving five domain experts, and an average acceptance of 83% was achieved. Although the results of this study are promising, further investigation of hybrid approaches such as rule-based reasoning and a pictorial retrieval process is recommended.
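
The retrieval step described, clustering past cases with K-means and matching a new case against the cluster centroids, can be sketched as below; the numeric case encoding and the number of clusters are illustrative assumptions, not the jCOLIBRI configuration used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical numeric encoding of past anthracnose cases (e.g. lesion and symptom features).
case_base = rng.uniform(0, 1, size=(60, 5))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(case_base)

def retrieve(new_case: np.ndarray):
    """Return the most similar cluster and its centroid (the 'retrieve' step of CBR)."""
    cluster = kmeans.predict(new_case.reshape(1, -1))[0]
    return cluster, kmeans.cluster_centers_[cluster]

cluster_id, prototype = retrieve(rng.uniform(0, 1, size=5))
print("retrieved cluster:", cluster_id)
```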

Keywords: sorghum anthracnose, data mining, case based reasoning, integration

Procedia PDF Downloads 82
492 Neural Networks Models for Measuring Hotel Users Satisfaction

Authors: Asma Ameur, Dhafer Malouche

Abstract:

Nowadays, user comments on the Internet have an important impact on hotel bookings. This confirms that the e-reputation issue can influence the likelihood of customer loyalty to a hotel. In this way, e-reputation has become a real differentiator between hotels. For this reason, we have a unique opportunity in the opinion mining field to analyze these comments. In fact, this field provides the possibility of extracting information related to the polarity of user reviews. This sentiment study (opinion mining) represents a new line of research for analyzing unstructured textual data. Knowing the e-reputation score helps the hotelier to better manage the marketing strategy. The score we then obtain is translated into the image of hotels and serves to differentiate between them. Therefore, the present research highlights the importance of hotel satisfaction scoring. To calculate the satisfaction score, the sentiment analysis can be carried out with several machine learning techniques. In fact, this study treats the extracted textual data using the artificial neural networks approach (ANNs). In this context, we adopt the aforementioned technique to extract information from the comments available on the ‘Trip Advisor’ website. This paper details the description and the modeling of the ANNs approach for the scoring of online hotel reviews. In summary, the validation of this method provides a significant model for hotel sentiment analysis, making it possible to determine precisely the polarity of hotel users’ reviews. The empirical results show that the ANNs are an accurate approach for sentiment analysis. The obtained results also show that the proposed approach supports dimensionality reduction for textual data clustering. Thus, this study provides researchers with a useful exploration of this technique. Finally, we outline guidelines for future research in the hotel e-reputation field, such as comparing the ANNs with other techniques.
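
A minimal sketch of the approach above, a small neural network scoring review polarity from bag-of-words features, is shown below; the tiny toy review set is an assumption so the example runs, not the TripAdvisor data used in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

reviews = ["great room and friendly staff", "dirty bathroom, terrible service",
           "lovely breakfast, will come back", "noisy, rude reception, never again"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviews)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, labels)

# A hotel satisfaction score could then be the share of its reviews predicted positive.
new = vectorizer.transform(["clean room but unfriendly staff"])
print("predicted polarity:", clf.predict(new)[0])
```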

Keywords: clustering, consumer behavior, data mining, e-reputation, machine learning, neural network, online hotel reviews, opinion mining, scoring

Procedia PDF Downloads 136
491 Towards a Critical Disentanglement of the ‘Religion’ Nexus in the Global East

Authors: Daan F. Oostveen

Abstract:

‘Religion’ as a term is not native to the Global East. The concept ‘religion’ is understood both in its meaning of ‘religious traditions’, commonly referring to the ‘World Religions’, and in its adjectival meaning of ‘the religious’ or ‘religiosity’ as a separate domain of human culture, commonly contrasted with the secular. Though neither of these understandings is native to the historical worldviews of East Asia, their development in modern Western scholarship has had an enormous impact on the self-understanding of cultural diversity in the Global East as well. One example is the identification, and therefore elevation to the status of World Religion, of ‘Buddhism’, which connected formerly dispersed religious practices throughout the Global East and subsumed them under this powerful label. On the other hand, we see how popular religiosity, shamanism and hybrid cultural expressions have become excluded from genuine religion; this had an immense impact on the sense of legitimacy of these practices, which became sometimes labeled as superstition or rejected as magic. Our theoretical frameworks on religion in the Global East do not always consider the complex power dynamics between religious actors, both elite and lay expressions of religion in everyday life, governments and religious studies scholars. In order to get a clear image of how religiosity functions in the context of the Global East, we have to take these power dynamics into account. What is important in particular is the issue of religious identity or the absence of religious identity. The self-understanding of religious actors in the Global East is often very different from what scholars of religion observe. Religious practice, from an etic perspective, is often unrelated to religious identification from an emic perspective. But we also witness the rise of Christian churches in the Global East, in which religious identity and belonging do play a pivotal role. Finally, religion in the Global East has, since the beginning of the 20th century, been conceptualized as the ‘other’ of republicanism or Marxist-Maoist ideology. It is important not to deny the key role of colonial thinking in the process of religion formation in the Global East. In this paper, it is argued that religious realities emerge as a result of our theory of religion, and that these religious realities in turn inform our theory. Therefore, the relationship between the phenomenology of religion and the theory of religion can never be disentangled. In fact, we have to acknowledge that our conceptualizations of religious diversity are always already influenced by our valuation of those cultural expressions that we have come to call ‘religious’.

Keywords: global east, religion, religious belonging, secularity

Procedia PDF Downloads 136
490 Relevance of Cognitive Rehabilitation amongst Children Having Chronic Illnesses – A Theoretical Analysis

Authors: Pulari C. Milu Maria Anto

Abstract:

Background: Cognitive rehabilitation/retraining (CR) has been used variously in the research literature to denote non-pharmacological interventions that target cognitive impairments with the goal of improving cognitive function and functional behaviors to optimize quality of life. Along with cognitive impairments in adults, the need to address acquired cognitive impairments in children (due to chronic illnesses such as congenital heart disease (CHD) or acute lymphoblastic leukemia (ALL)) is inevitable, and it has to be emphasized in the same way as the cognitive impairments seen in children with neurodevelopmental disorders. Methods: All published brain imaging studies (Hermann, B. et al., 2002; Khalil, A. et al., 2004; Follin, C. et al., 2016, etc.) and studies emphasizing cognitive impairments in attention, memory, and/or executive function and behavioral aspects (Henkin, Y. et al., 2007; Bellinger, D. C., & Newburger, J. W., 2010; Cheung, Y. T., et al., 2016) that could be identified were reviewed. Based on a systematic review of the literature (2000-2021) across brain imaging studies, the increased risk of neuropsychological and psychosocial impairments is briefly described, and the clinical and research gap in the area is discussed. Results: 30 papers, both Indian studies and foreign publications (Sage Journals, Delhi Psychiatry Journal, Wiley Online Library, APA PsycNet, Springer, Elsevier, Developmental Medicine and Child Neurology), were identified. Conclusions: In India, a very limited number of brain imaging and neuropsychological studies have indicated the cognitive deficits of children who have, or have undergone treatment for, chronic illness. None of these studies has emphasized the relevance of, or the need for, implementing CR in such children, even though it is high time this was addressed. The review of the current evidence aims to give rehabilitation professionals insight into establishing child-specific CR and to encourage the publication of new findings on implementing CR in such children. This study also raises awareness of the cognitive aspects of a child with acquired cognitive deficits (due to chronic illness), especially during the critical developmental period.

Keywords: cognitive rehabilitation, neuropsychological impairments, congenital heart diseases, acute lymphoblastic leukemia, epilepsy, neuroplasticity

Procedia PDF Downloads 180
489 Generation of ZnO-Au Nanocomposite in Water Using Pulsed Laser Irradiation

Authors: Elmira Solati, Atousa Mehrani, Davoud Dorranian

Abstract:

Generation of a ZnO-Au nanocomposite under laser irradiation of a mixture of ZnO and Au colloidal suspensions is experimentally investigated. In this work, ZnO and Au nanoparticles were first prepared by pulsed laser ablation of the corresponding targets in water using the 1064 nm wavelength of an Nd:YAG laser. In a second step, the produced ZnO and Au colloidal suspensions were mixed in different volumetric ratios and irradiated using the second harmonic of an Nd:YAG laser operating at the 532 nm wavelength. The changes in the size of the nanostructures and the optical properties of the ZnO-Au nanocomposite are studied as a function of the volumetric ratio of the ZnO and Au colloidal suspensions. The crystalline structure of the ZnO-Au nanocomposites was analyzed by X-ray diffraction (XRD), and the optical properties of the samples were examined at room temperature with a UV-Vis-NIR absorption spectrophotometer. Transmission electron microscopy (TEM) was performed by placing a drop of the concentrated suspension on a carbon-coated copper grid, and scanning electron microscopy (SEM) was used to further confirm the morphology of the ZnO-Au nanocomposites. Room-temperature photoluminescence (PL) was measured to characterize the luminescence properties, and the nanocomposites were also characterized by Fourier transform infrared (FTIR) spectroscopy. The X-ray diffraction pattern shows that the ZnO-Au nanocomposites have the polycrystalline structure of Au. The TEM images reveal that the soldering of Au and ZnO nanoparticles involves their adhesion. The plasmon peak of the ZnO-Au nanocomposites is red-shifted and broadened in comparison with pure Au nanoparticles. Using Tauc’s equation, the band gap energy of the ZnO-Au nanocomposites is calculated to be 3.15–3.27 eV (a minimal worked example of this kind of estimate is sketched below). The formation of the ZnO-Au nanocomposites shifts the FTIR peaks of the metal oxide bands to higher wavenumbers. The PL spectra show several weak peaks in the ultraviolet region and several relatively strong peaks in the visible region. The SEM images indicate that the morphology of the ZnO-Au nanocomposites produced in water is spherical, and the TEM images demonstrate that the adhesion increases with increasing volumetric ratio of the Au colloidal suspension. According to the size distribution graphs, the proportion of smaller ZnO-Au nanocomposites also increases with increasing volumetric ratio of the Au colloidal suspension.
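To illustrate how a band gap can be extracted with Tauc’s equation, (αhν)² = A(hν − Eg) for a direct allowed transition, the following minimal sketch fits the linear region of a Tauc plot and extrapolates it to the energy axis. The absorbance values, the sigmoid absorption edge, and the assumption of a direct transition are placeholders for illustration, not data from this paper.

```python
# Minimal sketch of a Tauc-plot band-gap estimate for a direct allowed transition:
# (alpha * h * nu)^2 = A * (h*nu - Eg). The absorbance data below are synthetic.
import numpy as np

wavelength_nm = np.linspace(300, 450, 50)                         # UV-Vis range (placeholder)
absorbance = 1.0 / (1.0 + np.exp((wavelength_nm - 380) / 8.0))    # synthetic absorption edge

h_nu = 1239.84 / wavelength_nm        # photon energy in eV
alpha = absorbance                    # proportional to alpha for a fixed optical path length
tauc = (alpha * h_nu) ** 2            # (alpha*h*nu)^2 for a direct transition

# Fit the steep linear region of the Tauc plot and extrapolate to (alpha*h*nu)^2 = 0.
linear = (tauc > 0.2 * tauc.max()) & (tauc < 0.8 * tauc.max())
slope, intercept = np.polyfit(h_nu[linear], tauc[linear], 1)
E_g = -intercept / slope              # energy-axis intercept gives the band gap in eV
print(f"Estimated band gap: {E_g:.2f} eV")
```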

Keywords: Au nanoparticles, pulsed laser ablation, ZnO-Au nanocomposites, ZnO nanoparticles

Procedia PDF Downloads 344
488 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method

Authors: Mohamad R. Moshtagh, Ahmad Bagheri

Abstract:

Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios are: (1) both gears healthy, (2) one healthy gear and one faulty gear, and (3) an imbalance introduced to a healthy gear. Vibration data were acquired using a Hentek 1008 device and stored in a CSV file, and Python code implemented in the Spyder environment was used for data preprocessing and analysis. Wiener features were extracted using the Wiener feature selection method and then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults on both the training and validation datasets (a minimal illustrative sketch of the Wiener-CNN idea is given below). The comparative analysis revealed the superior performance of the Wiener-CNN approach, which achieved a remarkable accuracy of 100% for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios in the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy for both the two-class training and validation datasets; for the three-class scenario, it reached 100% accuracy in the training dataset and 95.3% in the validation dataset. The Wiener-KNN method yielded 96.3% accuracy for the two-class training dataset and 94.5% for the validation dataset; in the three-class scenario, it achieved 85.3% in the training dataset and 77.2% in the validation dataset. The Wiener-Random Forest method achieved 100% accuracy for the two-class training dataset and 85% for the validation dataset, while in the three-class scenario it attained 100% accuracy in the training dataset and 90.8% in the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in accurately identifying and classifying fault conditions in rotating machinery. The proposed fault detection system combines vibration data analysis with advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
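The paper's own code is not reproduced here; the sketch below only illustrates the general shape of a Wiener-filter-plus-CNN pipeline in Python, using synthetic vibration signals and hypothetical class labels rather than the Hentek 1008 gearbox data.

```python
# Minimal sketch: Wiener-filtered vibration signals classified with a 1D CNN.
# The signals and labels below are synthetic placeholders, not the study's data.
import numpy as np
from scipy.signal import wiener
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
n_samples, n_points = 200, 512
t = np.linspace(0, 1, n_points)

# Synthetic "healthy" (single tone) vs "faulty" (extra harmonic) vibration signals.
healthy = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.normal(size=(n_samples // 2, n_points))
faulty = (np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 120 * t)
          + 0.5 * rng.normal(size=(n_samples // 2, n_points)))
X = np.vstack([healthy, faulty])
y = np.array([0] * (n_samples // 2) + [1] * (n_samples // 2))

# Wiener filtering suppresses broadband noise before the CNN sees the signal.
X_filtered = np.array([wiener(sig, mysize=15) for sig in X])[..., np.newaxis]

model = models.Sequential([
    layers.Input(shape=(n_points, 1)),
    layers.Conv1D(16, 9, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 9, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_filtered, y, epochs=5, validation_split=0.2, verbose=0)
```

For a real gearbox dataset, the synthetic arrays would simply be replaced by the filtered CSV vibration records and their fault labels.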

Keywords: fault detection, gearbox, machine learning, wiener method

Procedia PDF Downloads 80
487 Experimental Investigation of the Impact of Biosurfactants on Residual-Oil Recovery

Authors: S. V. Ukwungwu, A. J. Abbas, G. G. Nasr

Abstract:

The increasing price of natural gas and oil, with the attendant increase in energy demand on world markets in recent years, has stimulated interest in recovering residual oil saturation across the globe. In order to meet energy security needs, efforts have been made to develop new technologies for enhancing the recovery of oil and gas, utilizing techniques such as CO2 flooding, water injection, hydraulic fracturing, and surfactant flooding. Surfactant flooding optimizes production but poses a risk to the environment due to the toxic nature of the chemicals used. Building on proven records of using other types of bacteria to produce biosurfactants for enhancing oil recovery, this research combines biosurfactants to achieve EOR by lowering interfacial tension and contact angle. In this study, three biosurfactants were produced from three Bacillus species from freeze-dried cultures using sucrose 3% (w/v) as the carbon source. Two of the produced biosurfactants were screened with the TEMCO Pendant Drop Image Analysis system for reduction in IFT and contact angle. Interfacial tension was greatly reduced, from 56.95 mN.m-1 to 1.41 mN.m-1, when biosurfactants in a cell-free culture of Bacillus licheniformis were used, compared to 4.83 mN.m-1 for the cell-free culture of Bacillus subtilis. As a result, the cell-free culture of Bacillus licheniformis changed the wettability in the contact angle measurement to more water-wet, as the angle decreased from 130.75° to 65.17°. The influence of microbial treatment on crushed rock samples was also observed by qualitative wettability experiments: samples treated with biosurfactants remained in the aqueous phase, indicating a water-wet system. These results suggest that biosurfactants can effectively change the chemistry of the wetting conditions on diverse surfaces, providing a desirable condition for efficient oil transport and thereby serving as a mechanism for EOR. The environmentally friendly nature of biosurfactants gives them important advantages over chemically synthesized surfactants for industrial applications, including their varied possible structures, low toxicity, and biodegradability.

Keywords: bacillus, biosurfactant, enhanced oil recovery, residual oil, wettability

Procedia PDF Downloads 279
486 Merchants’ Attitudes towards Tourism Development in Mahane Yehuda Market: A Case Study

Authors: Rotem Mashkov, Noam Shoval

Abstract:

In an age when the tourist’s gaze is increasingly focused on the daily lives of locals, local food markets are being rediscovered. Traditional urban markets succeed in reinventing themselves as spaces for consumption, recreation, and culture, enabling authentic experiences and interpersonal interactions with the local culture. Alongside this, the pressure of tourism development may result in commercialization and retail gentrification to the point of losing the sense of local identity. The issue of finding a balance between tourism development and the preservation of unique local features is at the heart of this study and is tested using the case of the Mahane Yehuda market in Jerusalem. The research question, how merchants respond to tourism development in the Mahane Yehuda food market, focuses on local traders, a group of players usually absent from research arenas even though they influence tourism development as well as being influenced by it. Three main research methods were combined in this study. The first two, a survey of articles and a comparative mapping of the business mix, were used to characterize the changes in the Mahane Yehuda market both in its image and in its physical form. The third, in-depth interviews with merchants, was used to examine the traders’ attitudes and responses to tourism development. The findings indicate that there has been a turnaround in the market’s image over the past decade and a half, as well as a significant physical change in the business mix, reflected in a decline of 15% in the number of stalls selling food products and delicacies. The interview data on the traders’ attitudes towards tourism development were inconclusive: traders disagreed about the economic contribution of tourism development in relation to their dependence on the tourism industry, but there was a consensus on the need for authentic elements in the marketplace. The findings also indicate a strong link between the merchants’ response to tourism development and their stall ownership status, as merchants can leverage their position in different ways depending on the type of ownership.

Keywords: business mix, Jerusalem, local food markets, Mahane Yehuda market, merchants’ attitude, ownership status, retail gentrification, tourism development, traditional urban markets

Procedia PDF Downloads 135
485 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, to analyze the biological pathways and regulatory mechanisms of these biomarkers, and to develop a machine learning model that predicts DN-related biomarkers and validates their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by the intersection of these with the fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers (a minimal sketch of this step is given below). Functional validation was conducted using convolutional neural networks (CNNs) for gene set enrichment and immune infiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression. Furthermore, a TF–miRNA–mRNA regulatory network was constructed using natural language processing models to identify 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug–gene interaction network revealed potential therapeutic targets such as UROKINASE–PLG and ATENOLOL–AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting the fibrosis and propionate metabolism pathways.
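The following is only a minimal sketch of the RFE-plus-gradient-boosting step described above, not the authors' actual pipeline; the expression matrix, sample counts, gene names, and XGBoost settings are all assumptions for illustration.

```python
# Minimal sketch: recursive feature elimination (RFE) with an XGBoost classifier
# to shortlist candidate biomarker genes. The expression matrix and labels below
# are synthetic placeholders, not the GEO datasets analyzed in the study.
import numpy as np
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n_samples, n_genes = 120, 300
X = rng.normal(size=(n_samples, n_genes))       # rows = samples, columns = gene expression
y = rng.integers(0, 2, size=n_samples)          # 0 = control, 1 = DN (placeholder labels)
gene_names = np.array([f"GENE_{i}" for i in range(n_genes)])

# Gradient-boosted trees rank feature importance; RFE repeatedly drops the weakest genes.
estimator = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
selector = RFE(estimator, n_features_to_select=7, step=0.2)
selector.fit(X, y)

print("Candidate biomarkers:", gene_names[selector.support_])
```

In the real analysis, the columns would be the intersected DEG/FRG/PMRG genes and the labels the DN versus control samples, with the retained features passed on to functional validation.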

Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost

Procedia PDF Downloads 9
484 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants

Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver

Abstract:

This paper briefly describes three main actions taken at a Waste Water Treatment Plant (WWTP) to reduce its energy consumption: optimization of the biological reactor in the aeration stage by including new control algorithms and introducing new, more efficient equipment; the installation of an innovative hybrid system with zero grid injection (formed by 100 kW of PV generation and 5 kW of mini-wind generation); and an intelligent management system that controls load consumption and energy generation in the most optimal way (a simplified dispatch sketch is given below). This project, called RENEWAT and funded under the European Commission LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes that take place in a WWTP and of introducing renewable energies into these treatment plants, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO2 emissions. Waste water treatment is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough to make the price of treated water exceed that of drinking water, which makes it very difficult for any policy to encourage the reuse of treated water, with a great impact on the water cycle, particularly in areas suffering hydric stress or deficiency. The cost of treating waste water involves another climate-change-related burden: the energy necessary for the process is obtained mainly from the electricity grid, which in most cases in Europe means energy obtained from the burning of fossil fuels. The innovative part of this project is based on the implementation, adaptation, and integration of solutions to this problem, together with a new concept of integrating energy input and operational energy demand. Moreover, there is an important qualitative jump between the technologies previously used and those proposed in the project, which gives it an innovative character, since there are no similar previous experiences of a WWTP including an intelligent discrimination of energy sources, integrating renewable ones (PV and wind) with the grid.
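Purely as an illustration of the "intelligent discrimination of energy sources" idea (the project's actual controller is not detailed here), the sketch below dispatches PV, wind, and grid power to cover an hourly WWTP load; all load and generation profiles beyond the stated 100 kW PV and 5 kW wind capacities are assumptions.

```python
# Minimal sketch: hourly dispatch of PV, wind, and grid power for a WWTP load.
# Renewables are used first; with zero grid injection, any surplus is curtailed
# rather than exported, and the grid covers the remaining deficit. All profiles
# below are illustrative placeholders.
import numpy as np

hours = np.arange(24)
load_kw = 60 + 20 * np.sin(np.pi * hours / 24)                     # assumed WWTP demand
pv_kw = np.clip(100 * np.sin(np.pi * (hours - 6) / 12), 0, None)   # 100 kW PV, daylight only
wind_kw = np.full(24, 3.0)                                         # 5 kW mini-wind, partial output

renewable_kw = pv_kw + wind_kw
used_renewable = np.minimum(renewable_kw, load_kw)   # zero grid injection: no export of surplus
grid_kw = load_kw - used_renewable                   # grid only covers the remaining deficit

renewable_share = used_renewable.sum() / load_kw.sum()
print(f"Renewable share of daily consumption: {renewable_share:.1%}")
print(f"Grid energy drawn: {grid_kw.sum():.0f} kWh")
```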

Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, waste water treatment plants

Procedia PDF Downloads 352