Search results for: vulnerability intelligence
519 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology
Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James
Abstract:
Landslides are a major geo-environmental problem of the Himalaya because of its high ridges, steep slopes, deep valleys, and complex system of streams. They are mainly triggered by rainfall and earthquakes, causing severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, which is situated in the Lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8, and Cartosat DEM data. This paper presents the use of a weighted overlay method in LHZ using fourteen causative factors. The data layers generated and co-registered were slope, aspect, relative relief, soil cover, intensity of rainfall, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer, and reservoir buffer. Seismic analysis is performed using peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI, and supervised classification, were widely used in the process of terrain factor extraction. Lithological features, LULC, drainage pattern, lineaments, and structural features are extracted using digital image processing techniques. Colour, tone, topography, and stream drainage pattern from the imageries are used to analyse geological features. The slope, aspect, and relative relief maps are created using Cartosat DEM data. DEM data is also used for the detailed drainage analysis, which includes TWI, SPI, drainage buffer, and reservoir buffer. In the weighted overlay method, the comparative importance of the causative factors is obtained from experience. In this method, after multiplying the influence factor with the corresponding rating of a particular class, the result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.
Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing
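The core of the weighted overlay computation described above can be sketched in a few lines. The following is a minimal illustration, not the study's actual configuration: the rating rasters, influence weights, and quantile-based reclassification thresholds are placeholder assumptions.

```python
import numpy as np

# Illustrative weighted overlay: each causative factor is a raster layer
# reclassified into class ratings; weights express percent influence.
# All values below are placeholders, not the study's weights or ratings.
layers = {
    "slope":       (np.array([[3, 7], [9, 5]]), 0.20),
    "rainfall":    (np.array([[5, 5], [7, 9]]), 0.15),
    "lithology":   (np.array([[1, 3], [5, 7]]), 0.15),
    "seismic_pha": (np.array([[3, 5], [7, 7]]), 0.10),
    # ... the remaining ten factors would follow the same pattern
}

# Landslide hazard index: sum of (influence weight x class rating) per cell
lhi = sum(weight * rating for rating, weight in layers.values())

# Reclassify the LHI raster into hazard zones by quantile thresholds
low, high = np.quantile(lhi, [0.33, 0.66])
zones = np.digitize(lhi, [low, high])  # 0 = low, 1 = moderate, 2 = high hazard
print(zones)
```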
Procedia PDF Downloads 134
518 Efficiency and Reliability Analysis of SiC-Based and Si-Based DC-DC Buck Converters in Thin-Film PV Systems
Authors: Elaid Bouchetob, Bouchra Nadji
Abstract:
This research paper compares the efficiency and reliability (R(t)) of SiC-based and Si-based DC-DC buck converters in thin-film PV systems with an AI-based MPPT controller. Using Simplorer/Simulink simulations, the study assesses their performance under varying conditions. Results show that the SiC-based converter outperforms the Si-based one in efficiency and cost-effectiveness, especially under high-temperature and low-irradiance conditions. It also exhibits superior reliability, particularly at high temperature and voltage. The reliability function R(t) is analyzed to assess system performance over time; the SiC-based converter demonstrates better reliability when factors like component failure rates and system lifetime are considered. The research focuses on the buck converter's role in charging a lithium battery within the PV system. By combining the SiC-based converter and the AI-based MPPT controller, higher charging efficiency, improved reliability, and cost-effectiveness are achieved. The SiC-based converter proves superior under challenging conditions, emphasizing its potential for optimizing PV system charging. These findings contribute insights into the efficiency and reliability of SiC-based and Si-based converters in PV systems. SiC technology's advantages, coupled with advanced control strategies, promote efficient and sustainable energy storage using lithium batteries. The research supports PV system design and optimization for reliable renewable energy utilization.
Keywords: efficiency, reliability, artificial intelligence, SiC device, thin film, buck converter
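As a point of reference for the reliability analysis mentioned above, a common textbook formulation is the constant-failure-rate (exponential) model for a series system. The sketch below assumes that model and uses placeholder failure rates; the paper's actual R(t) calculation may differ.

```python
import math

# Constant-failure-rate (exponential) reliability model for a series system:
#   R(t) = exp(-(lambda_1 + ... + lambda_n) * t)
# Failure rates (failures per hour) are illustrative placeholders only.
failure_rates = {
    "switch_device": 2.0e-7,  # SiC or Si switch; rate rises with temperature
    "diode":         1.5e-7,
    "inductor":      5.0e-8,
    "capacitor":     1.0e-7,
}

def reliability(t_hours: float) -> float:
    """Probability the converter survives to time t (series assumption)."""
    total_rate = sum(failure_rates.values())
    return math.exp(-total_rate * t_hours)

for years in (1, 5, 10):
    t = years * 8760  # hours per year
    print(f"R({years} y) = {reliability(t):.4f}")
```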
Procedia PDF Downloads 63
517 Effect of Phonological Complexity in Children with Specific Language Impairment
Authors: Irfana M., Priyandi Kabasi
Abstract:
Children with specific language impairment (SLI) have difficulty acquiring and using language despite having all the cognitive prerequisites to support language acquisition. These children have normal non-verbal intelligence, hearing, and oral-motor skills, with no history of social/emotional problems or significant neurological impairment. Nevertheless, their language acquisition lags behind that of their peers. Phonological complexity can be considered a major factor causing inaccurate speech production in this population, and the use of a range of complex phonological stimuli in treatment sessions for SLI should be pursued for a better prognosis of speech accuracy. Hence, there is a need to study the levels of phonological complexity. The present study included 7 individuals diagnosed with SLI and 10 typically developing children. All were Hindi speakers of both genders, aged 4 to 5 years. There were four sets of stimuli: minimal vs. maximal contrast nonwords, minimal vs. maximal coarticulation nonwords, minimal vs. maximal contrast words, and minimal vs. maximal coarticulation words. Each set contained 10 stimuli, and participants were asked to repeat each stimulus. Results showed that production of maximal contrast was significantly more accurate, followed by minimal coarticulation, minimal contrast, and maximal coarticulation. A similar trend was shown for both the word and non-word categories of stimuli. The phonological complexity effect was evident for each participant group. Moreover, the present findings can be applied to the management of SLI, specifically to the selection of stimuli.
Keywords: coarticulation, minimal contrast, phonological complexity, specific language impairment
Procedia PDF Downloads 144
516 An Application of Quantile Regression to Large-Scale Disaster Research
Authors: Katarzyna Wyka, Dana Sylvan, JoAnn Difede
Abstract:
Background and significance: Following a disaster, population-based screening programs are routinely established to assess the physical and psychological consequences of exposure. These data sets are highly skewed, as only a small percentage of trauma-exposed individuals develop health issues. Commonly used statistical methodology in post-disaster mental health generally involves population-averaged models. Such models aim to capture the overall response to the disaster and its aftermath; however, they may not be sensitive enough to accommodate population heterogeneity in symptomatology, such as post-traumatic stress or depressive symptoms. Methods: We use an archival longitudinal data set from the Weill-Cornell 9/11 Mental Health Screening Program established following the World Trade Center (WTC) terrorist attacks in New York in 2001. Participants are rescue and recovery workers who participated in the site cleanup and restoration (n=2960). The main outcome is post-traumatic stress disorder (PTSD) symptom severity assessed via clinician interviews (CAPS). For a detailed understanding of response to the disaster and its aftermath, we are adapting quantile regression methodology with a particular focus on predictors of extreme distress and resilience to trauma. Results: The response variable was defined as the quantile of the CAPS score for each individual under two different scenarios, specifying the unconditional quantiles based on 1) clinically meaningful CAPS cutoff values and 2) the CAPS distribution in the population. We present graphical summaries of the differential effects. For instance, we found that the WTC exposures, namely seeing bodies and feeling that one's life was in danger during rescue/recovery work, were associated with very high PTSD symptoms. A similar effect was apparent in individuals with prior psychiatric history. Differential effects were also present for age and education level. Conclusion: We evaluate the utility of quantile regression in disaster research in contrast to the commonly used population-averaged models. We focused on assessing the distribution of risk factors for post-traumatic stress symptoms across quantiles. This innovative approach provides a comprehensive understanding of the relationship between dependent and independent variables and could be used for developing tailored training programs and response plans for different vulnerability groups.
Keywords: disaster workers, post-traumatic stress, PTSD, quantile regression
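To make the method concrete, the sketch below fits the same model at several quantiles with statsmodels on simulated data; the variable names (caps, saw_bodies, prior_psych, age) are hypothetical stand-ins for the study's fields.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the WTC screening data; the right-skewed outcome
# mimics a symptom-severity score where most workers score low.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "saw_bodies":  rng.integers(0, 2, n),
    "prior_psych": rng.integers(0, 2, n),
    "age":         rng.normal(45, 10, n),
})
df["caps"] = (20 + 15 * df.saw_bodies + 12 * df.prior_psych
              + rng.gamma(2, 8, n))  # skewed CAPS-like outcome

# Fit the same model at several quantiles to expose differential effects
for q in (0.25, 0.50, 0.90):
    res = smf.quantreg("caps ~ saw_bodies + prior_psych + age", df).fit(q=q)
    print(f"q={q}: saw_bodies coef = {res.params['saw_bodies']:.2f}")
```

Comparing the saw_bodies coefficient across quantiles is exactly the kind of differential-effect summary the abstract describes: a predictor can matter little at the median yet strongly at the 90th percentile.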
Procedia PDF Downloads 285
515 A Large Language Model-Driven Method for Automated Building Energy Model Generation
Authors: Yake Zhang, Peng Xu
Abstract:
The development of the building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating building energy models using a large language model and a BEM library, aimed at improving the efficiency of model generation. The method leverages a large language model to parse user-specified requirements for target building models, extracting key features such as building location, window-to-wall ratio, and the thermal performance of the building envelope. The BEM library is used to retrieve energy models that match the target building's characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model and allowing for the creation of a building energy model that adapts to the user's modeling requirements. This study enables the automatic creation of building energy models from natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of large language models in the field of building simulation and performance modeling.
Keywords: artificial intelligence, building energy modelling, building simulation, large language model
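A minimal sketch of the described pipeline follows, with the LLM call replaced by a stub returning structured JSON; the feature schema (location, window-to-wall ratio, wall U-value) and the similarity scoring are assumptions for illustration only.

```python
import json

# Hypothetical pipeline: an LLM parses the user's natural-language request
# into key features, then the BEM library is searched for the closest
# reference model to hand back to the LLM as context.
def llm_extract_features(request: str) -> dict:
    # Stand-in for a real LLM call that returns structured JSON.
    return json.loads('{"location": "Shanghai", "wwr": 0.4, "wall_u_value": 0.5}')

bem_library = [
    {"id": "office_sh_01", "location": "Shanghai", "wwr": 0.35, "wall_u_value": 0.6},
    {"id": "office_bj_02", "location": "Beijing",  "wwr": 0.45, "wall_u_value": 0.4},
]

def mismatch(target: dict, candidate: dict) -> float:
    """Lower is better; crude distance over the assumed feature schema."""
    s = 0.0 if candidate["location"] == target["location"] else 1.0
    s += abs(candidate["wwr"] - target["wwr"])
    s += abs(candidate["wall_u_value"] - target["wall_u_value"])
    return s

target = llm_extract_features("A Shanghai office with 40% window-to-wall ratio ...")
reference = min(bem_library, key=lambda m: mismatch(target, m))
print("Reference model passed to the LLM as context:", reference["id"])
```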
Procedia PDF Downloads 28
514 Next-Gen Solutions: How Generative AI Will Reshape Businesses
Authors: Aishwarya Rai
Abstract:
This study explores the transformative influence of generative AI on startups, businesses, and industries. We explore how large businesses can benefit in the area of customer operations, where AI-powered chatbots can improve self-service and agent effectiveness, greatly increasing efficiency. In marketing and sales, generative AI could transform businesses by automating content development, data utilization, and personalization, resulting in a substantial increase in marketing and sales productivity. In software engineering-focused startups, generative AI can streamline activities, significantly impacting coding processes and work experiences. It can be extremely useful in product R&D for market analysis, virtual design, simulations, and test preparation, altering old workflows and increasing efficiency. Zooming into the retail and CPG industry, industry findings suggest a 1-2% increase in annual revenues, equating to $400 billion to $660 billion. By automating customer service, marketing, sales, and supply chain management, generative AI can streamline operations, optimizing personalized offerings and presenting itself as a disruptive force. While celebrating this economic potential, we acknowledge challenges such as external inference and adversarial attacks. Human involvement remains crucial for quality control and security in the era of generative AI-driven transformative innovation. This talk provides a comprehensive exploration of generative AI's pivotal role in reshaping businesses, recognizing its strategic impact on customer interactions, productivity, and operational efficiency.
Keywords: generative AI, digital transformation, LLM, artificial intelligence, startups, businesses
Procedia PDF Downloads 78
513 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data
Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard
Abstract:
Accurate estimation of remaining useful life (RUL) provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called 'black box' models, whose interpretability falls short of the threshold needed for critical diagnostic decisions in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques in order to provide essential transparency in the decision-making mechanisms of machine learning methods based on sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and the RUL is predicted by a Random Forest model. The workflow involves data gathering, feature engineering, model training, and evaluation; the datasets for these critical components are independently trained and evaluated. While the predictions served have reasonably good performance metrics, such complex models obscure the reasoning behind their predictions and may even undermine the confidence of decision-makers or maintenance teams. This is followed, in the second phase, by global explanations using SHAP and local explanations using LIME to bridge the reliability gap within industrial contexts. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of the overall model behavior and detailed insight into specific predictions. In its third component, the proposed framework incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation. This not only allows the model to predict failures but also presents reasons to relevant personnel, from the key sensor features to the possible failure mechanisms they are linked to. Establishing causality between sensor behaviors and equipment failures creates much value for maintenance teams through better root-cause identification and effective preventive measures, and contributes to making the system more explainable. In yet another stage, several simple surrogate models, including Decision Trees and Linear Models, are used to approximate the complex Random Forest model. These simpler models act as backups, replicating important aspects of the original model's behavior; if the feature explanations obtained from a surrogate model are cross-validated with the primary model, the derived insights are more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models, yielding a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are then presented to decision-makers through a fully transparent condition monitoring and alert system. The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms. Since the system provides explanations for its predictions along with active alerts, maintenance personnel can make informed decisions regarding the correct interventions to extend the life of critical machinery.
Keywords: predictive maintenance, explainable artificial intelligence, prognostics, RUL, machine learning, turbofan engines, C-MAPSS dataset
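The explainability step on a trained Random Forest RUL model can be sketched as below with the shap and lime libraries; the synthetic sensor matrix is a stand-in for C-MAPSS-style features, not the study's data.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in: 5 sensor-derived features and a pseudo-RUL target
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = 100 - 30 * X[:, 0] + 10 * X[:, 2] + rng.normal(0, 5, 300)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Global explanation: mean absolute SHAP value per feature
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

# Local explanation: LIME for one specific prediction
lime_exp = LimeTabularExplainer(X, mode="regression")
print(lime_exp.explain_instance(X[0], model.predict, num_features=3).as_list())
```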
Procedia PDF Downloads 9
512 Freudian Psychoanalysis Towards an Ethics of Finitude
Authors: Katya E. Manalastas
Abstract:
This thesis is a dialogue with Freud about vulnerability and any form of transience we encounter in life. This study argues that Freud's Ethics of Finitude, which is framed within the psychoanalytic context, is a critical theory about how human beings fail to become what they are because of their attachment to their illusions, to their visions of perfection and immortality. Freud's Ethics of Finitude positions itself between our detachment from ideals and the recognition of our own death through our loved ones. His texts portray the predicament of the finite individual who suffers from feelings of guilt and anxiety because of his failure to live up to the demands of his idealistic civilized society. The civilized society has overestimated men's susceptibility to culture. It imposes excessive sublimation, conformity to rigid moral ideals, and instinctive repression to manage human aggression. However, by doing this, civilization becomes a main source of men's suffering. The lack of instinctive freedom results in a community of tamed but unhappy people. Civilization has also constructed theories and measures to rule out death and pain from the realities of life. Therefore, a man lives his life repressing his instincts and ignorant of his own mortality. For Freud, war and neurosis are just a few of the consequences of a civilization that imprisons the individual in cultural hypocrisy instead of giving more play to truthfulness. The Great War destroyed our pride in the attainments of civilization and let loose the hostile impulses within us which we thought had been totally eradicated by means of instinctive repression and sublimation. War destroyed most of the things that we had loved and showed us the impermanence of all the things that we had deemed perfect and everlasting. This chaotic event also revealed the damaging impact of our attachment to past values that no longer bind us, our futile attempts to escape suffering, and our refusal to confront the painfulness of loss and mourning. Against this backdrop, this study launches Freud's Ethics of Finitude, which culminates not in the submission of the individual to unquestioned authority, nor in blind optimism and love for illusory happiness, but in a pedagogy of mourning which brings forth the authentic education of man towards the truth about himself. His Ethics of Finitude is a form of labor in and through which the individual steps out of the realm of illusions and ideals that hinder him from confronting his imperfections and accepting the difficulties of existence. Through his analysis of the Great War, Freud seeks to awaken in us our ability to evaluate the way we see ourselves and to live our lives with death in mind. His Ethics of Finitude leads us to the fulfillment of our first duty as living beings, which is to endure life. We can only endure life if we are prepared to die and let go.
Keywords: critical theory, ethics of finitude, psychoanalysis, Sigmund Freud
Procedia PDF Downloads 87
511 Contribution at Dimensioning of the Energy Dissipation Basin
Authors: M. Aouimeur
Abstract:
The environmental risks of a dam, and particularly the security of the valley downstream of it, constitute a very complex problem. Integrated management and risk-sharing become more and more indispensable. The definition of the "vulnerability" concept can assist in controlling the efficiency of protective measures and in characterizing each valley with respect to flood risk. Security can be enhanced through integrated land management, and the social sciences may be associated with the operational systems of civil protection, in particular warning networks. The passage of extreme floods at the site of the dam can cause the rupture of the structure and important damage downstream. The river bed can be damaged by erosion if it is not well protected, and scouring and flooding problems may be encountered in the area downstream of the dam. Therefore, the protection of the dam is crucial: it must have an energy dissipator in a specific place. The dissipation basin plays a very important role in the security of the dam and the protection of the environment against floods downstream. It dissipates the potential energy created by the dam when an extreme flood passes over the weir, regularizes in a natural manner and with more security the discharge or the elevation of the water level at the crest of the weir, and reduces the speed of the flow downstream of the dam in order to obtain a velocity close to that of the natural river bed. The problem in dimensioning a classic dissipation basin lies in determining the parameters necessary for the design of this structure. This communication presents a simple, fast, and complete graphical method, together with a methodology that determines the main features of the hydraulic jump, the parameters necessary for sizing the classic dissipation basin. The graphical method takes into account the constraints imposed by the terrain or by practice, such as those related to the topography of the site, the preservation of the environmental equilibrium, and technical and economic considerations. The methodology imposes the head loss DH dissipated by the hydraulic jump as a hypothesis (free design) in order to determine all the other parameters of the classic dissipation basin. The imposed head loss DH can be set equal to a selected value or to a certain percentage of the total upstream head created by the dam. With the dimensionless parameter DH+ = DH/k (k: critical depth), the elaborated graphical representation allows the other parameters to be found; multiplying these parameters by k gives the main characteristics of the hydraulic jump, the parameters necessary for the dimensioning of the classic dissipation basin. This solution is often preferred for sizing the dissipation basins of small concrete dams. Verification of the results and their comparison with practical data confirm the validity and reliability of the elaborated graphical method.
Keywords: dimensioning, energy dissipation basin, hydraulic jump, protection of the environment
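For context, the classical hydraulic-jump relations that underlie such a dimensioning exercise can be evaluated directly; the sketch below uses illustrative inflow values, not the paper's design case, and the notation DH for the head loss follows the abstract.

```python
import math

# Classical hydraulic-jump relations used when sizing a stilling basin:
#   conjugate depth  h2/h1 = 0.5 * (sqrt(1 + 8 * Fr1^2) - 1)
#   head loss        DH    = (h2 - h1)^3 / (4 * h1 * h2)
# The inflow values below are illustrative only.
g = 9.81
q = 4.0    # unit discharge, m^2/s
h1 = 0.40  # supercritical depth entering the basin, m

v1 = q / h1
fr1 = v1 / math.sqrt(g * h1)                      # upstream Froude number
h2 = 0.5 * h1 * (math.sqrt(1 + 8 * fr1**2) - 1)   # sequent (conjugate) depth
dh = (h2 - h1) ** 3 / (4 * h1 * h2)               # energy dissipated by the jump

k = (q**2 / g) ** (1 / 3)  # critical depth for unit discharge q
print(f"Fr1={fr1:.2f}  h2={h2:.2f} m  DH={dh:.2f} m  DH+ = DH/k = {dh / k:.2f}")
```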
Procedia PDF Downloads 584
510 Rural Women in Serbia: Key Challenges in Enjoyment of Economic and Social Rights
Authors: Mirjana Dokmanovic
Abstract:
In recent years, the disadvantaged and marginalised position of rural women in the Republic of Serbia has been recognised in a number of national strategies and policy papers. A number of measures have been adopted by the government aimed at the economic empowerment of rural women and at eliminating barriers to their access to decision making and economic and social opportunities. However, the pace of implementation is still slow. The aim of the paper is to indicate the necessity of a comprehensive policy approach to eliminating discrimination against rural women, one that would include policy and financial commitments to enhancing agricultural and rural development as a whole, instead of fragmented measures targeting consequences rather than causes. The paper introduces the main findings of a study of the challenges, constraints, and opportunities of rural women in Serbia in enjoying their economic and social rights. The research methodology included desk research and qualitative analysis of the available data, statistics, policy papers, studies, and reports produced by the government, ministries and other governmental bodies, independent human rights bodies, and civil society organizations (CSOs). The findings of the study reveal that rural women are at great risk of poverty, particularly in remote areas and when old or widowed. Young rural women working in agriculture are also in an unfavorable position, as they do not have opportunities to enjoy their rights during pregnancy and maternity leave, childcare leave, and leave due to the special care of a child. The study indicates that the main causes of their unfavorable position are the prevalent patriarchal surroundings and the economic and social underdevelopment of rural areas in Serbia. Gender inequalities have been particularly present in access to land and property rights, inheritance, education, social protection, healthcare, and decision making. Women living in rural areas are exposed to a high risk of discrimination in all spheres of public and private life, which undermines their enjoyment of basic economic, social, and cultural rights. The vulnerability of rural women to discrimination increases with the intersectionality of other grounds of discrimination, such as disability, ethnicity, age, health condition, and sexual discrimination. If they are victims of domestic violence, they experience a lack of access to shelters and protection services. Despite the State's recognition of the marginalized position of rural women, there is still a lack of a comprehensive policy approach to improving their economic and social position.
Keywords: agricultural and rural development, care economy, discrimination against women, economic and social rights, feminization of poverty, Republic of Serbia, rural women
Procedia PDF Downloads 262
509 Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring
Authors: Jean-Francois Rajotte, Martin Sotir, Frank Gouineau
Abstract:
The installation and management of traffic monitoring devices can be costly from both a financial and resource point of view. It is therefore important to take advantage of in-place infrastructures to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counting performed in similar conditions at the same location. The good performance of our system allows us to consider applications which can monitor the traffic conditions in near real-time, making the counting usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution.
Keywords: traffic monitoring, deep learning, image annotation, vehicles, roads, artificial intelligence, real-time systems
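A minimal sketch of the counting step follows, using a COCO-pretrained detector from torchvision; the image path, score threshold, and choice of network are illustrative assumptions rather than the study's exact setup.

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# COCO class ids for vehicle categories: car, bus, truck
VEHICLE_IDS = {3, 6, 8}

# Load a detector pretrained on COCO and put it in inference mode
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# "camera_frame.jpg" is a placeholder path for one low-quality traffic frame
img = to_tensor(Image.open("camera_frame.jpg").convert("RGB"))
with torch.no_grad():
    out = model([img])[0]

# Count detections of vehicle classes above a confidence threshold
count = sum(1 for label, score in zip(out["labels"], out["scores"])
            if label.item() in VEHICLE_IDS and score.item() > 0.5)
print("vehicles detected:", count)
```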
Procedia PDF Downloads 200
508 Artificial Intelligence Assisted Sentiment Analysis of Hotel Reviews Using Topic Modeling
Authors: Sushma Ghogale
Abstract:
With a surge in user-generated content, feedback, and reviews on the internet, it has become possible and important to know consumers' opinions about products and services. This data is important both for potential customers and for the businesses providing the services. Data from social media is attracting significant attention and has become the most prominent channel for expressing unregulated opinion. Prospective customers look for reviews from experienced customers before deciding to buy a product or service. Several websites provide a platform for users to post their feedback for the provider and potential customers. However, the biggest challenge in analyzing such data lies in extracting latent features and providing term-level analysis of the data. This paper proposes an approach that uses topic modeling to classify reviews into topics and sentiment analysis to mine the opinions. The approach can analyse and classify latent topics mentioned by reviewers on business sites, review sites, or social media, using topic modeling to identify the importance of each topic, followed by sentiment analysis to assess the satisfaction level for each topic. It provides a classification of hotel reviews using multiple machine learning techniques, comparing different classifiers to mine the opinions in user reviews through sentiment analysis. The experiments conclude that the Multinomial Naïve Bayes classifier produces higher accuracy than the other classifiers.
Keywords: latent Dirichlet allocation, topic modeling, text classification, sentiment analysis
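A compact sketch of the proposed pipeline, topic modeling with LDA followed by Multinomial Naive Bayes sentiment classification, is shown below on a toy corpus; real runs would use a large labeled review dataset.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy hotel reviews with sentiment labels (1 = positive, 0 = negative)
reviews = ["room was clean and staff friendly", "terrible food, rude service",
           "great location near the beach", "dirty bathroom and noisy room"]
labels = [1, 0, 1, 0]

X = CountVectorizer(stop_words="english").fit_transform(reviews)

# Topic modeling: per-review topic proportions via LDA
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)

# Sentiment classification per review with Multinomial Naive Bayes
clf = MultinomialNB().fit(X, labels)
print("sentiment:", clf.predict(X), "dominant topic:", topics.argmax(axis=1))
```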
Procedia PDF Downloads 97
507 Artificial Intelligent-Based Approaches for Task Offloading, Resource Allocation and Service Placement of Internet of Things Applications: State of the Art
Authors: Fatima Z. Cherhabil, Mammar Sedrati, Sonia-Sabrina Bendib
Abstract:
In order to support the continued growth and critical latency requirements of IoT applications, and to overcome various obstacles of traditional data centers, mobile edge computing (MEC) has emerged as a promising solution that extends cloud data-processing and decision-making to edge devices. By adopting a MEC structure, IoT applications can be executed locally, on an edge server, on different fog nodes, or in distant cloud data centers. However, we are often faced with wanting to optimize conflicting criteria, such as minimizing the energy consumption of the limited local capabilities (in terms of CPU, RAM, storage, bandwidth) of mobile edge devices while trying to keep high performance (reducing response time, increasing throughput and service availability) at the same time. Achieving one goal may affect the other, making task offloading (TO), resource allocation (RA), and service placement (SP) complex processes. Studying the trade-off between conflicting criteria is a nontrivial multi-objective optimization problem. The paper provides a survey of recent multi-objective optimization (MOO) approaches to TO, SP, and RA used in edge computing environments, particularly artificial intelligence (AI)-based ones, to satisfy the various objectives, constraints, and dynamic conditions related to IoT applications.
Keywords: mobile edge computing, multi-objective optimization, artificial intelligence approaches, task offloading, resource allocation, service placement
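As a toy illustration of the trade-off the survey describes, a weighted-sum scalarization can turn the conflicting energy and latency objectives into a single placement score; the candidate values and weights below are illustrative assumptions only.

```python
# Toy scalarization of the offloading trade-off: each candidate placement
# has an energy cost and a latency, normalized and combined with weights
# expressing the operator's preference. All values are illustrative.
candidates = {
    "local": {"energy_j": 3.0, "latency_ms": 15},
    "edge":  {"energy_j": 1.2, "latency_ms": 40},
    "cloud": {"energy_j": 0.6, "latency_ms": 120},
}

def cost(c, w_energy=0.5, w_latency=0.5):
    # Normalize each objective to [0, 1] before weighting
    e = c["energy_j"] / 3.0
    l = c["latency_ms"] / 120.0
    return w_energy * e + w_latency * l

best = min(candidates, key=lambda k: cost(candidates[k]))
print("chosen placement:", best)
```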
Procedia PDF Downloads 117
506 Bridge Health Monitoring: A Review
Authors: Mohammad Bakhshandeh
Abstract:
Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.
Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis
Procedia PDF Downloads 90
505 SEAWIZARD-Multiplex AI-Enabled Graphene Based Lab-On-Chip Sensing Platform for Heavy Metal Ions Monitoring on Marine Water
Authors: M. Moreno, M. Alique, D. Otero, C. Delgado, P. Lacharmoise, L. Gracia, L. Pires, A. Moya
Abstract:
Marine environments are increasingly threatened by heavy metal contamination, including mercury (Hg), lead (Pb), and cadmium (Cd), posing significant risks to ecosystems and human health. Traditional monitoring techniques often fail to provide the spatial and temporal resolution needed for real-time detection of these contaminants, especially in remote or harsh environments. SEAWIZARD addresses these challenges by leveraging the flexibility, adaptability, and cost-effectiveness of printed electronics, with the integration of microfluidics, to develop a compact, portable, and reusable sensor platform designed specifically for real-time monitoring of heavy metal ions in seawater. The SEAWIZARD sensor is a multiparametric Lab-on-Chip (LoC) device, a miniaturized system that integrates several laboratory functions into a single chip, drastically reducing sample volumes and improving adaptability. The platform integrates three printed graphene electrodes for the simultaneous detection of Hg, Cd, and Pb via square wave voltammetry; these electrodes share the reference and counter electrodes to improve space efficiency. Additionally, it integrates printed pH and temperature sensors to correct environmental interferences that may impact the accuracy of metal detection. The pH sensor is based on a carbon electrode with electrodeposited iridium oxide, while the temperature sensor is graphene-based. A protective dielectric layer is printed on top of the sensor to safeguard it in harsh marine conditions. The use of flexible polyethylene terephthalate (PET) as the substrate enables the sensor to conform to various surfaces and operate in challenging environments. One of the key innovations of SEAWIZARD is its integrated microfluidic layer, fabricated from cyclic olefin copolymer (COC). This microfluidic component allows a controlled flow of seawater over the sensing area, enabling significantly improved detection limits compared to direct water sampling. The system's dual-channel design separates the detection of heavy metals from the measurement of pH and temperature, ensuring that each parameter is measured under optimal conditions. In addition, the temperature sensor is finely tuned with a serpentine-shaped microfluidic channel to ensure precise thermal measurements. SEAWIZARD also incorporates custom electronics that allow for wireless data transmission via Bluetooth, facilitating rapid data collection and user-interface integration. Embedded artificial intelligence further enhances the platform by providing an automated alarm system capable of detecting predefined metal concentration thresholds and issuing warnings when limits are exceeded. This predictive feature enables early warnings of potential environmental disasters, such as industrial spills or toxic levels of heavy metal pollutants, making SEAWIZARD not just a detection tool but a comprehensive monitoring and early-intervention system. In conclusion, SEAWIZARD represents a significant advancement in printed electronics applied to environmental sensing. By combining flexible, low-cost materials with advanced microfluidics, custom electronics, and AI-driven intelligence, SEAWIZARD offers a highly adaptable and scalable solution for real-time, high-resolution monitoring of heavy metals in marine environments. Its compact and portable design makes it an accessible, user-friendly tool with the potential to transform water quality monitoring practices and provide critical data to protect marine ecosystems from contamination-related risks.
Keywords: lab-on-chip, printed electronics, real-time monitoring, microfluidics, heavy metal contamination
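The threshold-based alarm logic described above can be sketched in a few lines; the thresholds below are illustrative placeholders, not regulatory limits or the project's calibrated values.

```python
# Minimal sketch of the automated alarm logic; threshold values (ug/L)
# are illustrative placeholders only.
THRESHOLDS_UG_L = {"Hg": 1.0, "Pb": 10.0, "Cd": 5.0}

def check_sample(concentrations: dict) -> list:
    """Return a warning for every metal exceeding its threshold."""
    return [f"ALERT: {metal} at {value:.1f} ug/L exceeds {THRESHOLDS_UG_L[metal]} ug/L"
            for metal, value in concentrations.items()
            if value > THRESHOLDS_UG_L.get(metal, float("inf"))]

# Example reading from one voltammetry sweep (placeholder values)
for msg in check_sample({"Hg": 2.3, "Pb": 4.0, "Cd": 7.1}):
    print(msg)  # would be pushed over Bluetooth in the deployed system
```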
Procedia PDF Downloads 34
504 Predicting the Compressive Strength of Geopolymer Concrete Using Machine Learning Algorithms: Impact of Chemical Composition and Curing Conditions
Authors: Aya Belal, Ahmed Maher Eltair, Maggie Ahmed Mashaly
Abstract:
Geopolymer concrete is gaining recognition as a sustainable alternative to conventional Portland cement concrete due to its environmentally friendly nature, a key goal of Smart City initiatives, and it has demonstrated its potential as a reliable material for the design of structural elements. However, the production of geopolymer concrete is hindered by batch-to-batch variations, which presents a significant challenge to its widespread adoption. To date, machine learning has had a profound impact on various fields by enabling models to learn from large datasets and predict outputs accurately. This paper proposes an integration between the current shift toward artificial intelligence and the composition of geopolymer mixtures to predict their mechanical properties. The study employs Python to develop a machine learning model, specifically decision trees. The percentage oxides and the chemical composition of the alkali solution, along with the curing conditions, serve as the independent input parameters, irrespective of the waste products used in the mixture, with the compressive strength of the mix as the output parameter. The results showed 90% agreement between the predicted and actual values, with the ratio of the sodium silicate to the sodium hydroxide solution being the dominant parameter in the mixture.
Keywords: decision trees, geopolymer concrete, machine learning, smart cities, sustainability
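A minimal sketch of such a decision-tree regressor follows; the feature columns mirror the inputs described above (oxide percentages, alkali-solution chemistry, curing conditions), but the names and values are synthetic placeholders, not the study's dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic placeholder data mirroring the described input parameters
df = pd.DataFrame({
    "SiO2_pct":       [52, 48, 55, 50, 47, 53],
    "Al2O3_pct":      [24, 28, 22, 25, 29, 23],
    "ss_to_sh_ratio": [2.5, 1.5, 2.0, 2.5, 1.0, 2.0],  # silicate/hydroxide
    "naoh_molarity":  [12, 10, 14, 12, 8, 10],
    "curing_temp_c":  [60, 80, 60, 90, 70, 80],
    "strength_mpa":   [42, 38, 45, 48, 30, 40],
})

X, y = df.drop(columns="strength_mpa"), df["strength_mpa"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("predicted strength (MPa):", model.predict(X_te))
```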
Procedia PDF Downloads 89
503 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of suppliers' performance prediction. Different methods of multi-criteria decision making, such as ANN, GA, fuzzy methods, AHP, etc., have previously been used to predict supplier performance, but the "black-box" characteristic of these methods is yet a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A train-test approach is then utilized for the ANN and GEP separately. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the root mean square error (RMSE) and the coefficient of determination (R2). The results of a case study conducted in Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming predicts supplier performance significantly better, as judged by the respective RMSE and R-squared values. Moreover, using GEP, a mathematical function was also derived to resolve the issue of the ANN's black-box structure in modeling the performance prediction.
Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO
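For clarity, the two comparison metrics can be computed as below for either model's test predictions; the arrays are illustrative placeholders, not SAPCO data.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Placeholder test-set values: observed supplier performance and the
# predictions of each model (illustrative numbers only)
y_true = np.array([0.72, 0.65, 0.80, 0.55, 0.90])
y_gep  = np.array([0.70, 0.66, 0.78, 0.58, 0.88])
y_ann  = np.array([0.68, 0.70, 0.75, 0.60, 0.84])

for name, y_pred in [("GEP", y_gep), ("ANN", y_ann)]:
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    print(f"{name}: RMSE={rmse:.3f}  R2={r2_score(y_true, y_pred):.3f}")
```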
Procedia PDF Downloads 421
502 Bias Prevention in Automated Diagnosis of Melanoma: Augmentation of a Convolutional Neural Network Classifier
Authors: Kemka Ihemelandu, Chukwuemeka Ihemelandu
Abstract:
Melanoma remains a public health crisis, with incidence rates increasing rapidly in the past decades. Improvements in diagnostic accuracy that decrease misdiagnosis using artificial intelligence (AI) continue to be documented. Unfortunately, unintended racially biased outcomes, a product of a lack of diversity in the datasets used, with a noted class imbalance favoring lighter vs. darker skin tones, have increasingly been recognized as a problem, resulting in noted limitations in the accuracy of convolutional neural network (CNN) models. CNN models are prone to biased output due to biases in the dataset used to train them. Our aim in this study was the optimization of convolutional neural network algorithms to mitigate bias in the automated diagnosis of melanoma. We hypothesized that our proposed training algorithm, based on a data augmentation method that optimizes the diagnostic accuracy of a CNN classifier by generating new training samples from the original ones, would reduce bias in the automated diagnosis of melanoma. We applied geometric transformations, including rotations, translations, scale changes, flipping, and shearing. The result is a CNN model provided with modified input data, making for a model that could learn subtle racial features. Optimal selection of the momentum and batch hyperparameters increased our model's accuracy. We show that our augmented model reduces bias while maintaining accuracy in the automated diagnosis of melanoma.
Keywords: bias, augmentation, melanoma, convolutional neural network
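The augmentation policy described above maps naturally onto a standard training-time pipeline; the sketch below uses torchvision with illustrative parameter ranges, which are assumptions rather than the study's tuned values.

```python
from torchvision import transforms

# The geometric transformations listed above, expressed as a training-time
# augmentation pipeline; parameter ranges are illustrative choices.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),        # rotations
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.1), # translations
                            scale=(0.8, 1.2),     # scale change
                            shear=10),            # shearing
    transforms.RandomHorizontalFlip(p=0.5),       # flipping
    transforms.ToTensor(),
])

# Applied on the fly to each lesion image during training, e.g.:
# train_set = torchvision.datasets.ImageFolder("lesions/train", transform=augment)
```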
Procedia PDF Downloads 213
501 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking
Authors: Jonas Colin
Abstract:
Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigm shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and an individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.
Keywords: chatbot, GPT-3.5, metacognition, symbiosis
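A persona-conditioned dialogue turn of the kind described might be sketched as below, assuming a profile distilled from the user's past messages; the profile fields and prompt wording are assumptions, and the call uses the pre-1.0 OpenAI Python SDK interface for GPT-3.5.

```python
import openai  # pre-1.0 SDK interface; adjust for newer SDK versions

openai.api_key = "sk-..."  # placeholder; set your own API key

# Hypothetical profile distilled from prior dialogue; fields are assumptions
profile = {"tone": "reflective", "interests": ["philosophy", "music"],
           "style": "short, probing questions"}

system_prompt = (f"You mirror the user's persona: tone={profile['tone']}, "
                 f"interests={', '.join(profile['interests'])}. "
                 f"Respond in {profile['style']} to prompt reflection.")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system", "content": system_prompt},
              {"role": "user", "content": "I keep postponing hard decisions."}],
)
print(response.choices[0].message["content"])
```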
Procedia PDF Downloads 72
500 Criteria to Access Justice in Remote Criminal Trial Implementation
Authors: Inga Žukovaitė
Abstract:
This work aims to present postdoctoral research on remote criminal proceedings in court, intended to streamline the proceedings while ensuring the effective participation of the parties and the court's obligation to administer substantive and procedural justice. This study tests the hypothesis that remote criminal proceedings do not in themselves violate the fundamental principles of criminal procedure; however, their implementation must ensure the parties' right to effective legal remedies and a fair trial and, only then, address the issues of procedural economy, speed, and the flexibility/functionality of the application of technologies. In order to ensure that changes in the regulation of criminal proceedings are in line with fair trial standards, this research will answer the questions of what conditions, first of all legal and only then organisational, are required for remote criminal proceedings to ensure respect for the parties and enable their effective participation in public proceedings, to create conditions for quality legal defence and its accessibility, and to give the party the correct impression that they are heard and that the court is impartial and fair. It also presents the results of empirical research in the courts of Lithuania conducted using the interview method. The research will serve as a basis for developing a theoretical model for remote criminal proceedings in the EU to ensure a balance between the intention to have innovative, cost-effective, and flexible criminal proceedings and the positive obligation of the State to ensure the rights of participants to just and fair criminal proceedings. Moreover, developments in criminal proceedings keep changing the image of the court itself; the paper therefore creates preconditions for future research on the impact of remote criminal proceedings on trust in courts. The study aims at laying down the fundamentals for theoretical models of remote hearings in criminal proceedings and at making recommendations for safeguarding human rights, in particular the rights of the accused, in such proceedings. The following criteria are relevant for the remote form of criminal proceedings: the purpose of the judicial instance, the legal position of the participants in proceedings, their vulnerability, and the nature of the required legal protection. The content of the study consists of: 1. Identification of the factual and legal prerequisites for a decision to organise the entire criminal proceedings by remote means or to carry out one or several procedural actions by remote means. 2. After analysing the legal regulation and practice concerning the application of the elements of remote criminal proceedings, a distillation of the main legal safeguards for the protection of the rights of the accused, to ensure: (a) the right of effective participation in a court hearing; (b) the right of confidential consultation with defence counsel; (c) the right of participation in the examination of evidence, in particular material evidence, as well as the right to question witnesses; and (d) the right to a public trial.
Keywords: remote criminal proceedings, fair trial, right to defence, technological progress
Procedia PDF Downloads 73
499 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado
Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha
Abstract:
The metabolism of streams is an indicator of ecosystem disturbance due to the influences of the catchment on the structure of the water bodies. The study of respiration and photosynthesis allows the estimation of energy fluxes through food webs and the analysis of autotrophic and heterotrophic processes. We aimed at evaluating the metabolism of streams located in the Brazilian savannah, the Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected: Espraiado (ES), Broa (BR), and Canchim (CA). Every two months, water temperature, pH, and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Also, canopy cover percentages are estimated in situ with a spherical densiometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data every ten minutes for at least three consecutive days. The reaeration coefficient (k2) is estimated through the tracer gas (SF6) method. Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one-station method described in the literature. Three samplings were carried out, in October and December 2015 and February 2016 (the next will be in April, June, and August 2016), and the results from the first two periods are already available. The mean water temperatures in the streams were 20.0 +/- 0.8 C (Oct) and 20.7 +/- 0.5 C (Dec). In general, electrical conductivity values were low (ES: 20.5 +/- 3.5 uS/cm; BR: 5.5 +/- 0.7 uS/cm; CA: 33 +/- 1.4 uS/cm). The mean pH values were 5.0 (BR), 5.7 (ES), and 6.4 (CA). The mean concentrations of total phosphorus were 8.0 ug/L (BR), 66.6 ug/L (ES), and 51.5 ug/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0 ug/L. The BR stream had the lowest concentration of total nitrogen (0.55 mg/L) as compared to CA (0.77 mg/L) and ES (1.57 mg/L). The average discharges were 8.8 +/- 6 L/s (ES), 11.4 +/- 3 L/s (BR), and 2.4 +/- 0.5 L/s (CA). The average percentages of canopy cover were 72% (ES), 75% (BR), and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded gross primary production, with negative net primary production). GPP varied from 0-0.4 g/m2.d (in Oct and Dec), and R varied from 0.9-22.7 g/m2.d (Oct) and from 0.9-7 g/m2.d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of these ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of the metabolism of pristine streams can help define natural reference conditions of trophic state.
Keywords: low-order streams, metabolism, net primary production, trophic state
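The one-station model referred to above balances production, respiration, and reaeration; a minimal numerical sketch is given below with illustrative parameter values, not the measured Cerrado rates.

```python
import numpy as np

# One-station open-water metabolism sketch:
#   dDO/dt = GPP(t) - ER + k2 * (DOsat - DO)
# Parameter values are illustrative placeholders only.
dt = 10 / 1440            # 10-minute logging interval, in days
t = np.arange(0, 3, dt)   # three consecutive days
par = np.clip(np.sin(2 * np.pi * (t % 1) - np.pi / 2), 0, None)  # daylight proxy
par_mean = par.mean()

gpp_daily, er_daily = 0.3, 5.0  # g O2 m^-3 d^-1 (heterotrophic: ER >> GPP)
k2 = 15.0                       # reaeration coefficient, d^-1 (from SF6 tracer)
do_sat = 8.0                    # saturation DO, mg/L

do = np.empty_like(t)
do[0] = 7.0
for i in range(1, len(t)):
    gpp = gpp_daily * par[i] / par_mean   # scale GPP with available light
    ddo = gpp - er_daily + k2 * (do_sat - do[i - 1])
    do[i] = do[i - 1] + ddo * dt

print(f"modeled diel DO range: {do.min():.2f}-{do.max():.2f} mg/L")
```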
Procedia PDF Downloads 258
498 A Question of Ethics and Faith
Authors: Madhavi-Priya Singh, Liam Lowe, Farouk Arnaout, Ludmilla Pillay, Giordan Perez, Luke Mischker, Steve Costa
Abstract:
An Emergency Department consultant identified the failure of medical students to complete the task of clerking a patient in its entirety. As six medical students on our first clinical placement, we recognised our own failure and endeavoured to examine why this failure was consistent among all medical students that had been given this task, despite our best motivations as adult learners. Our aim is to understand and investigate the elements which impeded our ability to learn and perform as medical students in the clinical environment, with reference to the prescribed task. We also aim to generate a discussion around the delivery of medical education, with potential solutions to these barriers. Six medical students gathered together for a comprehensive reflective discussion to identify possible factors leading to the failure of the task. First, we thoroughly analysed the delivery of the instructions with reference to the literature to identify potential flaws. We then examined personal, social, ethical, and cultural factors which may have impacted our ability to complete the task in its entirety. Through collation of our shared experiences, with support from discussions in the fields of medical education and ethics, we identified two major areas that impacted our ability to complete the set task. First, we experienced an ethical conflict: we believed the inconvenience and potential harm inflicted on patients did not justify the positive impact the patient interaction would have on our medical learning. Second, we identified a lack of confidence stemming from multiple factors, including the conflict between preclinical and clinical learning, perceptions of perfectionism in the culture of medicine, and the influence of upward social comparison. After these discussions, we found that the various factors we identified exacerbated the fears and doubts we already had about our own abilities and those of the medical education system. This doubt led us to avoid completing certain aspects of the prescribed tasks and further reinforced our vulnerability and perceived incompetence. Exploration of philosophical theories identified the importance of the role of doubt in education. We propose the need for further discussion around incorporating both pedagogic and andragogic teaching styles in clinical medical education and around the acceptance of doubt as a driver of our learning. Doubt will continue to permeate our thoughts and actions no matter what. The moral or psychological distress that arises from this is the key motivating factor for our avoidance of tasks. If we accept this doubt and education embraces it, it will no longer linger in the shadows as a negative and restrictive emotion but will fuel a brighter dialogue and positive learning experience, ultimately assisting us in achieving our full potential.
Keywords: medical education, clinical education, andragogy, pedagogy
Procedia PDF Downloads 129
497 Counter-Terrorism Policies in the Wider Black Sea Region: Evaluating the Robustness of Constantza Port under Potential Terror Attacks
Authors: A. V. Popa, C. Barna, V. Mihalache
Abstract:
Being the largest port on the Black Sea and functioning as a civil and military nodal point between Europe and Asia, Constantza Port has become a potential target on the international terrorist agenda. The authors use qualitative research based on both face-to-face and online semi-structured interviews with relevant stakeholders (top decision-makers in the Romanian Naval Authority, the Romanian Maritime Training Centre, the National Company "Maritime Ports Administration", and military staff) in order to detect potential vulnerabilities which might be exploited by terrorists in the case of Constantza Port. Likewise, this will enable bringing together the experts' opinions on potential mitigation measures. Subsequently, this paper formulates various counter-terrorism policies to enhance the robustness of Constantza Port under potential terror attacks and connects them with the attributions in the field of critical infrastructure protection conferred by law on the lead national authority for preventing and countering terrorism, namely the Romanian Intelligence Service. Extending the national counter-terrorism efforts to an international level, the authors propose the establishment, among the experts of the NATO member states of the Wider Black Sea Region, of a platform for the exchange of know-how and best practices in the field of critical infrastructure protection.
Keywords: Constantza Port, counter-terrorism policies, critical infrastructure protection, security, Wider Black Sea Region
Procedia PDF Downloads 295
496 Anomaly Detection in Financial Markets Using Tucker Decomposition
Authors: Salma Krafessi
Abstract:
The financial markets are a multifaceted, intricate environment, and enormous volumes of data are produced every day. Accurate anomaly identification in this data is essential for finding investment opportunities, possible fraudulent activity, and market oddities. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker Decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across several decades. The information is converted to a three-dimensional tensor format, which contains internal characteristics and temporal sequences in a sliding-window structure. The tensor is then broken down using Tucker Decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker Decomposition is a possible sign of abnormalities: by setting a statistical threshold, we are able to identify large deviations that indicate unusual behavior. A thorough examination that contrasts the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker Decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis.
Keywords: Tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models
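A minimal sketch of the described workflow with the tensorly library follows; the synthetic price series, window length, and Tucker ranks are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Synthetic stand-in for a closing-price series, with one injected anomaly
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100
prices[700:705] += 25

# Build a 3-way tensor: (windows x window_length x features)
window = 30
features = np.stack([prices, np.gradient(prices)], axis=1)  # price + change
windows = np.stack([features[i:i + window]
                    for i in range(len(prices) - window)])

# Tucker decomposition into a core tensor and factor matrices
core, factors = tucker(tl.tensor(windows), rank=[20, 5, 2])
recon = tl.tucker_to_tensor((core, factors))

# Per-window reconstruction error; large deviations flag anomalies
err = np.linalg.norm(windows - recon, axis=(1, 2))
threshold = err.mean() + 3 * err.std()
print("anomalous windows:", np.where(err > threshold)[0][:10])
```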
Procedia PDF Downloads 70
495 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO
Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky
Abstract:
The widespread adoption of machine learning and artificial intelligence across domains rests on decades of digitization, which has produced vast amounts of data in many types and structures; data processing and preparation are therefore a crucial stage. Applying these techniques to data based on the S1000D standard, however, is challenging because of the standard's complexity and the need to preserve logical information. This paper describes SALERNO, an S1000d AppLicability dEpended pRocessiNg mOdel. This Python-based model analyzes XML S1000D-based files and converts them into a simpler data format usable by machine learning techniques while preserving the logic and relationships in the files. The model parses the files in a given folder, filters them, and extracts the required information into appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by resolving internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be stored in databases and used in different models. Documents in both English and French were tested, and special characters were decoded. Updates to the technical manuals were also taken into consideration. The model was tested on different versions of S1000D, and the results demonstrated its ability to handle applicability, requirements, references, and relationships across all files and at different levels.
Keywords: aeronautics, big data, data processing, machine learning, S1000D
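The abstract gives no implementation detail beyond the language, so the following is only a minimal sketch of the extract-and-group-by-applicability step, assuming a folder of S1000D XML data modules carrying applic annotations (actual S1000D schemas vary by issue and may use XML namespaces). The folder name, element names, and output path are illustrative assumptions.

```python
import xml.etree.ElementTree as ET
from pathlib import Path
import pandas as pd

def extract_applicability(dm_path):
    """Parse one S1000D data module and collect the text of each element
    that carries an applic annotation as a direct child."""
    tree = ET.parse(dm_path)
    rows = []
    for elem in tree.iter():
        applic = elem.find("applic")  # applicability annotation, if present
        if applic is not None:
            text = " ".join(t.strip() for t in elem.itertext() if t.strip())
            rows.append({
                "file": dm_path.name,
                "applicability": " ".join(applic.itertext()).strip(),
                "content": text,
            })
    return rows

rows = []
for dm in Path("data_modules").glob("*.xml"):  # assumed folder of data modules
    rows.extend(extract_applicability(dm))

df = pd.DataFrame(rows)
print(df.groupby("applicability").size())  # content volume per applicability
df.to_excel("salerno_output.xlsx", index=False)  # requires openpyxl
```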
Procedia PDF Downloads 159
494 Mediating Health in Rural Ghana: An Exploratory Study of AI-Driven Health Communications Channels and Media Reportage in Accra
Authors: Amos Ekow Coffie
Abstract:
This exploratory study investigates the impact of AI-driven health communication and media reportage on health outcomes in rural Ghana, focusing on rural communities in the Accra region. AI-driven health communication is the use of artificial intelligence (AI) technologies to design, deliver, and evaluate health messages, interventions, and campaigns. Despite its potential to improve health outcomes, its adoption in rural Ghana is hindered by infrastructure challenges, digital literacy, and cultural factors. Media reportage plays a crucial role in shaping health perceptions and behaviors, but its impact is limited by inadequate health reporting, a lack of specialized health journalists, and limited access to health information. This study explores the integration of AI-driven health communication into media practices in rural Ghana, addressing the following research questions: How does AI-driven health communication affect health outcomes in rural Ghana? What role does media reportage play in shaping health perceptions and behaviors in Accra? How can AI-driven health communication and media reportage be optimized to improve health outcomes in rural Ghana? Using a mixed-methods approach, the study combines surveys, interviews, and content analysis. Its findings will contribute to the development of effective health communication strategies that address the significant health disparities in rural areas of Ghana.
Keywords: AI-driven health communication, media reporting, rural areas, communication channels
Procedia PDF Downloads 31
493 Results of Longitudinal Assessments of Very Low Birth Weight and Extremely Low Birth Weight Infants
Authors: Anett Nagy, Anna Maria Beke, Rozsa Graf, Magda Kalmar
Abstract:
Premature birth involves developmental risks: the earlier the baby is born and the lower its birth weight, the higher the risks. Developmental outcomes for immature, low birth weight infants are hard to predict. Our aim is to identify the factors influencing infant and preschool-age development in very low birth weight (VLBW) and extremely low birth weight (ELBW) preterms. Sixty-one children participated in our longitudinal study: thirty VLBW and thirty-one ELBW. The psychomotor development of the infants was assessed with the Brunet-Lezine Developmental Scale at the corrected ages of one and two years; at three years of age, they were tested with the WPPSI-IV IQ test. Birth weight, gestational age, perinatal complications, gender, and maternal education were added to the data analysis as independent variables. According to our assessments, the subjects as a group scored in the average range on each subscale of the Brunet-Lezine Developmental Scale. Scores were lowest in language at both measurement points. The children's performance improved between one and two years of age, particularly in the domain of coordination. At three years of age, the mean IQ results, although still in the average range, were near its low end on each index. The ELBW preterms performed significantly worse on the Perceptual Reasoning Index. The developmental level at two years predicted IQ better than the level at one year. None of the measures distinguished the genders.
Keywords: preterm, extremely low birth weight, perinatal complication, psychomotor development, intelligence, follow-up
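Although the abstract is a clinical summary rather than a methods description, the comparison of one- and two-year developmental levels as predictors of three-year IQ can be sketched as a regression, assuming the data sit in a flat table. The file and column names below are hypothetical stand-ins, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the follow-up data: one row per child.
df = pd.read_csv("preterm_followup.csv")  # columns assumed in the formula below

# Which developmental quotient (1y or 2y) better predicts three-year IQ,
# controlling for the independent variables listed in the abstract?
model = smf.ols(
    "iq_3y ~ dq_1y + dq_2y + birth_weight + gestational_age "
    "+ C(perinatal_complications) + C(gender) + C(maternal_education)",
    data=df,
).fit()
print(model.summary())  # compare the dq_1y and dq_2y coefficients
```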
Procedia PDF Downloads 245
492 A Convolutional Neural Network-Based Model for Lassa Fever Virus Prediction Using Patient Blood Smear Image
Authors: A. M. John-Otumu, M. M. Rahman, M. C. Onuoha, E. P. Ojonugwa
Abstract:
A convolutional neural network (CNN) model for predicting Lassa fever was built using the Python 3.8.0 programming language with the Keras 2.2.4 and TensorFlow 2.6.1 libraries as the development environment, with the aim of reducing the currently high burden of Lassa fever in West Africa, particularly in Nigeria. The study was prompted by major limitations of the conventional laboratory technique for diagnosing Lassa fever (RT-PCR), as well as shortcomings of the AI-based techniques for screening and prognosis of Lassa fever reported in the literature. A total of 15,679 blood smear microscopic images were collected. The proposed model was trained on 70% of the dataset and tested on the remaining 30% of the microscopic images to avoid overfitting. A 3x3x3 convolution filter was used to extract features from the microscopic images. The proposed CNN-based model achieved a recall of 96%, a precision of 93%, an F1 score of 95%, and an accuracy of 94% in classifying the images as clean or infected samples. Based on the results reported in the literature consulted, the proposed model outperformed the other existing AI-based techniques evaluated. If properly deployed, the model will assist physicians, medical laboratory scientists, and patients in making accurate diagnoses of Lassa fever, helping to reduce the mortality rate due to the Lassa fever virus through sound decision-making.
Keywords: artificial intelligence, ANN, blood smear, CNN, deep learning, Lassa fever
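As a rough illustration of the described setup, here is a minimal Keras/TensorFlow sketch of a binary blood-smear classifier, assuming a recent TensorFlow and an images/{clean,infected} directory layout. The input resolution, layer widths, and epoch count are illustrative assumptions, and reading the abstract's "3x3x3 convolution filter" as 3x3 kernels over three colour channels is one interpretation, not a confirmed detail.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)  # illustrative input resolution
common = dict(validation_split=0.3, seed=42, image_size=IMG_SIZE,
              batch_size=32, label_mode="binary")
train_ds = tf.keras.utils.image_dataset_from_directory(
    "images", subset="training", **common)    # 70% of the images
val_ds = tf.keras.utils.image_dataset_from_directory(
    "images", subset="validation", **common)  # held-out 30%

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # 3x3 kernels, 3 channels
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # clean vs. infected
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```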
Procedia PDF Downloads 120
491 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations
Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay
Abstract:
Monitoring tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although numerous statistical models and artificial intelligence techniques are available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, monitoring the wear on the tool cutting edges is carried out by an operator performing a manual inspection, causing undesirable stoppages of machine tools and, consequently, costs from lost productivity. The present study develops a flute tracking system that segments the sensor signal by the individual physical flutes of a three-flute cutter used in an end milling operation. The purpose of the system is to monitor the cutting condition of each flute separately in order to determine its progressive wear rate and to predict imminent tool failure. The results clearly show that the signals associated with each flute can be effectively segmented using the proposed flute tracking system, and that segmenting the sensor signal by flute makes it possible to investigate the wear of each physical cutting edge of the tool. These findings are significant in that they facilitate online condition monitoring of a cutting tool for each specific flute without the need for operators or engineers to perform manual inspections.
Keywords: machining, milling operation, tool condition monitoring, tool wear prediction
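The abstract does not say how the segmentation is done, but the core idea of attributing samples of a sensor signal to individual flutes can be sketched as follows, assuming a known spindle speed, a fixed sample rate, evenly spaced flutes, and an angular reference at t = 0 (e.g., a spindle encoder pulse). All names and parameter values are illustrative assumptions.

```python
import numpy as np

def segment_by_flute(signal, sample_rate_hz, spindle_rpm, n_flutes=3):
    """Assign each sample of a milling sensor signal to the flute that is
    cutting at that instant, assuming evenly spaced flutes."""
    rev_per_sample = spindle_rpm / 60.0 / sample_rate_hz
    angle = (np.arange(len(signal)) * rev_per_sample) % 1.0  # in revolutions
    flute_idx = (angle * n_flutes).astype(int)  # 0, 1, or 2 for a 3-flute tool
    return [signal[flute_idx == k] for k in range(n_flutes)]

# Example: 10 s of synthetic vibration data at 10 kHz with a 3000 rpm spindle
fs, rpm = 10_000, 3000
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 150 * t) + 0.1 * np.random.randn(len(t))

for k, s in enumerate(segment_by_flute(signal, fs, rpm)):
    print(f"flute {k}: {len(s)} samples, RMS = {np.sqrt(np.mean(s**2)):.3f}")
```

A per-flute statistic such as the RMS printed here can then be tracked over successive cuts to estimate each cutting edge's progressive wear.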
Procedia PDF Downloads 303
490 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study
Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker
Abstract:
In Abu Dhabi, there are many different education curriculums, and the Private Schools and Quality Assurance sector supervises many private schools serving many nationalities. Because the curriculums meet different expats' needs, they have different requirements for registration and success, as well as different starting ages. Each curriculum also has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curriculums are often not placed in the right year group: the start and end dates of the academic year differ, and the date-of-birth cut-off for each year group differs between curriculums. As a result, some students end up younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout the academic journey so that schools can track the student's learning process. In this paper, we propose a computational framework applicable in multicultural countries such as the UAE, where multiple education systems operate side by side. The ultimate goal is to use cloud and fog computing technology, integrated with artificial intelligence techniques of machine learning, to aid a smooth transition when assigning students to year groups and to provide levelling and differentiation information for students who relocate from one education curriculum to another, while also being able to store and access student data from anywhere throughout the academic journey.
Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning
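The framework itself is described only at a high level. As one possible reading of the machine-learning placement step, the sketch below trains a classifier on historical transfer records to suggest a year group for an incoming student; the features, labels, and file name are assumptions for illustration, not the authors' design.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical historical records of past curriculum transfers.
df = pd.read_csv("transfer_records.csv")
features = pd.get_dummies(
    df[["age_years", "prior_assessment_score",
        "source_curriculum", "target_curriculum"]],
    columns=["source_curriculum", "target_curriculum"],  # one-hot encode
)
labels = df["assigned_year_group"]  # year group chosen by human reviewers

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("placement accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In a deployed version of such a framework, the trained model would run as a cloud or fog service consulted at admission time, with the student's record stored centrally so later schools can retrieve it.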
Procedia PDF Downloads 143