Search results for: 3D Models
5792 A Grey-Box Text Attack Framework Using Explainable AI
Authors: Esther Chiramal, Kelvin Soh Boon Kai
Abstract:
Explainable AI is a powerful strategy for understanding complex black-box model predictions in human-interpretable language. It provides the evidence required to build trustworthy and reliable AI systems. On the other hand, however, it also opens the door to locating possible vulnerabilities in an AI model. Traditional adversarial text attacks use word substitution, data augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and not practical, as they can be easily detected by humans, e.g., changing the word from “Poor” to “Rich”. We propose a simple yet effective grey-box cum black-box approach that does not require knowledge of the target model while using a set of surrogate Transformer/BERT models to perform the attack using explainable AI techniques. As Transformers are the current state-of-the-art models for almost all Natural Language Processing (NLP) tasks, an attack generated from BERT1 is transferable to BERT2. This transferability is made possible by the attention mechanism in the transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how transformers learn by attacking a few surrogate transformer variants, each based on a different architecture. We demonstrate that this approach is highly effective at generating semantically sound sentences by changing as little as one word, a change that is not detectable by humans while still fooling other BERT models.
Keywords: BERT, explainable AI, grey-box text attack, transformer
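A minimal sketch of the surrogate-based, one-word attack idea, assuming occlusion (leave-one-word-out) as the explanation technique; the scoring callables stand in for fine-tuned BERT variants, and the synonym source is hypothetical:

```python
from typing import Callable, List

def occlusion_importance(words: List[str],
                         score: Callable[[str], float]) -> List[float]:
    # Importance of word i = drop in the surrogate's score when it is removed.
    base = score(" ".join(words))
    return [base - score(" ".join(words[:i] + words[i + 1:]))
            for i in range(len(words))]

def one_word_attack(sentence: str,
                    surrogate_score: Callable[[str], float],
                    substitutes: Callable[[str], List[str]]) -> str:
    # Replace only the single most influential word with the substitute
    # that lowers the surrogate's score the most.
    words = sentence.split()
    imp = occlusion_importance(words, surrogate_score)
    target = max(range(len(words)), key=lambda i: imp[i])
    best, best_score = sentence, surrogate_score(sentence)
    for cand in substitutes(words[target]):
        trial = " ".join(words[:target] + [cand] + words[target + 1:])
        if (s := surrogate_score(trial)) < best_score:
            best, best_score = trial, s
    return best

# Toy demo: the "surrogate" scores a sentence by counting negative cue words.
NEG = {"poor", "bad", "terrible"}
toy_score = lambda s: float(sum(w.lower() in NEG for w in s.split()))
toy_subs = lambda w: ["mediocre", "lacking"]     # hypothetical synonym source
print(one_word_attack("the plot was poor and slow", toy_score, toy_subs))
```

The adversarial sentence would then be evaluated against a different, unseen BERT model to test transferability.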
Procedia PDF Downloads 138
5791 Evaluation Metrics for Machine Learning Techniques: A Comprehensive Review and Comparative Analysis of Performance Measurement Approaches
Authors: Seyed-Ali Sadegh-Zadeh, Kaveh Kavianpour, Hamed Atashbar, Elham Heidari, Saeed Shiry Ghidary, Amir M. Hajiyavand
Abstract:
Evaluation metrics play a critical role in assessing the performance of machine learning models. In this review paper, we provide a comprehensive overview of performance measurement approaches for machine learning models across supervised, unsupervised, and reinforcement learning. For each category, we discuss the most widely used metrics, including their mathematical formulations and interpretation. Additionally, we provide a comparative analysis of performance measurement approaches for metric combinations. Our review paper aims to provide researchers and practitioners with a better understanding of performance measurement approaches and to aid in the selection of appropriate evaluation metrics for their specific applications.
Keywords: evaluation metrics, performance measurement, supervised learning, unsupervised learning, reinforcement learning, model robustness and stability, comparative analysis
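As a brief illustration of the supervised-learning metrics such a review covers (not taken from the paper itself), scikit-learn computes the standard classification measures directly from labels and predictions:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # hypothetical model output

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1       :", f1_score(y_true, y_pred))         # harmonic mean of P, R
print(confusion_matrix(y_true, y_pred))
```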
Procedia PDF Downloads 76
5790 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records
Authors: Sara ElElimy, Samir Moustafa
Abstract:
Mobile network operators are facing many challenges in the digital era, especially with high demands from customers. Since mobile network operators are considered a source of big data, traditional techniques are no longer effective in the new era of big data, the Internet of Things (IoT), and 5G; as a result, handling different big datasets effectively becomes a vital task for operators, given the continuous growth of data and the move from long-term evolution (LTE) to 5G. There is therefore an urgent need for effective big data analytics to predict future demands, traffic, and network performance to fulfil the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and recurrent neural networks (RNN) are employed in a data-driven application for mobile network operators. The main framework for each model includes parameter identification, estimation, prediction, and a final data-driven application of this prediction to business and network performance use cases. These models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. The performance of these models, assessed using well-known evaluation criteria, shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on such a dataset than the RNN (the deep learning model).
Keywords: big data analytics, machine learning, CDRs, 5G
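A minimal sketch of the ARIMA leg of such a framework, with an illustrative order (p, d, q) and synthetic stand-in data rather than the actual CDR series:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
traffic = np.cumsum(rng.normal(0.5, 1.0, 200))  # stand-in hourly CDR volume

model = ARIMA(traffic, order=(2, 1, 1))  # identification -> estimation
fit = model.fit()
forecast = fit.forecast(steps=24)        # prediction step of the framework
print("fitted parameters:", np.round(fit.params, 3))
print("next 24 steps:", np.round(forecast, 2))
```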
Procedia PDF Downloads 140
5789 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores
Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay
Abstract:
Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online hard-negative mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.
Keywords: retail stores, faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition
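A hedged sketch of the recognition stage: a ResNet-18 backbone reshaped into an embedding encoder and trained with a triplet loss. The 128-d head and margin are assumptions, and the paper additionally applies online hard-negative mining:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

encoder = resnet18(weights=None)
encoder.fc = nn.Linear(encoder.fc.in_features, 128)  # 128-d embedding head

criterion = nn.TripletMarginLoss(margin=1.0)
anchor = torch.randn(8, 3, 224, 224)    # detected rack crops (dummy data)
positive = torch.randn(8, 3, 224, 224)  # same product, augmented view
negative = torch.randn(8, 3, 224, 224)  # different product

loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()                          # one training step's gradient
print("triplet loss:", float(loss))
```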
Procedia PDF Downloads 157
5788 A Multi-Release Software Reliability Growth Model Incorporating Imperfect Debugging and Change-Point under the Simulated Testing Environment and Software Release Time
Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar
Abstract:
Testing during software development is a crucial step, as it makes the software more efficient and dependable. To estimate software reliability through the mean value function, many software reliability growth models (SRGMs) were developed under the assumption that the operating and testing environments are the same. In practice, this is not true, because when the software works in a natural field environment, its reliability differs. This article discusses an SRGM comprising a change-point and imperfect debugging in a simulated testing environment, and then extends it in a multi-release direction. Initially, the software is released to the market with few features; according to the market's demand, the software company upgrades the current version by adding new features as time passes. We have therefore proposed a generalized multi-release SRGM in which the change-point and imperfect debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept has been adopted to determine the change point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets, and the results demonstrate that the proposed model fits the datasets better. We also discuss the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors
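For orientation, a minimal sketch of fitting a classic NHPP mean value function m(t) = a(1 − e^(−bt)) to cumulative failure counts; the proposed model further incorporates change-points, imperfect debugging, and environmental factors, all omitted here, and the failure data below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):          # expected cumulative failures by time t
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 21)              # testing weeks
failures = np.array([5, 9, 13, 16, 19, 21, 23, 25, 26, 28,
                     29, 30, 31, 32, 32, 33, 34, 34, 35, 35])

(a, b), _ = curve_fit(mean_value, t, failures, p0=(40, 0.1))
print(f"a (total faults) = {a:.1f}, b (detection rate) = {b:.3f}")
```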
Procedia PDF Downloads 74
5787 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives
Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic
Abstract:
The present paper deals with chromatographic lipophilicity prediction of newly synthesized steroid derivatives. The prediction was achieved using in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with the artificial neural networks (ANN) approach. Chromatographic lipophilicity of the investigated compounds was expressed as the retention factor logk. For QSRR modeling, a feedforward back-propagation ANN with a gradient descent learning algorithm was applied. The generated ANN models were then ranked using the novel sum of ranking differences (SRD) method. The aim was to identify the most consistent QSRR model and to reveal the similarities or dissimilarities between the models. In this study, SRD was performed with average logk values as the reference. An excellent correlation between the experimentally observed retention factor logk and the values predicted by the ANN was obtained, with a correlation coefficient higher than 0.9890. Statistical results show that the established ANN models can be applied for the required purpose. This article is based upon work from COST Action (TD1305), supported by COST (European Cooperation in Science and Technology).
Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences
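A brief sketch of a QSRR-style model of this kind: a feedforward ANN trained with gradient descent to map molecular descriptors to logk. The descriptor values are random stand-ins, not the in silico descriptors used in the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))               # 5 stand-in molecular descriptors
logk = X @ [0.4, -0.2, 0.1, 0.3, -0.1] + rng.normal(0, 0.05, 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, logk, random_state=1)
ann = MLPRegressor(hidden_layer_sizes=(8,), solver="sgd",  # gradient descent
                   max_iter=5000, random_state=1).fit(X_tr, y_tr)
print("R^2 on held-out compounds:", round(ann.score(X_te, y_te), 3))
```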
Procedia PDF Downloads 322
5786 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as the statistical method in ground motion prediction, namely Artificial Neural Networks, Random Forest, and Support Vector Machines. The results indicate that these algorithms satisfy physically sound characteristics such as magnitude scaling and distance dependency without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms; the conventional method remains the better tool when only limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding damage for pre-defined limit states and, therefore, control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms like artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate estimates in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
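A hedged sketch of the first step, a Random Forest ground-motion model predicting ln(PGA) from magnitude, distance, and a site-condition proxy; the synthetic attenuation relation generating the data is purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 500
mag = rng.uniform(4.0, 7.5, n)
dist = rng.uniform(5.0, 200.0, n)        # source-to-site distance, km
vs30 = rng.uniform(150.0, 900.0, n)      # site condition proxy, m/s
ln_pga = (1.2 * mag - 1.6 * np.log(dist) - 0.3 * np.log(vs30)
          + rng.normal(0, 0.4, n))       # toy attenuation + record noise

X = np.column_stack([mag, dist, vs30])
gmm = RandomForestRegressor(n_estimators=200, random_state=2).fit(X, ln_pga)
print(gmm.predict([[6.5, 30.0, 400.0]]))  # ln(PGA) for one scenario
```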
Procedia PDF Downloads 106
5785 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently, while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method, and they confirmed that the method provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions
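A simplified sketch of likelihood estimation for a spliced exponential-Pareto loss model with a fixed threshold; the paper instead estimates the threshold, imposes smoothness conditions, uses Fisher scoring, and adds regression predictors, all of which are omitted here:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
theta = 10.0                                            # fixed threshold
small = rng.exponential(3.0, 900)
losses = np.concatenate([small[small < theta],          # frequent, small
                         theta * (1 + rng.pareto(2.5, 100))])  # rare, heavy

def neg_loglik(p):
    lam, alpha, w = p
    if lam <= 0 or alpha <= 0 or not 0 < w < 1:
        return np.inf
    x = losses
    # Truncated exponential below theta, Pareto tail above, weighted by w.
    f = np.where(x <= theta,
                 w * lam * np.exp(-lam * x) / (1 - np.exp(-lam * theta)),
                 (1 - w) * alpha * theta**alpha / x**(alpha + 1))
    return -np.sum(np.log(f))

res = minimize(neg_loglik, x0=[0.5, 2.0, 0.8], method="Nelder-Mead")
print("lambda, alpha, weight:", np.round(res.x, 3))
```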
Procedia PDF Downloads 36
5784 Equilibrium and Kinetic Studies of Lead Adsorption on Activated Carbon Derived from Mangrove Propagule Waste by Phosphoric Acid Activation
Authors: Widi Astuti, Rizki Agus Hermawan, Hariono Mukti, Nurul Retno Sugiyono
Abstract:
The removal of lead ions (Pb²⁺) from aqueous solution by activated carbon prepared from mangrove propagules by phosphoric acid activation was investigated in a batch adsorption system. Batch studies were carried out to address various experimental parameters, including pH and contact time. The Langmuir and Freundlich models were used to describe the adsorption equilibrium, while the pseudo-first-order and pseudo-second-order models were used to describe the kinetics of Pb²⁺ adsorption. The results show that the adsorption data are in accordance with the Langmuir isotherm model and the pseudo-second-order kinetic model.
Keywords: activated carbon, adsorption, equilibrium, kinetic, lead, mangrove propagule
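A minimal sketch of fitting the Langmuir isotherm qe = qm·KL·Ce/(1 + KL·Ce) to equilibrium data; the values below are invented stand-ins for the measured Pb²⁺ data:

```python
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])   # mg/L at equilibrium
qe = np.array([8.1, 15.2, 22.4, 28.9, 33.5, 36.0])  # mg/g adsorbed

def langmuir(Ce, qm, KL):
    return qm * KL * Ce / (1.0 + KL * Ce)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(40.0, 0.05))
print(f"qm = {qm:.1f} mg/g, KL = {KL:.3f} L/mg")
```

The pseudo-second-order kinetic model, t/qt = 1/(k2·qe²) + t/qe, could be fitted to the contact-time data in exactly the same way.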
Procedia PDF Downloads 167
5783 Housing Delivery in Nigeria: Repackaging for Sustainable Development
Authors: Funmilayo L. Amao, Amos O. Amao
Abstract:
It has been observed that the majority of people live in poor-quality housing or are totally homeless in urban centres, despite all governmental policies to provide housing to the public. On the supply side, various government policies in the past were formulated to overcome the huge shortage through several Housing Reform Programmes. Despite these past efforts, housing continues to be a mirage to the ordinary Nigerian. Current mass housing delivery programmes, such as the affordable housing scheme that utilizes Public-Private Partnership efforts and several Private Finance Initiative models, could only provide about 3% of the required stock. This suggests the need for a holistic approach to the problem. The aim of this research is to identify the problems hindering the delivery of housing in Nigeria and their effects on housing affordability. The specific objectives are to identify the causes of housing delivery problems, to examine different housing policies over the years, and to suggest a way forward for sustainable housing delivery. This paper also reviews past and current housing delivery programmes in Nigeria, analyses the demand- and supply-side issues, and identifies the various housing delivery mechanisms in current practice. The objective, therefore, is to give an insight into the delivery options for the sustainability of housing in Nigeria, given the existing delivery structures and the framework specified in the New National Housing Policy. The secondary data were obtained from books, journals, and seminar papers. The conclusion is that we cannot copy models from other nations, but should rather evolve workable models based on our socio-cultural background to address the huge housing shortage in Nigeria. Recommendations are made in this regard.
Keywords: housing, sustainability, housing delivery, housing policy, housing affordability
Procedia PDF Downloads 297
5782 Implementation of Lean Production in Business Enterprises: A Literature-Based Content Analysis of Implementation Procedures
Authors: P. Pötters, A. Marquet, B. Leyendecker
Abstract:
The objective of this paper is to investigate different approaches for the implementation of Lean Production in companies and to provide a structured overview of those approaches. The present work is therefore intended to answer the following research question: What differences and similarities exist between the various systematic approaches and phase models for the implementation of Lean Production? To present the various implementation approaches discussed in the literature, a qualitative content analysis was conducted: within the framework of a qualitative survey, a selection of texts dealing with Lean Production and its introduction was examined. The analysis presents the different implementation approaches from the literature, covering the descriptive aspect of the study, and also provides insights into similarities and differences among the approaches, drawn from the analysis of latent text content and author interpretations. The focus is on identifying differences and similarities among systematic approaches for implementing Lean Production; the research question takes into account the main object of consideration, the objectives pursued, the starting point, the procedure, and the endpoint of each implementation approach. The study defines the concept of Lean Production and presents the various approaches described in the literature that companies can use to implement Lean Production successfully, distinguishing between five systematic implementation approaches and seven phase models to help companies choose the most suitable approach for their implementation project. The findings can contribute to enhancing transparency regarding the existing approaches for implementing Lean Production, enabling companies to compare and contrast the available implementation approaches and choose the most suitable one for their specific project.
Keywords: implementation, lean production, phase models, systematic approaches
Procedia PDF Downloads 104
5781 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges
Authors: Dianelys Vega, Carlos Magluta, Ney Roitman
Abstract:
The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important keys in the design of slender structures is the human-structure interaction (HSI): how moving people interact with structures, and the effect this has on the structures' dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of pedestrian mechanisms, there are still gaps in knowledge, and more reliable models need to be investigated. On this topic, several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. This work therefore contributes to a better understanding of the phenomenon by providing an experimental validation of a pedestrian walking model and a human-structure interaction model. In this study, a bi-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model that was applied to a prototype footbridge. Numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different walking speeds over instrumented force platforms to measure the walking force, while an accelerometer placed at the waist of each subject simultaneously measured the acceleration of the center of mass. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters are estimated. In addition, experimental data of a walking pedestrian on a flexible structure were used to validate the interaction model, through comparison of the measured and simulated structural response at mid-span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center of mass acceleration for normal and slow walking speeds, being less efficient for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Moreover, the interaction model was also capable of estimating the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it was effective in reproducing the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validations are required, the interaction model also seems to be a useful framework to estimate the dynamic response of structures under loads induced by walking pedestrians.
Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction
Procedia PDF Downloads 132
5780 Research on Residential Block Fabric: A Case Study of Hangzhou West Area
Abstract:
Residential block construction in big Chinese cities began in the 1950s, and four models have had a far-reaching influence on the modern residential block over the course of its development: the unit compound and the residential district from the 1950s to the 1980s, and the gated community and the open community from the 1990s to the present. Based on an analysis of the four models' fabric, the article takes residential blocks in the Hangzhou west area as an example and carries out studies at the urban structure level and the block spatial level, mainly covering the urban road network, land use, community function, road organization, public space, and building fabric. Finally, the article puts forward a semi-open sub-community strategy to improve the current fabric.
Keywords: Hangzhou west area, residential block model, residential block fabric, semi-open sub-community strategy
Procedia PDF Downloads 419
5779 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
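A hedged sketch of the text side of such a pipeline: TF-IDF features from report text feeding a Random Forest urgency classifier. The miniature reports and labels are invented, and the study additionally merges these features with image-derived data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

reports = ["no acute cardiopulmonary abnormality",
           "right lower lobe opacity concerning for pneumonia",
           "heart size normal, lungs clear",
           "large left pleural effusion, urgent follow-up advised"]
urgent = [0, 1, 0, 1]   # hypothetical urgency labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(reports, urgent)
print(clf.predict(["clear lungs, no effusion"]))
```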
Procedia PDF Downloads 47
5778 Debriefing Practices and Models: An Integrative Review
Authors: Judson P. LaGrone
Abstract:
Simulation-based education in curricula was once a luxurious component of nursing programs but now serves as a vital element of an individual's learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed, allowing the instructor(s) or trained professional(s) to act as a debriefer and guide a reflection with the purpose of acknowledging, assessing, and synthesizing the thought process, decision-making process, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and educational experience, allowing the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment, with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices of debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systematic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was useful to narrow down the search of articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios of nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature, with similar effectiveness for participants in clinical simulation-based pedagogy. Themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. Models ranged from self-debriefing, facilitator-led debriefing, and video-assisted debriefing to rapid cycle deliberate practice and reflective debriefing. A recurring finding centered on the emphasis on continued research for systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models among nursing curricula, with an increasing rate of ill-prepared faculty to facilitate the debriefing phase of the simulation.
Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education
Procedia PDF Downloads 142
5777 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting
Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero
Abstract:
In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, starting from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting using the lost-wax technique of Ceramic Shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material; this material is heated to melt the PLA, obtaining an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce cost and time compared with hand modeling in wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, sculptures created with this technique have a size limit: when pieces printed in PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer allows obtaining more detailed and smaller pieces than the FDM. Such small models are quite difficult and complex to melt using the lost-wax technique of Ceramic Shell casting. As an alternative, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of introducing the model into a cylinder into which the refractory material is also poured; the molds are heated in an oven to melt the model and fire the molds, and finally the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to distribute the metal properly. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for using 3D printers, both with PLA and with resin, and first tests are being carried out to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.
Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling
Procedia PDF Downloads 135
5776 Machine Learning Approaches to Water Usage Prediction in Kocaeli: A Comparative Study
Authors: Kasim Görenekli, Ali Gülbağ
Abstract:
This study presents a comprehensive analysis of water consumption patterns in Kocaeli province, Turkey, utilizing various machine learning approaches. We analyzed data from 5,000 water subscribers across residential, commercial, and official categories over an 80-month period from January 2016 to August 2022, resulting in a total of 400,000 records. The dataset encompasses water consumption records, weather information, weekends and holidays, previous months' consumption, and the influence of the COVID-19 pandemic. We implemented and compared several machine learning models, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). Particle Swarm Optimization (PSO) was applied to optimize the hyperparameters of all models. Our results demonstrate varying performance across subscriber types and models. For official subscribers, Random Forest achieved the highest R² of 0.699 with PSO optimization. For commercial subscribers, Linear Regression performed best, with an R² of 0.730 with PSO. Residential water usage proved more challenging to predict, with XGBoost achieving the highest R² of 0.572 with PSO. The study identified key factors influencing water consumption, with previous months' consumption, meter diameter, and weather conditions among the most significant predictors. The impact of the COVID-19 pandemic on consumption patterns was also observed, particularly in residential usage. This research provides valuable insights for effective water resource management in Kocaeli and similar regions, considering Turkey's high water loss rate and below-average per capita water supply. The comparative analysis of different machine learning approaches offers a comprehensive framework for selecting appropriate models for water consumption prediction in urban settings.
Keywords: machine learning, water consumption prediction, particle swarm optimization, COVID-19, water resource management
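A miniature sketch of the PSO idea applied to a single Random Forest hyperparameter (max_depth) scored by cross-validated R²; swarm size, inertia, and acceleration coefficients are common textbook defaults, and the data are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=8, noise=10, random_state=7)

def fitness(depth):
    model = RandomForestRegressor(max_depth=int(round(depth)),
                                  n_estimators=50, random_state=7)
    return cross_val_score(model, X, y, cv=3).mean()  # CV R^2

rng = np.random.default_rng(7)
pos = rng.uniform(2, 20, 6)                # 6 particles over max_depth
vel = np.zeros(6)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]
for _ in range(10):
    r1, r2 = rng.random(6), rng.random(6)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 2, 20)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()]
print("best max_depth:", int(round(gbest)), "CV R^2:", round(pbest_f.max(), 3))
```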
Procedia PDF Downloads 18
5775 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that models are not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. For each CNN model trained on the dataset including generated fake images, the best performance, accuracy, and mean average error rate were recorded. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: anti-spoofing, CNN, fingerprint recognition, GAN
Procedia PDF Downloads 184
5774 Towards the Reverse Engineering of UML Sequence Diagrams Using Petri Nets
Authors: C. Baidada, M. H. Abidi, A. Jakimi, E. H. El Kinani
Abstract:
Reverse engineering has become a viable method to measure an existing system and reconstruct the necessary model from its original. The reverse engineering of behavioral models consists of extracting high-level models that help understand the behavior of existing software systems. In this paper, we propose an approach for the reverse engineering of sequence diagrams from the analysis of execution traces produced dynamically by an object-oriented application, using Petri nets. Our methods show that this approach can produce sequence diagrams in reasonable time and suggest that these diagrams are helpful in understanding the behavior of the underlying application. Finally, we discuss the approaches and tools that are needed in the process of reverse engineering UML behavior. This work is a substantial step towards providing a high-quality methodology for effective and efficient reverse engineering of sequence diagrams.
Keywords: reverse engineering, UML behavior, sequence diagram, execution traces, petri nets
Procedia PDF Downloads 446
5773 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy.
Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture; it is used to extract the lung mask from the chest X-ray image and is trained on 8577 images with a validation split of 20%. The models' accuracy, precision, recall, F1-score, IOU, and loss are calculated on an external validation dataset.
Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928.
Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
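A minimal Keras sketch of the transfer-learning classifier described: a frozen DenseNet201 backbone with a small dense head for the three classes. The autoencoder stage and the U-Net segmentation model are omitted, and the input size and head layers are assumptions:

```python
import tensorflow as tf

base = tf.keras.applications.DenseNet201(weights="imagenet",
                                         include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: reuse ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19/normal/pneumonia
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```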
Procedia PDF Downloads 74
5772 A Control Model for the Dismantling of Industrial Plants
Authors: Florian Mach, Eric Hund, Malte Stonis
Abstract:
The dismantling of disused industrial facilities such as nuclear power plants or refineries is an enormous challenge for the planning and control of the logistic processes. Existing control models do not meet the requirements for a proper dismantling of industrial plants. Therefore, the paper presents an approach for the control of dismantling and post-processing processes (e.g. decontamination) in plant decommissioning. In contrast to existing approaches, the dismantling sequence and depth are selected depending on the capacity utilization of required post-processing processes by also considering individual characteristics of respective dismantling tasks (e.g. decontamination success rate, uncertainties regarding the process times). The results can be used in the dismantling of industrial plants (e.g. nuclear power plants) to reduce dismantling time and costs by avoiding bottlenecks such as capacity constraints.
Keywords: dismantling management, logistics planning and control models, nuclear power plant dismantling, reverse logistics
Procedia PDF Downloads 305
5771 Drying Characteristics of Shrimp by Using the Traditional Method of Oven
Authors: I. A. Simsek, S. N. Dogan, A. S. Kipcak, E. Morodor Derun, N. Tugrul
Abstract:
In this study, the drying characteristics of shrimp are studied using the traditional drying method of the oven. Drying temperatures are selected between 60 and 80°C. The experimental drying results obtained are fitted to eleven mathematical models: Alibas, Aghbashlo et al., Henderson and Pabis, Jena and Das, Lewis, Logarithmic, Midilli and Kucuk, Page, Parabolic, Wang and Singh, and Weibull. The parabolic model was selected as the best, based on the highest coefficient of determination (R²) (0.999990 at 80°C), the lowest χ² (0.000002 at 80°C), and the lowest root mean square error (RMSE) (0.000976 at 80°C) compared to the other models. The effective moisture diffusivity (Deff) values were calculated using the cylindrical-coordinate approximation of Fick's second law and found to lie between 6.61×10⁻⁸ and 6.66×10⁻⁷ m²/s. The activation energy (Ea) was calculated using a modified form of the Arrhenius equation and found to be 18.315 kW/kg.
Keywords: activation energy, drying, effective moisture diffusivity, modelling, oven, shrimp
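A short sketch of fitting the winning parabolic thin-layer model MR = a + bt + ct² to moisture-ratio data; the points below are illustrative, not the measured shrimp data:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 30, 60, 90, 120, 150, 180])           # drying time, minutes
MR = np.array([1.00, 0.78, 0.58, 0.43, 0.30, 0.21, 0.15])  # moisture ratio

def parabolic(t, a, b, c):
    return a + b * t + c * t**2

(a, b, c), _ = curve_fit(parabolic, t, MR)
rmse = np.sqrt(np.mean((MR - parabolic(t, a, b, c)) ** 2))
print(f"a={a:.3f}, b={b:.5f}, c={c:.8f}, RMSE={rmse:.5f}")
```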
Procedia PDF Downloads 192
5770 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon
Authors: Laura M. F. Bertens
Abstract:
There is a long tradition of visually representing the art historical canon in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity, and the modelling methods colour our perception of the represented information. In recent decades, visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand, the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. Ways of organizing data (for instance, ordering art according to artist) have come to feel natural and neutral, and implicit biases and the historically uneven distribution of power have resulted in the underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies, and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view of the traditional canon. Although modelling will always imply a simplification and therefore a distortion of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge.
Keywords: canon, ontological modelling, Protege Ontology Editor, social network modelling, Visone
Procedia PDF Downloads 128
5769 Energy Use and Econometric Models of Soybean Production in Mazandaran Province of Iran
Authors: Majid AghaAlikhani, Mostafa Hojati, Saeid Satari-Yuzbashkandi
Abstract:
This paper studies energy use patterns and the relationship between energy input and yield for soybean (Glycine max (L.) Merrill) in the Mazandaran province of Iran. Data were collected by administering a questionnaire in face-to-face interviews. Results revealed that the highest share of energy consumption belongs to chemical fertilizers (29.29%), followed by diesel (23.42%) and electricity (22.80%). Our investigations showed that a total energy input of 23404.1 MJ ha⁻¹ was consumed for soybean production. The energy productivity, specific energy, and net energy values were estimated as 0.12 kg MJ⁻¹, 8.03 MJ kg⁻¹, and 49412.71 MJ ha⁻¹, respectively. The ratio of energy outputs to energy inputs was 3.11. The results indicated that the shares of direct, indirect, renewable, and non-renewable energy were 56.83%, 43.17%, 15.78%, and 84.22%, respectively. Three econometric models were also developed to estimate the impact of energy inputs on yield. The results of the econometric models revealed that the impacts of chemical fertilizer and water on yield were significant at the 1% probability level; the shares of direct and non-renewable energy were also found to be rather high. Cost analysis revealed that the total cost of soybean production per hectare was around $518.43; accordingly, the benefit-cost ratio was estimated as 2.58. The energy use efficiency in soybean production was found to be 3.11, which reveals that the inputs used in soybean production are used efficiently. However, due to the high rate of nitrogen fertilizer consumption, sustainable agriculture practices should be extended, and the substitution of chemical fertilizer with biological fertilizer or green manure could be proposed by extension staff.
Keywords: Cobb-Douglas function, economical analysis, energy efficiency, energy use patterns, soybean
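A hedged sketch of a Cobb-Douglas-style estimation as referenced in the keywords: ln(yield) regressed on the logs of the energy inputs, with synthetic placeholders for the questionnaire data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 80
fert = rng.uniform(5000, 9000, n)    # chemical fertilizer energy, MJ/ha
diesel = rng.uniform(4000, 7000, n)  # diesel energy, MJ/ha
water = rng.uniform(1000, 3000, n)   # irrigation water energy, MJ/ha
yield_ = np.exp(0.2 * np.log(fert) + 0.3 * np.log(diesel)
                + 0.1 * np.log(water) + rng.normal(0, 0.05, n))

# Cobb-Douglas: ln(y) = b0 + b1 ln(E1) + b2 ln(E2) + b3 ln(E3)
X = np.column_stack([np.ones(n), np.log(fert), np.log(diesel), np.log(water)])
coef, *_ = np.linalg.lstsq(X, np.log(yield_), rcond=None)
print("elasticities (fertilizer, diesel, water):", np.round(coef[1:], 3))
```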
Procedia PDF Downloads 335
5768 Performance of Fiber-Reinforced Polymer as an Alternative Reinforcement
Authors: Salah E. El-Metwally, Marwan Abdo, Basem Abdel Wahed
Abstract:
Fiber-reinforced polymer (FRP) bars have been proposed as an alternative to conventional steel bars; hence, the use of these non-corrosive and nonmetallic reinforcing bars has increased in various concrete projects. This composite material is lightweight, has a long lifespan, and needs minor maintenance; however, its non-ductile nature and weak bond with the surrounding concrete create a significant challenge. The behavior of concrete elements reinforced with FRP bars has been the subject of several experimental investigations, even with their high cost. This study aims to numerically assess the viability of using FRP bars as longitudinal reinforcement, in comparison with traditional steel bars, and also as prestressing tendons instead of traditional prestressing steel. Nonlinear finite element analysis has been utilized to carry out the current study. Numerical models have been developed to examine the behavior of concrete beams reinforced with FRP bars or tendons against similar models reinforced with either conventional steel or prestressing steel. These numerical models were verified by experimental test results available in the literature. The obtained results revealed that concrete beams reinforced with FRP bars, as passive reinforcement, exhibited less ductility and less stiffness than similar beams reinforced with steel bars. On the other hand, when FRP tendons are employed in prestressing concrete beams, the results show that the performance of these beams is similar to that of beams prestressed by conventional active reinforcement, but with a difference caused by the two tendon materials' moduli of elasticity.
Keywords: reinforced concrete, prestressed concrete, nonlinear finite element analysis, fiber-reinforced polymer, ductility
Procedia PDF Downloads 17
5767 Annual Water Level Simulation Using Support Vector Machine
Authors: Maryam Khalilzadeh Poshtegal, Seyed Ahmad Mirbagheri, Mojtaba Noury
Abstract:
In this paper, by applying yearly input data of rainfall, temperature, and inflow to Lake Urmia, the simulation of water level fluctuations was carried out by means of three models. Given climate change investigations, the fluctuations of lake water levels are of high interest. This study investigates data-driven models: the support vector machine (SVM), a new regression procedure in water resources, is applied to the yearly level data of Lake Urmia, the biggest and a hypersaline lake in Iran. The simulated lake levels are found to be in good correlation with the observed values, and the results of the SVM simulation show better accuracy and easier implementation. The mean square errors, mean absolute relative errors, and coefficient of determination statistics are used as comparison criteria.
Keywords: simulation, water level fluctuation, urmia lake, support vector machine
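A minimal sketch of the SVM regression setup, with synthetic stand-ins for the Lake Urmia records:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 40                                   # years of record
rain = rng.uniform(200, 400, n)          # mm/yr
temp = rng.uniform(10, 14, n)            # deg C
flow = rng.uniform(1.0, 6.0, n)          # km^3/yr inflow
level = (1271 + 0.004 * rain - 0.3 * temp + 0.5 * flow
         + rng.normal(0, 0.2, n))        # toy level relation, m a.s.l.

X = np.column_stack([rain, temp, flow])
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
svr.fit(X[:-5], level[:-5])              # hold out the last 5 years
print("R^2 on last 5 years:", round(svr.score(X[-5:], level[-5:]), 3))
```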
Procedia PDF Downloads 368
5766 A Critical Discourse Analysis of Jamaican and Trinidadian News Articles about D/Deafness
Authors: Melissa Angus Baboun
Abstract:
Utilizing a Critical Discourse Analysis (CDA) methodology and a theoretical framework based on disability studies, this study examined how Jamaican and Trinidadian newspapers discussed issues relating to the Deaf community. The term deaf was entered into the search engine tool of the online websites of the Jamaica Observer and the Trinidad & Tobago Guardian. All 27 articles that contained the term deaf in their content and were written between August 1, 2017 and November 15, 2017 were chosen for the study. The data analysis was divided into three steps: (1) listing and analyzing instances of metaphorical deafness (e.g., fall on deaf ears), (2) categorizing the content of the articles into the models of disability discourse (the medical, socio-cultural, and supercrip models of disability narratives), and (3) analyzing any additional data found. A total of 42% of the articles pulled for this study did not deal with the Deaf community in any capacity, but rather contained idiomatic expressions that use deafness as a metaphor for a non-physical, undesirable trait; the most common such expression was fall on deaf ears. Regarding the models of disability discourse, eight articles were found to follow the socio-cultural model, two the medical model, and two the supercrip model. The additional data found in these articles include two instances of the term deaf and mute, an overwhelming use of the lowercase d for the term deaf, and the misuse of the term translator (to mean interpreter).
Keywords: deafness, disability, news coverage, Caribbean newspapers
Procedia PDF Downloads 233
5765 LACGC: Business Sustainability Research Model for Generations Consumption, Creation, and Implementation of Knowledge: Academic and Non-Academic
Authors: Satpreet Singh
Abstract:
This paper introduces the new LACGC model to sustain academic and non-academic businesses for future educational and organizational generations. The consumption of knowledge and the creation of new knowledge are a strength and focal interest of all academic and non-academic organizations. Implementing newly created knowledge sustains a business into the next generation, with growth and without detriment. Existing models, like the scholar-practitioner model and organizational knowledge creation models, focus specifically on either the academic or the non-academic side, not both. The LACGC model can be used for both academic and non-academic settings at the domestic or international level. Researchers and scholars play a substantial role in finding literature and practice gaps in academic and non-academic disciplines. The LACGC model places no restriction on the number of recurrences, because the consumption, creation, and implementation of new ideas, disciplines, systems, and knowledge is a never-ending process and must continue from one generation to the next.
Keywords: academics, consumption, creation, generations, non-academics, research, sustainability
Procedia PDF Downloads 197
5764 Soap Film Enneper Minimal Surface Model
Authors: Yee Hooi Min, Mohdnasir Abdul Hadi
Abstract:
A tensioned membrane structure in the form of an Enneper minimal surface can be considered a sustainable development for the green environment and technology; it can also support the effective use of energy and of the structure. A soap film in the form of an Enneper minimal surface model has been studied. The combination of shape and internal forces for the purpose of stiffness and strength is an important feature of a membrane surface. For this purpose, form-finding using a soap film model has been carried out for Enneper minimal surface models with variables u=v=0.6 and u=v=1.0. The Enneper soap film models with variables u=v=0.6 and u=v=1.0 provide an alternative choice for structural engineers to consider tensioned membrane structures in the form of the Enneper minimal surface in the building industry. It is expected to become an alternative building material to be considered by the designer.
Keywords: Enneper, minimal surface, soap film, tensioned membrane structure
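For reference, the Enneper surface admits the standard parametrization x = u − u³/3 + uv², y = −v + v³/3 − u²v, z = u² − v²; a short sketch generates the two models studied (u = v = 0.6 and u = v = 1.0):

```python
import numpy as np

def enneper(extent, n=50):
    # Standard Enneper parametrization over a square parameter patch.
    u, v = np.meshgrid(np.linspace(-extent, extent, n),
                       np.linspace(-extent, extent, n))
    x = u - u**3 / 3 + u * v**2
    y = -v + v**3 / 3 - u**2 * v
    z = u**2 - v**2
    return x, y, z

for extent in (0.6, 1.0):    # the two variable ranges form-found in the study
    x, y, z = enneper(extent)
    print(f"extent {extent}: z range [{z.min():.2f}, {z.max():.2f}]")
```

The returned meshes can be passed directly to matplotlib's plot_surface for visual comparison with the soap film models.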
Procedia PDF Downloads 555
5763 AI-Powered Models for Real-Time Fraud Detection in Financial Transactions to Improve Financial Security
Authors: Shanshan Zhu, Mohammad Nasim
Abstract:
Financial fraud continues to be a major threat to financial institutions across the world, causing colossal monetary losses and undermining public trust. Fraud prevention techniques based on hard rules have become ineffective against the evolving patterns of fraud seen in recent times. Against this background, the present study probes distinct methodologies that exploit emergent AI-driven techniques to further strengthen fraud detection. We compare the performance of generative adversarial networks and graph neural networks with other popular techniques, such as gradient boosting, random forests, and neural networks, and recommend integrating all these state-of-the-art models into one robust, flexible, and smart system for real-time anomaly and fraud detection. To this end, we designed synthetic data and then conducted pattern recognition as well as unsupervised and supervised learning analyses on the transaction data to identify suspicious activities. Using actual financial statistics, we compare the performance of our model in accuracy, speed, and adaptability against conventional models. The results of this study illustrate a strong need to integrate state-of-the-art, AI-driven fraud detection solutions into frameworks that are highly relevant to the financial domain, and they alert one to the urgency with which banks and related financial institutions must implement these advanced technologies to maintain a high level of security.
Keywords: AI-driven fraud detection, financial security, machine learning, anomaly detection, real-time fraud detection
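A hedged sketch of the anomaly-detection framing on synthetic transactions, using an Isolation Forest as a stand-in baseline; the GAN/GNN models the study proposes are far richer:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(6)
normal = np.column_stack([rng.normal(50, 15, 950),   # transaction amount
                          rng.normal(12, 4, 950)])   # hour of day
fraud = np.column_stack([rng.normal(900, 200, 50),   # large amounts...
                         rng.normal(3, 1, 50)])      # ...late at night
X = np.vstack([normal, fraud])

detector = IsolationForest(contamination=0.05, random_state=6).fit(X)
flags = detector.predict(X)                          # -1 marks an anomaly
print("flagged:", int((flags[-50:] == -1).sum()), "of 50 injected frauds")
```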
Procedia PDF Downloads 44