Search results for: deep convolution networks
3925 Introduce a New Model of Anomaly Detection in Computer Networks Using Artificial Immune Systems
Authors: Mehrshad Khosraviani, Faramarz Abbaspour Leyl Abadi
Abstract:
Computer networks are a fundamental component of the modern information society, and they are generally connected to the Internet. Because the Internet was not originally designed with security as a primary goal, these networks have been exposed to many attacks in recent decades. Today, various security tools and systems, including intrusion detection systems, are used to provide network security. In this paper, an anomaly detection system based on artificial immunity is designed and evaluated. The idea of using artificial immune methods for detecting abnormalities in computer networks is motivated by the similarity between the requirements of intrusion detection and the capabilities of natural immune systems. Such methods can detect previously unseen abnormalities and a variety of attacks, and artificial immune algorithms offer memory, learning ability, and self-regulation. Training the proposed detection system requires only normal network samples; no additional data about the types of attacks is needed. The proposed system uses positive selection and negative selection processes to select detectors that distinguish normal traffic from attacks. Evaluation on real data collections indicates that the proposed system achieves a false alarm rate that is often lower than that of other methods, with a comparable detection rate.
Keywords: artificial immune system, abnormality detection, intrusion detection, computer networks
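For readers unfamiliar with the negative-selection idea used in such systems, the following minimal Python sketch (not taken from the paper; the feature dimension, radii, and random detector generation are illustrative assumptions) shows how detectors can be generated against normal-only training samples and then used to flag anomalous traffic records.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_detectors(normal, n_detectors=300, self_radius=0.1):
    """Negative selection: keep random detectors that do NOT match any normal sample."""
    detectors, dim = [], normal.shape[1]
    while len(detectors) < n_detectors:
        candidate = rng.random(dim)
        # Reject candidates that fall within the "self" radius of any normal sample
        if np.min(np.linalg.norm(normal - candidate, axis=1)) > self_radius:
            detectors.append(candidate)
    return np.array(detectors)

def is_anomalous(sample, detectors, match_radius=0.1):
    """A record is flagged if any detector matches it."""
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) < match_radius))

# Toy usage: normal traffic clustered near (0.5, 0.5), an outlying record near (0.95, 0.95)
normal_traffic = 0.5 + 0.02 * rng.standard_normal((500, 2))
detectors = generate_detectors(normal_traffic)
print(is_anomalous(np.array([0.95, 0.95]), detectors))   # far from normal traffic: flagged
print(is_anomalous(np.array([0.50, 0.50]), detectors))   # looks like normal traffic: not flagged
```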
Procedia PDF Downloads 355

3924 Analysis of Facial Expressions with Amazon Rekognition
Authors: Kashika P. H.
Abstract:
The development of computer vision systems has been greatly aided by the efficient and precise detection of objects in images and videos. Although the ability to recognize and comprehend images is a strength of the human brain, solving this problem with technology is exceedingly challenging. In the past few years, the use of deep learning algorithms for object detection has expanded dramatically. One of the key issues in the realm of image recognition is the recognition and detection of notable people in randomly acquired photographs. Face recognition provides a way to identify, assess, and compare faces for a variety of purposes, including user identification, user counting, and classification. With the aid of an accessible deep learning-based API, this article aims to recognize the faces of various people and their facial descriptors more accurately. The purpose of this study is to locate suitable individuals and deliver accurate information about them by using the Amazon Rekognition system to identify a specific person in a vast image dataset. We chose the Amazon Rekognition system, which allows for more accurate face analysis, face comparison, and face search, to tackle this difficulty.
Keywords: Amazon Rekognition, API, deep learning, computer vision, face detection, text detection
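As a rough illustration of the kind of API calls involved (not code from the article; the bucket, object keys, and collection name are placeholders, and AWS credentials are assumed to be configured in the environment), face detection and face search with Amazon Rekognition via boto3 might look like this:

```python
import boto3

rekognition = boto3.client("rekognition")

# Detect faces and their facial descriptors (emotions, landmarks, pose, etc.)
detection = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-image-bucket", "Name": "group_photo.jpg"}},
    Attributes=["ALL"],
)
for face in detection["FaceDetails"]:
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(face["BoundingBox"], top_emotion["Type"])

# Search a pre-built face collection for a specific person
matches = rekognition.search_faces_by_image(
    CollectionId="known-people",
    Image={"S3Object": {"Bucket": "my-image-bucket", "Name": "query_face.jpg"}},
    FaceMatchThreshold=90,
    MaxFaces=5,
)
for match in matches["FaceMatches"]:
    # FaceId refers back to the indexed face in the collection
    print(match["Face"]["FaceId"], match["Similarity"])
```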
Procedia PDF Downloads 107

3923 Collaboration in Palliative Care Networks in Urban and Rural Regions of Switzerland
Authors: R. Schweighoffer, N. Nagy, E. Reeves, B. Liebig
Abstract:
Due to aging populations, the need for seamless palliative care provision is of central interest for western societies. An essential aspect of palliative care delivery is the quality of collaboration amongst palliative care providers. Therefore, the current research is based on Bainbridge’s conceptual framework, which provides an outline for the evaluation of palliative care provision. This study is the first one to investigate the predictive validity of spatial distribution on the quantity of interaction amongst various palliative care providers. Furthermore, based on the familiarity principle, we examine whether the extent of collaboration influences the perceived quality of collaboration among palliative care providers in urban versus rural areas of Switzerland. Based on a population-representative survey of Swiss palliative care providers, the results of the current study show that professionals in densely populated areas report higher absolute numbers of interactions and are more satisfied with their collaborative practice. This indicates that palliative care providers who work in urban areas are better embedded into networks than their counterparts in more rural areas. The findings are especially important, considering that efficient collaboration is a prerequisite to achieve satisfactory patient outcomes. Conclusively, measures should be taken to foster collaboration in weakly interconnected palliative care networks.
Keywords: collaboration, healthcare networks, palliative care, Switzerland
Procedia PDF Downloads 270

3922 The Rigor and Relevance of the Mathematics Component of the Teacher Education Programmes in Jamaica: An Evaluative Approach
Authors: Avalloy McCarthy-Curvin
Abstract:
For over fifty years there has been widespread dissatisfaction with the teaching of mathematics in Jamaica. Studies done in the Jamaican context highlight that teachers at the end of training do not have a deep understanding of the mathematics content they teach. Little research in the Jamaican context targets the advancement of contextual knowledge of the problem so as to ultimately provide a solution. The aim of this study is to identify what influences this outcome of teacher education in Jamaica so as to remedy the problem. The study formatively evaluated the curriculum documents, assessments, and delivery of the curriculum used in teacher training institutions in Jamaica to determine their rigor (the extent to which the written documents, instruction, and assessments focus on enabling pre-service teachers to develop a deep understanding of mathematics) and relevance (the extent to which the curriculum documents, instruction, and assessments focus on developing the requisite knowledge for teaching mathematics). The findings show that neither the curriculum documents, the instruction, nor the assessments ensure rigor or enable pre-service teachers to develop the knowledge and skills they need to teach mathematics effectively.
Keywords: relevance, rigor, deep understanding, formative evaluation
Procedia PDF Downloads 240

3921 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems that determines the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM is usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it’s worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases, and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM or sentiment analysis (SA) has been mainly performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to a bag of n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm that has been commonly used in computer vision (CV) problems has lately started to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of the pretrained language model (PTM), which contains language representations that are obtained by training it on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned by a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, NER (Named-Entity Recognition), Question Answering (QA), and so forth. In this study, the traditional and modern NLP approaches have been evaluated for OM by using a sizable corpus belonging to a large private company containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and it is based on convolutional neural networks. BERT, in our case, is a monolingual model based on transformer neural networks. It uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase of the architecture, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since the experiments showed that their contribution to model performance was insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that the use of deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model.
Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
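A minimal sketch of the traditional SVM baseline described above (not the authors' code; the corpus, labels, and hyperparameters shown are illustrative placeholders), using scikit-learn's TF-IDF-weighted word n-grams:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder corpus: Turkish comments with positive / neutral / negative labels
texts = ["ürün harika", "kargo çok geç geldi", "fena değil", "berbat bir deneyim"]
labels = ["positive", "negative", "neutral", "negative"]

X_train, X_test, y_train, y_test = train_test_split(texts, labels, test_size=0.25, random_state=42)

# Bag of word n-grams (unigrams and bigrams) weighted by TF-IDF, fed into a linear SVM
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LinearSVC(C=1.0),
)
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```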
Procedia PDF Downloads 148

3920 A Deep Learning Approach to Detect Complete Safety Equipment for Construction Workers Based on YOLOv7
Authors: Shariful Islam, Sharun Akter Khushbu, S. M. Shaqib, Shahriar Sultan Ramit
Abstract:
In the construction sector, ensuring worker safety is of the utmost significance. In this study, a deep learning-based technique is presented for identifying safety gear worn by construction workers, such as helmets, goggles, jackets, gloves, and footwear. The suggested method precisely locates these safety items by using the YOLO v7 (You Only Look Once) object detection algorithm. The dataset utilized in this work consists of labeled images split into training, testing, and validation sets. Each image has bounding box labels that indicate where the safety equipment is located within the image. The model is trained to identify and categorize the safety equipment based on the labeled dataset through an iterative training approach. We used a custom dataset to train this model. Our trained model performed admirably well, with good precision, recall, and F1-score for safety equipment recognition. Also, the model's evaluation produced encouraging results, with a mAP@0.5 score of 87.7%. The model performs effectively, making it possible to quickly identify safety equipment violations on building sites. A thorough evaluation of the outcomes reveals the model's advantages and points out potential areas for development. By offering an automatic and trustworthy method for safety equipment detection, this research contributes to the fields of computer vision and workplace safety. The proposed deep learning-based approach will increase safety compliance and reduce the risk of accidents in the construction industry.
Keywords: deep learning, safety equipment detection, YOLOv7, computer vision, workplace safety
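To make the reported metrics concrete, here is a small, self-contained sketch (illustrative only, not the study's evaluation code; the counts in the example are hypothetical) of how precision, recall, and F1 are computed from detection counts matched at a fixed IoU threshold such as 0.5:

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 for detections matched at a fixed IoU threshold (e.g., 0.5)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes, used to match predictions to labels."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(detection_metrics(tp=877, fp=80, fn=120))   # hypothetical counts
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))        # 25 / 175 = 0.142...
```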
Procedia PDF Downloads 68

3919 Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line
Authors: Fidel Lòpez Saca, Carlos Avilés-Cruz, Miguel Magos-Rivera, José Antonio Lara-Chávez
Abstract:
Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shape-and-color recognition system is based on deep learning theory in a specially designed convolutional network architecture. The methodology involves stages such as image capturing, color filtering, location of object mass centers, determination of horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real time. The implementation was done on a Raspberry Pi 3 system and on a Jetson Nano device. The proposal is used in an assembly course of the bachelor's degree in industrial engineering. The results presented include a study of recognition efficiency and processing time.
Keywords: deep-learning, image classification, image identification, industrial engineering
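A simplified sketch of the pre-processing stages described above (color filtering, mass-center location, boundary detection, and object clipping), written with OpenCV; the HSV color range, clip size, and image path are illustrative assumptions, not values from the paper:

```python
import cv2
import numpy as np

def extract_object(bgr_image, hsv_low=(0, 80, 80), hsv_high=(20, 255, 255)):
    """Filter by color, locate the object's mass center, and clip it for the CNN classifier."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))

    # Mass center from image moments of the binary mask
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None, None
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    # Horizontal and vertical boundaries from the largest contour, then clip
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    clip = bgr_image[y:y + h, x:x + w]
    return (cx, cy), cv2.resize(clip, (64, 64))   # fixed size expected by the CNN

frame = cv2.imread("assembly_part.jpg")           # placeholder path
if frame is not None:
    center, patch = extract_object(frame)
```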
Procedia PDF Downloads 162

3918 Some Results on Cluster Synchronization
Authors: Shahed Vahedi, Mohd Salmi Md Noorani
Abstract:
This paper investigates cluster synchronization phenomena between community networks. We focus on the situation where a variety of dynamics occur in the clusters. In particular, we show that different synchronization states simultaneously occur between the networks. The controller is designed with an adaptive control gain, and theoretical results are derived via Lyapunov stability. Simulations on well-known dynamical systems are provided to elucidate our results.
Keywords: cluster synchronization, adaptive control, community network, simulation
Procedia PDF Downloads 478

3917 Financial Assets Return, Economic Factors and Investor's Behavioral Indicators Relationships Modeling: A Bayesian Networks Approach
Authors: Nada Souissi, Mourad Mroua
Abstract:
The main purpose of this study is to examine the interaction between financial asset volatility, economic factors, and investor behavioral indicators related to both the company's and the market's stocks for the period from January 2000 to January 2020. Using multiple linear regression and Bayesian Network modeling, the results show both positive and negative relationships between the investor's psychology index, economic factors, and predicted stock market return. We reveal that the application of the Bayesian Discrete Network contributes to identifying the different cause-and-effect relationships among the economic and financial variables and the psychology index.
Keywords: financial asset return predictability, economic factors, investor's psychology index, Bayesian approach, probabilistic networks, parametric learning
Procedia PDF Downloads 151

3916 Preparation of 1D Nano-Polyaniline/Dendritic Silver Composites
Authors: Wen-Bin Liau, Wan-Ting Wang, Chiang-Jen Hsiao, Sheng-Mao Tseng
Abstract:
In this paper, an interesting and easy method to prepare one-dimensional nanostructured polyaniline/dendritic silver composites is reported. It is well known that the morphology of the metal particle is a very important factor influencing the properties of polymer-metal composites. Usually, dendritic silver is prepared by kinetic control of the reduction reaction; it is not a thermodynamically stable structure. The goal is to reduce silver ions to dendritic silver with the polyaniline polymer via kinetic control and thereby form one-dimensional nanostructured polyaniline/dendritic silver composites. The preparation is a two-step sequential reaction. In the first step, polyaniline networks composed of nanofibrillar polyaniline are synthesized from aqueous aniline monomers with ammonium persulfate as the initiator at room temperature. In the second step, silver nitrate is added to the polyaniline networks dispersed in deionized water. Dendritic silver is formed via reduction by the polyaniline networks under kinetic control. The formation of polyaniline is examined via transmission electron microscopy (TEM). Nanosheets, nanotubes, nanospheres, nanosticks, and networks are observed via TEM. Then, the mechanism of formation of the one-dimensional nanostructured polyaniline/dendritic silver composites is discussed. The formation of dendritic silver is observed by TEM and X-ray diffraction.
Keywords: 1D nanostructured polyaniline, dendritic silver, synthesis
Procedia PDF Downloads 501

3915 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common health issues faced by people nowadays. Skin cancer (SC) is one of them; its detection relies on skin biopsy outputs and the expertise of doctors, but this is time-consuming and can give inaccurate results. Detecting skin cancer at an early stage is a challenging task, yet the disease easily spreads to the whole body and increases the mortality rate; it is curable when detected early. The critical task is accurate skin cancer identification and classification, which is largely based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics; hence, selecting important features from skin cancer dataset images is challenging. An automated skin cancer detection and classification framework is therefore required to improve diagnostic accuracy and to address the scarcity of human experts. Recently, deep learning techniques such as the convolutional neural network (CNN), deep belief network (DBN), artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. With these DL techniques, classification accuracy increases while computational complexity and time consumption are reduced.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
Procedia PDF Downloads 132

3914 Strengthening Farmer-to-farmer Knowledge Sharing Network: A Pathway to Improved Extension Service Delivery
Authors: Farouk Shehu Abdulwahab
Abstract:
The concept of farmer-to-farmer knowledge sharing was introduced to bridge the extension worker-farmer ratio gap in developing countries. However, the idea was poorly accepted, especially in typical agrarian communities. Therefore, the study explores the concept of a farmer-to-farmer knowledge-sharing network to enhance extension service delivery. The study collected data from 80 farmers randomly selected in multiple stages. The data were analysed using a 5-point Likert scale and descriptive statistics. The Likert scale results revealed that 62.5% of the farmers are satisfied with farmer-to-farmer knowledge-sharing networks. Moreover, descriptive statistics show that lack of capacity building and low level of education are the most significant problems affecting farmer-to-farmer knowledge-sharing networks. The major implication of these findings is that the concept of farmer-to-farmer knowledge-sharing networks can work well for farmers in developing countries, as they perceive it as a reliable alternative for information sharing. Therefore, the study recommends introducing incentives into farmer-to-farmer knowledge-sharing networks and enhancing the capabilities of the farmers who act as opinion leaders within them, to make the approach more sustainable.
Keywords: agricultural productivity, extension, farmer-to-farmer, livelihood, technology transfer
Procedia PDF Downloads 68

3913 Local Food Movements and Community Building in Turkey
Authors: Derya Nizam
Abstract:
An alternative understanding of "localization" has gained significance as the ecological and social issues associated with the growing pressure of agricultural homogeneity and standardization become more apparent. Through an analysis of a case study on an alternative food network in Turkey, this research seeks to critically examine the localization movement. The results indicate that the idea of localization helps to create new niche markets by creating place-based labels, but it also strengthens local identities through social networks that connect rural and urban areas. In that context, localization manifests as a commodification movement that appropriates local and cultural values to generate capitalist profit, as well as a grassroots movement that strengthens the resilience of local communities. This research addresses the potential of community development approaches in the democratization of global agro-food networks.
Keywords: community building, local food, alternative food movements, localization
Procedia PDF Downloads 81

3912 One-Step Time Series Predictions with Recurrent Neural Networks
Authors: Vaidehi Iyer, Konstantin Borozdin
Abstract:
Time series prediction problems have many important practical applications, but are notoriously difficult for statistical modeling. Recently, machine learning methods have attracted significant interest as a practical tool applied to a variety of problems, even though developments in this field tend to be semi-empirical. This paper explores the application of Long Short-Term Memory-based Recurrent Neural Networks to the one-step prediction of time series for both trend and stochastic components. Two types of data are analyzed: daily stock prices, which are often considered a typical example of a random walk, and weather patterns dominated by seasonal variations. Results from both analyses are compared, and a reinforcement learning framework is used to select the more efficient of Recurrent Neural Networks and more traditional autoregression methods. It is shown that both methods are able to follow long-term trends and seasonal variations closely, but have difficulties with reproducing day-to-day variability. Future research directions and potential real-world applications are briefly discussed.
Keywords: long short term memory, prediction methods, recurrent neural networks, reinforcement learning
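A compact PyTorch sketch of one-step-ahead prediction with an LSTM (illustrative only; the synthetic seasonal series, window length, and network sizes are assumptions rather than the paper's configuration):

```python
import torch
import torch.nn as nn

class OneStepLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # predict the next value from the last state

# Synthetic series: seasonal signal plus noise; windows of 30 steps predict step 31
t = torch.arange(0, 200, dtype=torch.float32)
series = torch.sin(0.1 * t) + 0.1 * torch.randn_like(t)
window = 30
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = OneStepLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):                           # brief training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))
```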
Procedia PDF Downloads 233

3911 Integrating Knowledge Distillation of Multiple Strategies
Authors: Min Jindong, Wang Mingxia
Abstract:
With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real visual target detection tasks and the required recognition accuracy increase, target detection network models also become very large. A huge deep neural network model is not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress the huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another lightweight network model. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation method that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft target outputs of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we also introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the huge difference between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model. The student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed new network model achieves substantial improvements in speed and accuracy performance.
Keywords: object detection, knowledge distillation, convolutional network, model compression
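To make the soft-target part of the distillation concrete, here is a minimal PyTorch sketch of the classic temperature-scaled soft-target loss combined with the ordinary hard-label loss (illustrative only; it covers just one of the multiple knowledge sources described above, and the temperature and weighting are assumptions):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of soft-target KL divergence (teacher -> student) and hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                 # rescale so gradients are comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 10-class problem
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```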
Procedia PDF Downloads 278

3910 Probabilistic Approach to Contrast Theoretical Predictions from a Public Corruption Game Using Bayesian Networks
Authors: Jaime E. Fernandez, Pablo J. Valverde
Abstract:
This paper presents a methodological approach that aims to contrast/validate theoretical results from a corruption network game through probabilistic analysis of simulated microdata using Bayesian Networks (BNs). The research develops a public corruption model in a game theory framework. Theoretical results suggest a series of 'optimal settings' of the model's exogenous parameters that boost the emergence of corruption. The paper contrasts these outcomes with probabilistic inference results based on BNs adjusted over simulated microdata. Principal findings indicate that probabilistic reasoning based on BNs significantly improves parameter specification and causal analysis in a public corruption game.
Keywords: Bayesian networks, probabilistic reasoning, public corruption, theoretical games
Procedia PDF Downloads 211

3909 Simulation Approach for a Comparison of Linked Cluster Algorithm and Clusterhead Size Algorithm in Ad Hoc Networks
Authors: Ameen Jameel Alawneh
Abstract:
A mobile ad-hoc network (MANET) is a collection of wireless mobile hosts that dynamically form a temporary network without the aid of a system administrator. It has neither fixed infrastructure nor wireless ad hoc sessions. It inherently reaches several nodes with a single transmission, and each node functions as both a host and a router. The network may be represented as a set of clusters, each managed by a clusterhead. The cluster size is not fixed; it depends on the movement of nodes. We propose a clusterhead size algorithm (CHSize). This clustering algorithm can be used by several routing algorithms for ad hoc networks. An elected clusterhead is assigned for communication with all other clusters. Analysis and simulation of the algorithm, implemented using the GloMoSim network simulator, MATLAB, and MAPL11, show that the proposed algorithm achieves its goals.
Keywords: simulation, MANET, Ad-hoc, cluster head size, linked cluster algorithm, loss and dropped packets
Procedia PDF Downloads 396

3908 Spontaneous and Posed Smile Detection: Deep Learning, Traditional Machine Learning, and Human Performance
Authors: Liang Wang, Beste F. Yuksel, David Guy Brizan
Abstract:
A computational model of affect that can distinguish between spontaneous and posed smiles with no errors on a large, popular data set using deep learning techniques is presented in this paper. A Long Short-Term Memory (LSTM) classifier, a type of Recurrent Neural Network, is utilized and compared to human classification. Results showed that while human classification (mean of 0.7133) was above chance, the LSTM model was more accurate than human classification and other comparable state-of-the-art systems. Additionally, a high accuracy rate was maintained with small numbers of training videos (70 instances). Important features were derived and analyzed to further understand the success of the computational model, and it was inferred that thousands of pairs of points within the eyes and mouth are important throughout all time segments of a smile. This suggests that distinguishing between a posed and a spontaneous smile is a complex task, which may account for the difficulty and lower accuracy of human classification compared to machine learning models.
Keywords: affective computing, affect detection, computer vision, deep learning, human-computer interaction, machine learning, posed smile detection, spontaneous smile detection
Procedia PDF Downloads 126

3907 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems
Authors: Semih Demir, Anil Celebi
Abstract:
Circuit-switched networks, widely used until the end of the 20th century, have been transformed into packet-switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet switched networks (PSN). In TDMoIP systems, devices that send TDM data to the PSN and receive it from the network must operate with the same clock frequency. In this study, the aim was to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using time information attached to the packets received from the PSN. The designed hardware is verified by using datasets obtained for different carrier types and comparing the results with the software model. Field tests are also performed using the real-time TDMoIP system.
Keywords: clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization
Procedia PDF Downloads 279

3906 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodologies: 2,323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, because of differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was measured. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach is more accurate for prediction, enabling applications in foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than with the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
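As a rough sketch of the SVM branch of the comparison (not the study's code; the feature matrix, labels, dataset shape, and hyperparameters are placeholders), a per-disorder yes/no classifier on standardized plantar-pressure features could be set up as follows:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Placeholder data: peak-pressure / center-of-pressure features per participant,
# with a binary label for one foot disorder (yes/no)
X = rng.random((2323, 40))
y = rng.integers(0, 2, 2323)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize features, then fit an RBF-kernel SVM for the binary decision
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```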
Procedia PDF Downloads 359

3905 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological predictions at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment's characteristics. This aligns with the well-known "uniqueness of the place" paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited focus in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle. Later, limited systematic hyperparameter tuning using grid search suggested a modification to 270 days. However, despite the significance of this hyperparameter for hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
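Because the input sequence length is the hyperparameter highlighted above, the sketch below (illustrative only, not the authors' code; the candidate lengths and synthetic data are arbitrary assumptions) shows how hourly forcing/streamflow records can be cut into input windows of a given length so that different lengths can be compared in a tuning loop:

```python
import numpy as np

def make_sequences(features, target, seq_len):
    """Turn hourly records into (n_samples, seq_len, n_features) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(features) - seq_len):
        X.append(features[i:i + seq_len])
        y.append(target[i + seq_len])
    return np.asarray(X), np.asarray(y)

# Placeholder hourly data: 3 forcing variables and observed streamflow
hours = 5000
forcings = np.random.rand(hours, 3)
streamflow = np.random.rand(hours)

# Candidate input sequence lengths (in hours) to be compared during hyperparameter tuning
for seq_len in (72, 168, 336, 720):
    X, y = make_sequences(forcings, streamflow, seq_len)
    # ...train an LSTM on (X, y) here and record NSE/KGE for this seq_len...
    print(seq_len, X.shape, y.shape)
```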
Procedia PDF Downloads 72

3904 Artificial Neural Networks with Decision Trees for Diagnosis Issues
Authors: Y. Kourd, D. Lefebvre, N. Guersi
Abstract:
This paper presents a new fault detection and isolation (FDI) technique that is applied to an industrial system. The technique is based on neural network models of fault-free and faulty behaviors (NNFMs). NNFMs are used for residual generation, while a decision tree architecture is used for residual evaluation. The decision tree is built from data collected from the NNFMs' outputs and is used to isolate detectable faults depending on a computed threshold. Each part of the tree corresponds to a specific residual. With the decision tree, it becomes possible to take the appropriate decision regarding the actual process behavior by evaluating a small number of residuals. In comparison to the usual systematic evaluation of all residuals, the proposed technique requires less computational effort and can be used for online diagnosis. An application example is presented to illustrate and confirm the effectiveness and accuracy of the proposed approach.
Keywords: neural networks, decision trees, diagnosis, behaviors
Procedia PDF Downloads 508

3903 Sustainable Design of Coastal Bridge Networks in the Presence of Multiple Flood and Earthquake Risks
Authors: Riyadh Alsultani, Ali Majdi
Abstract:
To evaluate the flood and earthquake risks of coastal bridge networks, it is necessary to develop a design methodology that includes the possibility of seismic events occurring in a region, the vulnerability of the civil hydraulic structure, and the effects of the hazard on society, the environment, and the economy. This paper presents a design approach for the assessment of the risk and sustainability of coastal bridge networks under time-variant flood-earthquake conditions. The social, environmental, and economic indicators of the network are used to measure its sustainability. These consist of anticipated loss, downtime, energy waste, and carbon dioxide emissions. The design process takes into account the possibility of occurrence of a set of flood and earthquake scenarios that represent the local seismic activity. Based on the performance of each bridge as determined by fragility assessments, network linkages are measured. The network's connectivity and the bridges' damage statuses after an earthquake scenario determine the network's sustainability and risk. The temporal variability of the sustainability measures and the risk of structural degradation are both highlighted. The method is demonstrated using a transportation network in Baghdad, Iraq.
Keywords: sustainability, coastal bridge networks, flood-earthquake risk, structural design
Procedia PDF Downloads 96

3902 Integrating Wound Location Data with Deep Learning for Improved Wound Classification
Authors: Mouli Banga, Chaya Ravindra
Abstract:
Wound classification is a crucial step in wound diagnosis. An effective classifier can aid wound specialists in identifying wound types with reduced financial and time investments, facilitating the determination of optimal treatment procedures. This study presents a deep neural network-based classifier that leverages wound images and their corresponding locations to categorize wounds into various classes, such as diabetic, pressure, surgical, and venous ulcers. By incorporating a developed body map, the process of tagging wound locations is significantly enhanced, providing healthcare specialists with a more efficient tool for wound analysis. We conducted a comparative analysis between two prominent convolutional neural network models, ResNet50 and MobileNetV2, utilizing a dataset of 730 images. Our findings reveal that ResNet50 outperforms MobileNetV2, achieving an accuracy of approximately 90%, compared to MobileNetV2's 83%. This disparity highlights the superior capability of ResNet50 in the context of this dataset. The results underscore the potential of integrating deep learning with spatial data to improve the precision and efficiency of wound diagnosis, ultimately contributing to better patient outcomes and reducing healthcare costs.
Keywords: wound classification, MobileNetV2, ResNet50, multimodel
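A simplified sketch of how an image backbone can be combined with a body-map wound-location input (illustrative only; the number of body-map locations, layer sizes, and the Keras-style wiring are assumptions, not the study's architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_LOCATIONS = 20      # assumed number of body-map regions
NUM_CLASSES = 4         # diabetic, pressure, surgical, venous

# Image branch: ResNet50 backbone with global average pooling
image_in = layers.Input(shape=(224, 224, 3), name="wound_image")
backbone = tf.keras.applications.ResNet50(include_top=False, weights=None, pooling="avg")
image_feat = backbone(image_in)

# Location branch: one-hot body-map location passed through a small dense layer
location_in = layers.Input(shape=(NUM_LOCATIONS,), name="wound_location")
location_feat = layers.Dense(32, activation="relu")(location_in)

# Fuse both modalities and classify
fused = layers.Concatenate()([image_feat, location_feat])
fused = layers.Dense(128, activation="relu")(fused)
output = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = Model(inputs=[image_in, location_in], outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```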
Procedia PDF Downloads 35

3901 Prediction of the Crustal Deformation of Volcán - Nevado Del RUíz in the Year 2020 Using Tropomi Tropospheric Information, Dinsar Technique, and Neural Networks
Authors: Juan Sebastián Hernández
Abstract:
The Nevado del Ruíz volcano, located on the boundary between the Departments of Caldas and Tolima in Colombia, exhibited unstable behaviour during 2020. This volcanic activity led to secondary effects on the crust, which is why the prediction of deformations becomes a task for geoscientists. In this article, tropospheric variables such as evapotranspiration, UV aerosol index, carbon monoxide, nitrogen dioxide, methane, and surface temperature, among others, are used to train a set of neural networks that can predict the behaviour of the resulting phase of an interferogram unwrapped with the DInSAR technique, with the main objective of identifying and characterising the behaviour of the crust based on environmental conditions. For this purpose, the variables were collected, a generalised linear model was built, and a set of neural networks was created. After training the network, validation was carried out with the test data, giving an MSE of 0.17598 and an associated r-squared of approximately 0.88454. The resulting model provided a dataset with good thematic accuracy, reflecting the behaviour of the volcano in 2020, given a set of environmental characteristics.
Keywords: crustal deformation, Tropomi, neural networks (ANN), volcanic activity, DInSAR
Procedia PDF Downloads 104

3900 Xenografts: Successful Penetrating Keratoplasty Between Two Species
Authors: Francisco Alvarado, Luz Ramírez
Abstract:
Corneal diseases are one of the main causes of visual impairment and affect almost 4 million people. This study assesses the effects of deep anterior lamellar keratoplasty (DALK) with porcine corneal stroma and postoperative topical treatment with tacrolimus in patients with infectious keratitis. No patient showed clinical graft rejection. Among the cases, 2 were positive on fungal culture (both with Aspergillus), and the other 8 cases were confirmed by bacteriological culture. Recipient bed diameters ranged from 7.00 to 9.00 mm. No incidents of Descemet's membrane perforation were observed during surgery. During the follow-up period, no corneal graft splitting, IOP increase, or intolerance to tacrolimus was observed. Deep anterior lamellar keratoplasty seems to be the best option to avoid xenograft rejection, and it could inform new surgical techniques in humans.
Keywords: ophthalmology, cornea, corneal transplant, xenografts, surgical innovations
Procedia PDF Downloads 85

3899 A Deep Learning Approach to Calculate Cardiothoracic Ratio From Chest Radiographs
Authors: Pranav Ajmera, Amit Kharat, Tanveer Gupte, Richa Pant, Viraj Kulkarni, Vinay Duddalwar, Purnachandra Lamghare
Abstract:
The cardiothoracic ratio (CTR) is the ratio of the diameter of the heart to the diameter of the thorax. An abnormal CTR, that is, a value greater than 0.55, is often an indicator of an underlying pathological condition. The accurate prediction of an abnormal CTR from chest X-rays (CXRs) aids in the early diagnosis of clinical conditions. We propose a deep learning-based model for automatic CTR calculation that can assist the radiologist with the diagnosis of cardiomegaly and optimize the radiology workflow. The study population included 1012 posteroanterior (PA) CXRs from a single institution. The Attention U-Net deep learning (DL) architecture was used for the automatic calculation of CTR. A CTR of 0.55 was used as a cut-off to categorize the condition as cardiomegaly present or absent. An observer performance test was conducted to assess the radiologist's performance in diagnosing cardiomegaly with and without artificial intelligence (AI) assistance. The Attention U-Net model was highly specific in calculating the CTR. The model exhibited a sensitivity of 0.80 [95% CI: 0.75, 0.85], precision of 0.99 [95% CI: 0.98, 1], and an F1 score of 0.88 [95% CI: 0.85, 0.91]. During the analysis, we observed that 51 out of 1012 samples were misclassified by the model when compared to annotations made by the expert radiologist. We further observed that the sensitivity of the reviewing radiologist in identifying cardiomegaly increased from 40.50% to 88.4% when aided by the AI-generated CTR. Our segmentation-based AI model demonstrated high specificity and sensitivity for CTR calculation. The performance of the radiologist on the observer performance test improved significantly with AI assistance. A DL-based segmentation model for rapid quantification of CTR can therefore have significant potential to be used in clinical workflows.
Keywords: cardiomegaly, deep learning, chest radiograph, artificial intelligence, cardiothoracic ratio
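To illustrate how a CTR value can be derived once heart and thorax segmentation masks are available, here is a minimal sketch (not the study's implementation; it assumes both masks are binary NumPy arrays from the segmentation model, and the toy masks are invented):

```python
import numpy as np

def width_from_mask(mask: np.ndarray) -> int:
    """Maximum horizontal extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """CTR = maximal transverse heart diameter / maximal internal thoracic diameter."""
    heart_w = width_from_mask(heart_mask)
    thorax_w = width_from_mask(thorax_mask)
    return heart_w / thorax_w if thorax_w else float("nan")

# Toy masks: heart occupies columns 40-99, thorax columns 10-189 of a 256x256 grid
heart = np.zeros((256, 256), dtype=bool); heart[100:180, 40:100] = True
thorax = np.zeros((256, 256), dtype=bool); thorax[60:220, 10:190] = True

ctr = cardiothoracic_ratio(heart, thorax)
print(round(ctr, 3), "cardiomegaly" if ctr > 0.55 else "normal")   # 60/180 = 0.333 -> normal
```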
Procedia PDF Downloads 100

3898 The Use of Random Set Method in Reliability Analysis of Deep Excavations
Authors: Arefeh Arabaninezhad, Ali Fakher
Abstract:
Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods are suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, with a relatively small number of simulations compared to fully probabilistic methods, smooth extremes on system responses are obtained. Therefore, the random set approach has been proposed for reliability analysis in geotechnical problems. In the present study, the application of the random set method in the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and subsequently reduce the number of required finite element calculations, sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables. The relevant probability share of each finite element calculation is determined by considering the probability assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main response of the system. The result of the reliability analysis for each deep excavation is presented by constructing the Belief and Plausibility distribution functions (i.e., lower and upper bounds) of the system response obtained from the deterministic finite element calculations. To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models has been compared to the in situ measurements, and good agreement is observed. The comparison also showed that the Random Set Finite Element Method can be applied to estimate the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.
Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty
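The bound-combination step described above can be illustrated with a small sketch (purely illustrative: the response function, the two intervals per variable, and their probability assignments are invented placeholders, not values from the study). Each combination of interval bounds stands in for one finite element run, and its probability share is the product of the assignments of the intervals involved:

```python
from itertools import product

# Two sources of information per input variable: (lower, upper) interval and its probability mass
focal_sets = {
    "friction_angle": [((28.0, 34.0), 0.6), ((30.0, 38.0), 0.4)],
    "cohesion_kpa":   [((5.0, 15.0), 0.5), ((10.0, 25.0), 0.5)],
}

def response(friction_angle, cohesion_kpa):
    """Placeholder for the FE model: horizontal displacement (mm) at the top of the excavation."""
    return 120.0 - 1.5 * friction_angle - 0.8 * cohesion_kpa

results = []   # (min_response, max_response, probability) per focal-element combination
for combo in product(*focal_sets.values()):
    intervals = [interval for interval, _ in combo]
    prob = 1.0
    for _, p in combo:
        prob *= p
    # Evaluate the response at all corners of the interval box (vertex method)
    corners = [response(*point) for point in product(*intervals)]
    results.append((min(corners), max(corners), prob))

threshold = 70.0   # allowable displacement (mm), illustrative
belief = sum(p for lo, hi, p in results if hi <= threshold)        # surely acceptable
plausibility = sum(p for lo, hi, p in results if lo <= threshold)  # possibly acceptable
print(f"Belief={belief:.2f}, Plausibility={plausibility:.2f} that displacement <= {threshold} mm")
```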
Procedia PDF Downloads 268

3897 A Comparative Time-Series Analysis and Deep Learning Projection of Innate Radon Gas Risk in Canadian and Swedish Residential Buildings
Authors: Selim M. Khan, Dustin D. Pearson, Tryggve Rönnqvist, Markus E. Nielsen, Joshua M. Taron, Aaron A. Goodarzi
Abstract:
Accumulation of radioactive radon gas in indoor air poses a serious risk to human health by increasing the lifetime risk of lung cancer and is classified by IARC as a category one carcinogen. Radon exposure risks are a function of geologic, geographic, design, and human behavioural variables and can change over time. Using time series and deep machine learning modelling, we analyzed long-term radon test outcomes as a function of building metrics from 25,489 Canadian and 38,596 Swedish residential properties constructed between 1945 and 2020. While Canadian and Swedish properties built between 1970 and 1980 are comparable (96–103 Bq/m³), innate radon risks subsequently diverge, rising in Canada and falling in Sweden, such that 21st Century Canadian houses show 467% greater average radon (131 Bq/m³) relative to Swedish equivalents (28 Bq/m³). These trends are consistent across housing types and regions within each country. The introduction of energy efficiency measures within Canadian and Swedish building codes coincided with opposing radon level trajectories in each nation. Deep machine learning modelling predicts that, without intervention, average Canadian residential radon levels will increase to 176 Bq/m³ by 2050, emphasizing the importance and urgency of future building code intervention to achieve systemic radon reduction in Canada.
Keywords: radon health risk, time-series, deep machine learning, lung cancer, Canada, Sweden
Procedia PDF Downloads 86

3896 Relationship of Arm Acupressure Points and Thai Traditional Massage
Authors: Boonyarat Chaleephay
Abstract:
The purpose of this research paper was to describe the relationship between acupressure points on the anterior surface of the upper limb, in accordance with Applied Thai Traditional Massage (ATTM), and the deep structures located at those acupressure points. There were 2 population groups: normal subjects and cadaver specimens. Eighteen males aged 20-40 years and seventeen females aged 30-97 years were studied. This study was able to obtain fundamental knowledge concerning the acupressure points and the deep structures related to them. It might be used as basic knowledge for clinical application and treatment planning, as well as for teaching in ATTM.
Keywords: acupressure point (AP), applied Thai traditional medicine (ATTM), paresthesia, numbness
Procedia PDF Downloads 240