Search results for: back propagation neural network model
19670 A Comparative Study on Deep Learning Models for Pneumonia Detection
Authors: Hichem Sassi
Abstract:
Pneumonia, being a respiratory infection, has garnered global attention due to its rapid transmission and relatively high mortality rates. Timely detection and treatment play a crucial role in significantly reducing mortality associated with pneumonia. Presently, X-ray diagnosis stands out as a reasonably effective method. However, the manual scrutiny of a patient's chest X-ray radiograph by a proficient practitioner usually requires 5 to 15 minutes, and where cases are concentrated, this places immense pressure on clinicians. Relying solely on the visual acumen of imaging doctors is inefficient given the low speed of manual analysis, so integrating artificial intelligence into the clinical image diagnosis of pneumonia becomes imperative. AI recognition is notably rapid, with convolutional neural networks (CNNs) demonstrating performance superior to human readers in image identification tasks. For our study, we used a chest X-ray dataset obtained from Kaggle comprising 5216 training images and 624 test images in two classes: normal and pneumonia. Employing five mainstream network architectures, we classified the images in this dataset and compared the results. The integration of artificial intelligence, particularly through improved network architectures, is a transformative step towards more efficient and accurate clinical diagnoses across medical domains.
Keywords: deep learning, computer vision, pneumonia, models, comparative study
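Comparing classifiers on a two-class test set like the one above comes down to metrics derived from the confusion matrix. A minimal sketch of those metrics; the counts below are illustrative only, not the paper's results:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from the confusion-matrix
    counts of a two-class (normal vs. pneumonia) classifier."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # pneumonia cases caught
        "specificity": tn / (tn + fp),   # normals correctly cleared
    }

# Illustrative counts for a 624-image test set (hypothetical numbers).
m = binary_metrics(tp=370, fp=20, tn=214, fn=20)
```

Ranking the five architectures on accuracy alone can hide a poor sensitivity, which matters most clinically; reporting all three avoids that.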
Procedia PDF Downloads 64
19669 Knee Pain Reduction: Holistic vs. Traditional
Authors: Renee Moten
Abstract:
Introduction: Knee pain becomes chronic because therapy typically focuses on the symptoms of knee pain rather than its causes, and preventing knee injuries is not in the traditional practitioner's toolbox. This research was done to show that we must reduce inflammation (holistically), reduce swelling, and regain flexibility before considering any type of exercise. Performing the correct exercises stops the bowing of the knee, corrects the walking gait, and starts to relieve knee, hip, back, and shoulder pain. Method: The holistic method used to heal knees is called the Knee Pain Recipe. It is a six-step system that uses only alternative medicine methods to reduce, relieve, and restore knee joint mobility. The system is low cost, with no hospital bills, no physical therapy, and no painkillers that can damage the kidneys and liver. The method was tested on 200 women with knee, back, hip, and shoulder pain. Results: All 200 women reduced their knee pain by 50%, some by as much as 90%. Learning about ankle and foot flexibility, along with understanding the kinetic chain, helps improve the walking gait, which takes pressure off the knee, hip, and back. The Knee Pain Recipe also reduced the need for cortisone injections, stem cell procedures, painkillers, and surgeries. The research also noted that when a woman's knees were too far gone, the Knee Pain Recipe helped prepare her for knee replacement surgery. Conclusion: It is believed that the Knee Pain Recipe, when performed by men and women around the world, will give them a holistic alternative to drugs, injections, and surgeries.
Keywords: knee, surgery, healing, holistic
Procedia PDF Downloads 75
19668 DeClEx-Processing Pipeline for Tumor Classification
Authors: Gaurav Shinde, Sai Charan Gongiguntla, Prajwal Shirur, Ahmed Hambaba
Abstract:
Health issues are increasing significantly, putting a substantial strain on healthcare services and accelerating the integration of machine learning in healthcare, particularly since the COVID-19 pandemic. We introduce DeClEx, a pipeline that ensures data mirrors real-world settings by incorporating Gaussian noise and blur, and that employs autoencoders to learn intermediate feature representations. Subsequently, our convolutional neural network, paired with spatial attention, provides accuracy comparable to state-of-the-art pre-trained models while achieving a threefold improvement in training speed. Furthermore, we provide interpretable results using explainable AI techniques. Denoising and deblurring, classification, and explainability are thus integrated in a single pipeline called DeClEx.
Keywords: machine learning, healthcare, classification, explainability
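The "mirror real-world settings" step above amounts to corrupting clean images with Gaussian noise and blur before training. A minimal 1-D stand-in for those two operations (the paper's exact kernel sizes and noise levels are not given, so the parameters here are assumptions):

```python
import random

def add_gaussian_noise(pixels, sigma, seed=0):
    """Corrupt a row of grayscale pixels in [0, 1] with Gaussian noise,
    clipping back into the valid intensity range."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]

def box_blur_1d(pixels):
    """3-tap box blur with edge replication, a simple stand-in for the
    pipeline's 2-D blur step."""
    padded = [pixels[0]] + list(pixels) + [pixels[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(padded) - 1)]

noisy = add_gaussian_noise([0.5] * 8, sigma=0.1)
blurred = box_blur_1d([0.0, 0.0, 1.0, 0.0, 0.0])
```

A real pipeline would apply the 2-D analogues to whole images; the clipping and kernel normalization carry over unchanged.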
Procedia PDF Downloads 56
19667 Effect of Assumptions of Normal Shock Location on the Design of Supersonic Ejectors for Refrigeration
Authors: Payam Haghparast, Mikhail V. Sorin, Hakim Nesreddine
Abstract:
The complex oblique shock phenomenon can be simply assumed as a normal shock at the constant-area section to simulate a sharp pressure increase and velocity decrease in 1-D thermodynamic models. The assumed normal shock location is one of the greatest sources of error in ejector thermodynamic models, yet most researchers pick an arbitrary location without justifying it. Our study compares the effect of the normal shock location on ejector dimensions in 1-D models. To this aim, two ejector experimental test benches are considered, a constant area-mixing ejector (CAM) and a constant pressure-mixing ejector (CPM), with different known geometries, operating conditions, and working fluids (R245fa, R141b). In the first step, in order to evaluate the real values of the efficiencies in the different ejector parts and the critical back pressure, a CFD model was built and validated against experimental data for both ejector types. These reference data were then used as input to the 1-D model to calculate the lengths and diameters of the ejectors. Afterwards, the output geometry calculated by the 1-D model was compared directly with the corresponding experimental geometry. Good agreement was found between the ejector dimensions obtained by the 1-D model, for both CAM and CPM, and the experimental ejector data. Furthermore, it is shown that the normal shock location affects only the constant-area length, and that assuming the shock at the inlet of the constant-area duct yields the more accurate length. Taking previous 1-D models into account, the results support placing the assumed normal shock at the inlet of the constant-area duct when designing supersonic ejectors.
Keywords: 1D model, constant area-mixing, constant pressure-mixing, normal shock location, ejector dimensions
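The "sharp pressure increase and velocity decrease" that the assumed normal shock provides is given by the classical ideal-gas jump relations. A sketch of the two relations a 1-D model evaluates at the shock station; gamma = 1.4 (air) is used for illustration only, since R245fa and R141b have different specific-heat ratios:

```python
def normal_shock(M1, gamma=1.4):
    """Ideal-gas normal-shock jump relations: static pressure ratio
    p2/p1 and downstream Mach number M2 for upstream Mach M1 > 1."""
    p_ratio = (2.0 * gamma * M1 ** 2 - (gamma - 1.0)) / (gamma + 1.0)
    M2 = (((gamma - 1.0) * M1 ** 2 + 2.0)
          / (2.0 * gamma * M1 ** 2 - (gamma - 1.0))) ** 0.5
    return p_ratio, M2

p_ratio, M2 = normal_shock(2.0)   # flow at Mach 2 ahead of the shock
```

For M1 = 2 and gamma = 1.4 this gives p2/p1 = 4.5 and a subsonic M2 of about 0.577, the kind of abrupt recompression the 1-D ejector model relies on.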
Procedia PDF Downloads 194
19666 Study on Runoff Allocation Responsibilities of Different Land Uses in a Single Catchment Area
Authors: Chuan-Ming Tung, Jin-Cheng Fu, Chia-En Feng
Abstract:
In recent years, the rapid development of urban land in Taiwan has constantly increased the area of impervious surface, raising the risk of waterlogging during heavy rainfall. Promoting runoff allocation responsibilities has therefore often been used as a means of reducing regional flooding. This study examines a single catchment area covering both urban and rural land. Based on the Storm Water Management Model (SWMM), runoff allocation responsibilities for urban and rural land in a single catchment were developed according to their respective land-use control regulations. The impacts of runoff increments and reductions in each sub-catchment were studied to understand how highly developed urban land affects the flood risk of rural land at the back end. The analysis used 1-hour design storms with 2-, 5-, 10-, and 25-year return periods. If the study area were fully developed without runoff allocation responsibilities, the peak discharge at the outlet would increase by 24.46%-22.97%, and the front-end urban land would increase runoff onto the back-end rural land by 76.19%-46.51%. However, if runoff allocation responsibilities were carried out, the peak discharge could be reduced by 58.38%-63.08%, with the front-end passing 54.05%-23.81% less peak flow to the back-end. In addition, from the perspective of runoff allocation responsibility per unit area, residential areas on urban land benefit from the laws and regulations of the urban system and reduce floods more effectively than residential land on rural land. On rural land, the development scale of residential land is generally small, making its flood reduction effect better than that of industrial land. Agricultural land requires a large area, resulting in the lowest share of the flow per unit area. From the planners' point of view, this study suggests that rural land around the city should also be assigned responsibility to share the runoff: setting up rainwater storage facilities as on urban land, and taking stock of agricultural land resources to raise field ridges for flood storage, in order to improve regional disaster reduction capacity and resilience.
Keywords: runoff allocation responsibilities, land use, flood mitigation, SWMM
Procedia PDF Downloads 104
19665 Suitable Models and Methods for the Steady-State Analysis of Multi-Energy Networks
Authors: Juan José Mesas, Luis Sainz
Abstract:
The motivation for this paper lies in the need for energy networks to reduce losses, improve performance, optimize their operation, and benefit from the interconnection capacity with networks enabled for other energy carriers. These interconnections generate interdependencies between energy networks, which require suitable models and methods for their analysis. Traditionally, the modeling and study of energy networks have been carried out independently for each energy carrier. Thus, there are well-established models and methods for the steady-state analysis of electrical networks, gas networks, and thermal networks separately. The aim is to extend and combine them adequately in order to face, in an integrated way, the steady-state analysis of networks with multiple energy carriers. Firstly, the added value of multi-energy networks, their operation, and the basic principles that characterize them are explained. In addition, two aspects of great current relevance are presented: storage technologies and the coupling elements used to interconnect one energy network with another. Secondly, the characteristic equations of the different energy networks necessary for the steady-state analysis are detailed; the electrical network, the natural gas network, and the thermal network for heating and cooling are considered in this paper. After the presentation of the equations, a particular case of the steady-state analysis of a specific multi-energy network is studied: the network is represented graphically, the interconnections between the different energy carriers are described, their technical data are given, and the equations presented theoretically are formulated and developed. Finally, the two iterative numerical resolution methods considered in this paper are presented, together with the resolution procedure and the results obtained. The pros and cons of both methods are explained. It is verified that the results obtained for the electrical network (voltage moduli and angles), the natural gas network (pressures), and the thermal network (mass flows and temperatures) are correct, since they comply with the distribution, operation, consumption, and technical characteristics of the multi-energy network under study.
Keywords: coupling elements, energy carriers, multi-energy networks, steady-state analysis
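The abstract does not name its two iterative methods, but Newton-Raphson iteration is the standard workhorse for coupled steady-state network equations of this kind. A minimal sketch on a toy two-equation system; the equations below are illustrative stand-ins, not the paper's formulation:

```python
def newton_2x2(F, x, h=1e-7, tol=1e-10, max_iter=50):
    """Newton-Raphson for a two-equation nonlinear system using a
    finite-difference Jacobian; multi-energy load-flow solvers iterate
    the same way on much larger systems of balance equations."""
    for _ in range(max_iter):
        f0, f1 = F(x)
        if abs(f0) < tol and abs(f1) < tol:
            break
        fa = F([x[0] + h, x[1]])                     # perturb x0
        fb = F([x[0], x[1] + h])                     # perturb x1
        a, c = (fa[0] - f0) / h, (fa[1] - f1) / h    # dF/dx0 column
        b, d = (fb[0] - f0) / h, (fb[1] - f1) / h    # dF/dx1 column
        det = a * d - b * c
        # Solve J * delta = -F by Cramer's rule, then update the state.
        x = [x[0] + (-f0 * d + f1 * b) / det,
             x[1] + (-f1 * a + f0 * c) / det]
    return x

# Toy stand-in for coupled network equations: one quadratic "balance"
# equation and one linear "coupling" equation; solution x0 = x1 = sqrt(2).
sol = newton_2x2(lambda x: [x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]],
                 [1.0, 0.5])
```

In a real multi-energy solver the unknown vector stacks voltages, gas pressures, and thermal mass flows, and an analytical Jacobian replaces the finite differences; the iteration structure is unchanged.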
Procedia PDF Downloads 79
19664 Intrusion Detection in Computer Networks Using a Hybrid Model of Firefly and Differential Evolution Algorithms
Authors: Mohammad Besharatloo
Abstract:
Intrusion detection is an important research topic in network security because of the increasing growth in the use of computer network services. Intrusion detection aims to detect unauthorized use or abuse of networks and systems by intruders; an intrusion detection system is thus an efficient tool to control user access through predefined regulations. Since the data used in intrusion detection systems are high-dimensional, a proper representation is required to capture their basic structure, and redundant features must be eliminated to create the best representation subset. In the proposed method, a hybrid model of the differential evolution and firefly algorithms is employed to choose the best subset of features, and a decision tree and a support vector machine (SVM) are adopted to evaluate the quality of the selected features. First, the sorted population is divided into two sub-populations, and the two optimization algorithms are applied to them respectively; the sub-populations are then merged to form the population of the next iteration. The performance evaluation of the proposed method is based on the KDD Cup 99 dataset. The simulation results show that the proposed method outperforms the other methods in this context.
Keywords: intrusion detection system, differential evolution, firefly algorithm, support vector machine, decision tree
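The differential evolution half of the hybrid repeatedly builds trial vectors from the current population. A sketch of the classic DE/rand/1/bin trial-vector construction; this is the textbook operator, not necessarily the authors' exact variant, and for feature selection the continuous genes would typically be thresholded into a binary include/exclude mask:

```python
import random

def de_trial(pop, i, f=0.5, cr=0.9, seed=42):
    """One DE/rand/1/bin step: mutate three distinct other members of
    the population, then binomially cross over with target vector i."""
    rng = random.Random(seed)
    a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
    target = pop[i]
    j_rand = rng.randrange(len(target))   # guarantee at least one mutated gene
    return [a[j] + f * (b[j] - c[j])
            if (rng.random() < cr or j == j_rand) else target[j]
            for j in range(len(target))]

# Tiny population of 3-gene vectors (one gene per candidate feature).
pop = [[0.1, 0.2, 0.7], [0.4, 0.1, 0.3], [0.9, 0.7, 0.2], [0.3, 0.8, 0.5]]
trial = de_trial(pop, 0)
```

In the paper's scheme, one sub-population evolves with steps like this while the other follows firefly attraction moves, and the SVM/decision-tree accuracy of the selected feature subset serves as the fitness.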
Procedia PDF Downloads 91
19663 U-Net Based Multi-Output Network for Lung Disease Segmentation and Classification Using Chest X-Ray Dataset
Authors: Jaiden X. Schraut
Abstract:
Medical image segmentation of chest X-rays is used to identify and differentiate lung cancer, pneumonia, COVID-19, and similar respiratory diseases. Widespread application of computer-supported perception methods in the diagnostic pipeline has been demonstrated to increase prognostic accuracy and to help doctors treat patients efficiently. Modern models attempt segmentation and classification separately to improve diagnostic efficiency; to further enhance this process, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional CNN module for an auxiliary classification output. The proposed model achieves a final Jaccard index of 0.9634 for image segmentation and a final accuracy of 0.9600 for classification on the COVID-19 radiography database.
Keywords: chest X-ray, deep learning, image segmentation, image classification
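The segmentation figure reported above is the Jaccard index (intersection over union) between the predicted and ground-truth lung masks. A minimal sketch on flattened binary masks:

```python
def jaccard_index(pred, truth):
    """Jaccard index (IoU) between two binary masks given as flat
    sequences of 0/1 values: |intersection| / |union|."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0   # two empty masks agree fully

score = jaccard_index([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
```

Two pixels are in both masks and four are in at least one, so the toy score is 0.5; a value of 0.9634 means the predicted and true lung regions overlap almost completely.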
Procedia PDF Downloads 144
19662 Cost Analysis of Optimized Fast Network Mobility in IEEE 802.16e Networks
Authors: Seyyed Masoud Seyyedoshohadaei, Borhanuddin Mohd Ali
Abstract:
To support group mobility, the NEMO Basic Support Protocol has been standardized as an extension of Mobile IP that enables an entire network to change its point of attachment to the Internet. Using NEMO in IEEE 802.16e (WiMAX) networks causes latency in the handover procedure and disrupts seamless communication for real-time applications. To decrease handover latency and service disruption time, the authors of this paper previously introduced an integrated scheme named Optimized Fast NEMO (OFNEMO), which uses pre-established multiple tunnels, cross-function optimization, and a cross-layer design. In this paper, an analytical model is developed to evaluate the total cost, consisting of the signaling and packet delivery costs, of OFNEMO compared with RFC 3963. Results show that OFNEMO increases the probability of predictive-mode handover compared with RFC 3963 due to its smaller handover latency. Even though OFNEMO needs extra signaling to pre-establish multiple tunnels, it has a lower total cost thanks to its optimized algorithm, and it can minimize handover latency to support real-time applications in moving networks.
Keywords: fast mobile IPv6, handover latency, IEEE802.16e, network mobility
Procedia PDF Downloads 197
19661 CFD Modeling of Mixing Enhancement in a Pitted Micromixer by High Frequency Ultrasound Waves
Authors: Faezeh Mohammadi, Ebrahim Ebrahimi, Neda Azimi
Abstract:
The use of ultrasound waves is one technique for increasing mixing and mass transfer in microdevices. Ultrasound propagation into a liquid medium agitates the fluid, creates turbulence, and so increases mixing performance. In this study, CFD modeling of two-phase flow in a pitted micromixer equipped with a piezoelectric transducer operating at 1.7 MHz is presented. The micromixer was modeled at different fluid flow velocities both without ultrasound and with ultrasound applied, and the hydrodynamics and mixing efficiency of the two configurations were compared. The CFD results show good agreement with the experimental results. The flow pattern inside the micromixer is parallel in the absence of ultrasound but not when ultrasound is applied: the propagation of ultrasound energy into the fluid changes the hydrodynamics and the flow pattern and thereby enhances mixing. In general, the CFD results indicate that applying ultrasound energy to the liquid medium increases turbulence and mixing and, consequently, improves the mass transfer rate within the micromixer.
Keywords: CFD modeling, ultrasound, mixing, mass transfer
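Mixing efficiency in CFD studies of this kind is commonly quantified by an intensity-of-segregation style index computed from the concentration field at an outlet cross-section. A sketch of one standard definition; the paper does not state which index it uses, so this is an assumption:

```python
def mixing_index(concentrations):
    """Mixing index 1 - sigma/sigma_max over sampled concentrations of
    a binary mixture, where sigma_max is the standard deviation of the
    fully segregated limit: 1.0 = perfectly mixed, 0.0 = segregated."""
    n = len(concentrations)
    mean = sum(concentrations) / n
    var = sum((c - mean) ** 2 for c in concentrations) / n
    var_max = mean * (1.0 - mean)   # fully segregated binary mixture
    return 1.0 - (var / var_max) ** 0.5 if var_max else 1.0

segregated = mixing_index([1.0, 1.0, 0.0, 0.0])   # parallel, unmixed streams
mixed = mixing_index([0.5, 0.5, 0.5, 0.5])        # uniform concentration
```

The parallel flow pattern without ultrasound corresponds to the segregated case (index near 0), while the ultrasound-agitated flow drives the index towards 1.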
Procedia PDF Downloads 182
19660 Correlation between Speech Emotion Recognition Deep Learning Models and Noises
Authors: Leah Lee
Abstract:
This paper examines the correlation between deep learning models and emotions under added noise, to determine whether noise masks emotion. The deep learning models used are a plain convolutional neural network (CNN), an auto-encoder, long short-term memory (LSTM), and Visual Geometry Group-16 (VGG-16). The emotion datasets used are the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), the Toronto Emotional Speech Set (TESS), and Surrey Audio-Visual Expressed Emotion (SAVEE). The combined dataset is made four times bigger by augmenting the audio files with stretch and pitch transformations, and five different features are extracted from the augmented datasets as model inputs. Eight different emotions are classified. The noise variations are white noise, dog barking, and cough sounds, at signal-to-noise ratio (SNR) values of 0, 20, and 40. In total, per deep learning model, nine different sets with noise and SNR variations, plus the augmented audio files without any noise, are used in the experiment. To compare the deep learning models, accuracy and the receiver operating characteristic (ROC) are evaluated.
Keywords: auto-encoder, convolutional neural networks, long short-term memory, speech emotion recognition, visual geometry group-16
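Conditions like the 0/20/40 SNR variants above are built by scaling the noise so the mixture hits a target signal-to-noise ratio in dB. A sketch of that construction; the deterministic "noise" sequence below is only a stand-in for the white-noise, barking, and cough recordings:

```python
import math

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so the resulting mixture signal + k*noise has the
    requested signal-to-noise ratio (in dB), using average power."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    k = math.sqrt(p_sig / (p_noise * 10.0 ** (snr_db / 10.0)))
    return [s + k * n for s, n in zip(signal, noise)]

signal = [math.sin(0.1 * t) for t in range(1000)]
noise = [((t * 7919) % 13 - 6) / 6.0 for t in range(1000)]  # stand-in noise
mixed = mix_at_snr(signal, noise, 20.0)
```

An SNR of 0 means noise power equals signal power (the hardest condition), while 40 leaves the emotion cues almost untouched, which is what makes the three levels a useful masking probe.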
Procedia PDF Downloads 75
19659 Methods for Restricting Unwanted Access on the Networks Using Firewall
Authors: Bhagwant Singh, Sikander Singh Cheema
Abstract:
This paper examines in depth the firewall mechanisms routinely implemented for network security. A firewall cannot protect against all the hazards of unauthorized networks; consequently, many kinds of infrastructure are employed to establish a secure network. Firewall strategies have already been the subject of significant analysis. This study's primary purpose is to prevent unwanted connections by combining the capability of the firewall with additional firewall mechanisms, including packet filtering, NAT, VPNs, and backdoor solutions. Studies on firewall potential and combined approaches remain insufficient. The research team's goal is to build a safe network by integrating firewall strength and firewall methods, and the findings indicate that the recommended concept can form a reliable network. This study examines the characteristics of network security and its primary dangers, synthesizes existing domestic and foreign firewall technologies, and discusses the theories, benefits, and disadvantages of different firewalls. Through synthesis and comparison of various techniques, as well as an in-depth examination of the primary factors that affect firewall effectiveness, this study investigates the current application of firewall technology in computer network security and then introduces a new technique named the "tight coupling firewall". Finally, the article discusses the current state of firewall technology and the direction in which it is developing.
Keywords: firewall strategies, firewall potential, packet filtering, NAT, VPN, proxy services, firewall techniques
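The packet-filtering mechanism named above reduces to matching each packet against an ordered rule table and applying the first matching rule's action. A minimal first-match sketch (the field names and default-deny policy are illustrative assumptions, not the paper's design):

```python
def match_rule(rule, packet):
    """A packet matches a rule when every specified field agrees;
    None in a rule field acts as a wildcard."""
    return all(rule[k] is None or rule[k] == packet.get(k)
               for k in ("src", "dst", "port", "proto"))

def filter_packet(rules, packet, default="DENY"):
    """First-match packet filtering with a default-deny policy, the
    core decision loop of a filtering firewall."""
    for rule in rules:
        if match_rule(rule, packet):
            return rule["action"]
    return default

rules = [
    {"src": None, "dst": "10.0.0.5", "port": 22, "proto": "tcp",
     "action": "DENY"},                      # block SSH to one host
    {"src": None, "dst": None, "port": 443, "proto": "tcp",
     "action": "ALLOW"},                     # allow HTTPS anywhere
]
verdict = filter_packet(rules, {"src": "1.2.3.4", "dst": "10.0.0.9",
                                "port": 443, "proto": "tcp"})
```

Rule order matters with first-match semantics: putting the HTTPS allow rule above the SSH deny rule would not change behavior here, but in general specific deny rules are listed before broad allows.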
Procedia PDF Downloads 101
19658 Social and Economic Aspects of Unlikely but Still Possible Welfare to Work Transitions from Long-Term Unemployed
Authors: Andreas Hirseland, Lukas Kerschbaumer
Abstract:
In Germany, about one million people have been continuously long-term unemployed in recent years and did not benefit from the prospering labor market, while most of the short-term unemployed did. Instead, they remain dependent on welfare and sometimes on precarious short-term employment, experiencing in-work poverty. Long-term unemployment thus becomes a main obstacle to regular employment, especially when accompanied by other impediments such as low education (school or vocational), poor health (especially chronic illness), advanced age (over fifty), immigrant status, motherhood, or caring for other relatives. Almost two-thirds of all welfare recipients have multiple impediments that hinder a successful transition from welfare back to sustainable and sufficient employment, and hiring them is often considered too risky an investment by employers. Formal application schemes based on qualification certificates and vocational biographies may reduce employers' risks, but they do not help the long-term unemployed and welfare recipients. The panel survey 'Labor market and social security' (PASS; ~15,000 respondents in ~10,000 households), carried out by the Institute for Employment Research (the research institute of the German Federal Labor Agency), shows that their chance of getting back to work tends to fall to nil: only 66 cases of such unlikely transitions could be observed. In a sequential explanatory mixed-method study, these very scarce 'success stories' of unlikely transitions from long-term unemployment to work were explored through qualitative inquiry: in-depth interviews with a biographical focus, accompanied by qualitative network techniques, to gain more detailed insight into the actors involved in the processes that promote the transition from welfare receipt to work. There is strong evidence that sustainable transitions are influenced by biographical resources, such as habits of network use, a set of informal skills, and particularly a resilient way of dealing with obstacles, combined with contextual factors, rather than by job-placement procedures promoted by job centers according to activation rules or by following formal application paths. On the employers' side, small and medium-sized enterprises often give job opportunities to a wider variety of applicants, frequently on the basis of a slowly but steadily growing relationship leading to employment. These results make it possible to show and discuss some limitations of (German) activation policies targeting welfare dependency and long-term unemployment. Based on these findings, indications for more supportive small-scale measures in the field of labor-market policies are suggested to help long-term unemployed people with multiple impediments overcome their situation.
Keywords: against-all-odds, economic sociology, long-term unemployment, mixed-methods
Procedia PDF Downloads 238
19657 Strengthening by Assessment: A Case Study of Rail Bridges
Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas
Abstract:
The United Kingdom has one of the oldest railway networks in the world, dating back to 1825 when the world's first passenger railway was opened. The network has some 40,000 bridges of various construction types, using a wide range of materials including masonry, steel, cast iron, wrought iron, concrete, and timber. It is commonly accepted that the successful operation of the network is vital for the economy of the United Kingdom; consequently, the cost-effective maintenance of the existing infrastructure is a high priority, to maintain the operability of the network, prevent deterioration, and extend the life of the assets. Every bridge on the railway network must be assessed every eighteen years, and a structured approach is adopted with three progressively more detailed assessment types: Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations), and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment, dictated to some extent by the relevant standards, which can lead to some structures not achieving the required load rating. In these situations, a Level 2 assessment is often carried out using finite element analysis to uncover 'latent strength' and improve the load rating. If successful, the more sophisticated analysis can avoid costly strengthening or replacement works and disruption to the operational railway. This paper presents the 'strengthening by assessment' achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling and functions (material, geometric, and support) to better understand buckling modes and the structural behaviour of historic construction details not specifically covered by assessment codes are outlined. Metallic bridges, which are susceptible to loss of section through corrosion, have the largest scope for improvement under the Level 2 assessment methodology. Three case studies demonstrate the effectiveness of the sophisticated Level 2 methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 assessments: one rail overbridge and two rail underbridges that did not achieve the required load rating in a Level 1 assessment, owing to the inadequate restraint provided by U-frame action, are examined, and the increase in assessed capacity given by the Level 2 assessment is outlined.
Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening
Procedia PDF Downloads 309
19656 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time
Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla
Abstract:
Society demands more reliable manufacturing processes capable of producing high-quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role: it is able to detect, isolate, and adapt a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered faultless. In addition, the main FTC techniques are identified and classified, based on their characteristics, into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modifications. The re-configuration in AFTC requires two stages, one focused on detection, isolation, and identification of the fault source, and the other in charge of re-designing the control algorithm by two approaches: fault accommodation and control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system responds when a fault arises under conditions similar to those a machine experiences on the factory floor. One AFTC approach has been chosen as the methodology the system follows in the fault recovery process: first, the fault is detected, isolated, and identified by means of a neural network; second, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
Keywords: fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time
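The detection stage of an AFTC scheme compares plant measurements against a model prediction and flags a fault when the residual grows too large. The paper performs this stage with a neural network; a fixed residual threshold is the simplest stand-in, sketched below with hypothetical press-cylinder pressure values:

```python
def detect_fault(measured, predicted, threshold):
    """Residual-based fault detection: flag a fault when any residual
    between plant measurements and model predictions exceeds the
    threshold. A trained classifier (as in the paper) replaces this
    fixed threshold in practice."""
    residuals = [abs(m - p) for m, p in zip(measured, predicted)]
    return max(residuals) > threshold, residuals

# Hypothetical cylinder pressures (bar): plant readings vs. model output.
fault, res = detect_fault(measured=[100.1, 99.8, 103.7],
                          predicted=[100.0, 100.0, 100.0],
                          threshold=2.5)
```

Once the fault flag is raised, the isolation/identification result drives the second stage, where the controller is re-configured via fault accommodation or control re-design.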
Procedia PDF Downloads 177
19655 Network Pharmacological Evaluation of Holy Basil Bioactive Phytochemicals for Identifying Novel Potential Inhibitors Against Neurodegenerative Disorder
Authors: Bhuvanesh Baniya
Abstract:
Alzheimer's disease is an illness responsible for neuronal cell death that results in lifelong cognitive problems. Because its mechanism remains unclear, no effective drugs are available for treatment. Herbal drugs have long served as a role model in the drug discovery process, and holy basil has been used in the Indian medicinal system (Ayurveda) for decades for neuronal disorders such as insomnia and memory loss. This study aims to identify active components of holy basil as potential inhibitors for the treatment of Alzheimer's disease. To fulfill this objective, a network pharmacology approach, gene ontology, pharmacokinetics analysis, molecular docking, and molecular dynamics simulation (MDS) studies were performed. A total of 7 active components of holy basil, 12 predicted neurodegenerative targets of holy basil, and 8063 Alzheimer-related targets were identified from different databases. The network analysis showed that the top ten targets, APP, EGFR, MAPK1, ESR1, HSPA4, PRKCD, MAPK3, ABL1, JUN, and GSK3B, were significant targets related to Alzheimer's disease. On the basis of the gene ontology and topology analysis results, APP was found to be a significant target in Alzheimer's disease pathways. The molecular docking results showed that various compounds exhibit strong binding affinities, and the top MDS results suggested that these compounds could act as potential inhibitors of the APP protein and could be useful for the treatment of Alzheimer's disease.
Keywords: holy basil, network pharmacology, neurodegeneration, active phytochemicals, molecular docking and simulation
Procedia PDF Downloads 101
19654 A Relational Approach to Adverb Use in Interactions
Authors: Guillaume P. Fernandez
Abstract:
Individual language use is a matter of choice in particular interactions. The paper proposes a conceptual and theoretical framework with methodological consideration to develop how language produced in dyadic relations is to be considered and situated in the larger social configuration the interaction is embedded within. An integrated and comprehensive view is taken: social interactions are expected to be ruled by a normative context, defined by the chain of interdependences that structures the personal network. In this approach, the determinants of discursive practices are not only constrained by the moment of production and isolated from broader influences. Instead, the position the individual and the dyad have in the personal network influences the discursive practices in a twofold manner: on the one hand, the network limits the access to linguistic resources available within it, and, on the other hand, the structure of the network influences the agency of the individual, by the social control inherent to particular network characteristics. Concretely, we investigate how and to what extent consistent ego is from one interaction to another in his or her use of adverbs. To do so, social network analysis (SNA) methods are mobilized. Participants (N=130) are college students recruited in the french speaking part of Switzerland. The personal network of significant ones of each individual is created using name generators and edge interpreters, with a focus on social support and conflict. For the linguistic parts, respondents were asked to record themselves with five of their close relations. From the recordings, we computed an average similarity score based on the adverb used across interactions. 
In terms of analyses, two are envisaged. First, OLS regressions including network-level measures, such as density and reciprocity, and individual-level measures, such as centralities, are performed to understand the tenets of linguistic similarity from one interaction to another. The second analysis considers each social tie as nested within ego networks: multilevel models are performed to investigate how different types of ties may influence the likelihood of using adverbs, controlling for structural properties of the personal network. Preliminary results suggest that the more cohesive the network, the less likely the individual is to change his or her manner of speaking, and that social support increases the use of adverbs in interactions. While promising results emerge, further research should take a longitudinal approach to enable claims of causality.Keywords: personal network, adverbs, interactions, social influence
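One way to operationalize the average similarity score across a speaker's recorded interactions is sketched below. The Jaccard index over per-interaction adverb sets is an assumption made here for illustration; the study's exact metric is not specified in the abstract, and the adverbs are placeholders.

```python
# Hedged sketch: average pairwise similarity of ego's adverb use
# across interactions, using the Jaccard index (an assumed metric).
from itertools import combinations

interactions = {  # adverbs ego used with each close relation (illustrative)
    "tie1": {"vraiment", "toujours", "bien"},
    "tie2": {"vraiment", "souvent", "bien"},
    "tie3": {"toujours", "bien"},
}

def jaccard(a, b):
    # shared adverbs over total distinct adverbs for the pair
    return len(a & b) / len(a | b)

pairs = list(combinations(interactions.values(), 2))
avg_similarity = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

A score near 1 would indicate a highly consistent ego; a score near 0, a speaker who adapts adverb use strongly to each tie.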
Procedia PDF Downloads 6719653 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video
Authors: Nidhal K. Azawi, John M. Gauch
Abstract:
Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural networks to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92% and 98%. We also show how the removal of noninformative images, together with image alignment, can aid in the creation of image panoramas and other visualizations of colonoscopy images.Keywords: colonoscopy classification, feature extraction, image alignment, machine learning
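The feature-extraction step can be illustrated with a minimal sketch: intensity variance is one crude cue for flagging blurred or washed-out frames, since near-uniform frames carry little diagnostic content. The threshold value and the tiny synthetic "frames" below are assumptions for illustration, not the paper's actual features or data.

```python
# Hedged sketch of a crude informative/noninformative frame test:
# low intensity variance suggests a blurred or washed-out frame.
# The threshold (100.0) and the 2x2 "frames" are illustrative only.

def variance(pixels):
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def is_informative(frame, threshold=100.0):
    flat = [p for row in frame for p in row]
    return variance(flat) >= threshold

sharp_frame = [[0, 255], [255, 0]]       # high-contrast stand-in
blurry_frame = [[128, 130], [129, 131]]  # near-uniform stand-in
```

A real pipeline would combine several such features (edge energy, highlight area, motion estimates) and feed them to the learned classifier.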
Procedia PDF Downloads 25319652 Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree
Authors: K. Bresilla, L. Manfrini, B. Morandi, A. Boini, G. Perulli, L. C. Grappadelli
Abstract:
Image/video processing for fruit in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches, even with high-end hardware, are computationally intensive and too slow for real-time systems. This paper details the use of deep convolutional neural networks (CNNs), specifically an algorithm (YOLO - You Only Look Once) with 24+2 convolution layers. Using deep-learning techniques eliminated the need to hard-code specific features for specific fruit shapes, colors, and/or other attributes. This CNN was trained on more than 5000 images of apple and pear fruits on a 960-core GPU (Graphical Processing Unit). The testing set showed an accuracy of 90%. The trained model was then transferred to an embedded device (Raspberry Pi gen. 3) with a camera for greater portability. Based on the correlation between the number of fruits visible or detected in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The processing and detection speed of the whole platform was higher than 40 frames per second, fast enough for any grasping/harvesting robotic arm or other real-time applications.Keywords: artificial intelligence, computer vision, deep learning, fruit recognition, harvesting robot, precision agriculture
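The correction model relating frame-level detections to the real per-tree count can be sketched as a simple least-squares line. The linear form and the counts below are assumptions for illustration; the abstract does not specify the model's functional form or data.

```python
# Hedged sketch: fit actual_count ~ a * detected_count + b by ordinary
# least squares, then correct a new frame's detection count.
# The counts are made up for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (slope, intercept)

detected = [40, 55, 70, 90]   # fruits the CNN saw in one frame
actual = [82, 112, 142, 182]  # fruits counted on the tree

a, b = fit_line(detected, actual)
estimate = a * 60 + b  # corrected count for a frame with 60 detections
```

Occluded fruit make the slope greater than one: each visible fruit stands in for roughly two on the tree in this toy data.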
Procedia PDF Downloads 42019651 A POX Controller Module to Prepare a List of Flow Header Information Extracted from SDN Traffic
Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin
Abstract:
Software Defined Networking (SDN) is a paradigm designed to facilitate dynamic and more agile control of the network. Network traffic is a set of flows, each of which contains a set of packets. In SDN, a matching process is performed on every packet coming into the network at the SDN switch. Only the headers of new packets are forwarded to the SDN controller. In terminology, the flow header fields are called tuples. Basically, these fields form a 5-tuple: the source and destination IP addresses, the source and destination ports, and the protocol number. This flow information is used to provide an overview of the network traffic. Our module is meant to extract this 5-tuple, together with the packet and flow counts, and present it as a list. This list can then serve as a first step toward detecting DDoS attacks; thus, the module can be considered the initial stage of any flow-based DDoS detection method.Keywords: matching, OpenFlow tables, POX controller, SDN, table-miss
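The module's core bookkeeping can be sketched as grouping packets by their 5-tuple and counting packets per flow. The packet records below are simplified stand-ins for headers forwarded to the POX controller on a table-miss; they are not drawn from the paper.

```python
# Hedged sketch: build the flow list from packet headers by grouping
# on the 5-tuple (src IP, dst IP, src port, dst port, protocol).
# The three sample packets are illustrative.
from collections import Counter

packets = [
    ("10.0.0.1", "10.0.0.2", 5050, 80, 6),   # TCP (protocol 6)
    ("10.0.0.1", "10.0.0.2", 5050, 80, 6),
    ("10.0.0.3", "10.0.0.2", 9000, 53, 17),  # UDP (protocol 17)
]

flows = Counter(packets)                      # 5-tuple -> packet count
flow_list = [(*tup, count) for tup, count in flows.items()]
num_flows, num_packets = len(flows), sum(flows.values())
```

Per-flow packet counts like these are exactly the kind of summary a flow-based DDoS detector would inspect for anomalies (e.g., many flows with one packet each).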
Procedia PDF Downloads 19919650 Transmission Line Protection Challenges under High Penetration of Renewable Energy Sources and Proposed Solutions: A Review
Authors: Melake Kuflom
Abstract:
European power networks involve the use of multiple overhead transmission lines to construct a highly duplicated system that delivers reliable and stable electrical energy to the distribution level. The transmission line protection applied in the existing GB transmission network normally comprises independent unit differential and time-stepped distance protection schemes, referred to as main-1 and main-2 respectively, with overcurrent protection as a backup. The increasing penetration of renewable energy sources, commonly referred to as "weak sources," into the power network has resulted in a decline in fault level. Traditionally, the fault level of the GB transmission network has been strong; hence the fault current contribution has been more than sufficient to ensure the correct operation of the protection schemes. However, numerous conventional coal and nuclear generators have been, or are about to be, shut down due to the societal requirement for CO2 emission reduction, and this has reduced the fault level on some transmission lines; adaptive transmission line protection is therefore required. Generally, greater utilization of renewable energy sources generated from wind or direct solar energy reduces CO2 emissions and can increase system security and reliability, but it lowers the fault level, which has an adverse effect on protection. Consequently, the effectiveness of conventional protection schemes under low fault levels needs to be reviewed, particularly for future GB transmission network operating scenarios. The paper will evaluate the transmission line protection challenges under high penetration of renewable energy sources and provide viable alternative protection solutions based on the problems observed. The paper will consider the assessment of renewable energy sources (RES) based on fully rated converter technology. The DIgSILENT PowerFactory software tool will be used to model the network.Keywords: fault level, protection schemes, relay settings, relay coordination, renewable energy sources
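The core of the challenge can be shown with a back-of-the-envelope calculation: a three-phase fault current is roughly the phase voltage over the source impedance, so as converter-dominated generation raises the effective source impedance, the fault current can fall below a fixed overcurrent pickup. All impedance and pickup values below are illustrative assumptions, not GB network settings.

```python
# Hedged sketch of why falling fault levels challenge overcurrent
# backup protection. I_f = V_phase / Z_source; weaker sources mean
# higher effective impedance and lower fault current. The 400 kV
# level, impedances, and 2000 A pickup are illustrative assumptions.

V_LL = 400e3        # line-to-line voltage, V (transmission level)
PICKUP_A = 2000.0   # assumed overcurrent relay pickup current, A

def fault_current(v_ll, z_source_ohms):
    # three-phase bolted fault: phase voltage over source impedance
    return (v_ll / 3 ** 0.5) / z_source_ohms

strong = fault_current(V_LL, 10.0)    # synchronous-generation-rich grid
weak = fault_current(V_LL, 120.0)     # converter-dominated "weak" grid

relay_operates = weak > PICKUP_A      # backup fails to pick up here
```

In this toy case the weak-grid fault current sits below the pickup, so the backup overcurrent element never operates, which is exactly the kind of scenario motivating adaptive settings.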
Procedia PDF Downloads 20619649 A Framework for Defining Innovation Districts: A Case Study of 22@ Barcelona
Authors: Arnault Morisson
Abstract:
Innovation districts are being implemented as urban regeneration strategies in cities as diverse as Barcelona (Spain), Boston (Massachusetts), Chattanooga (Tennessee), Detroit (Michigan), Medellin (Colombia), and Montréal (Canada). Little, however, is known about the concept. This paper aims to provide a framework for defining innovation districts. The research methodology is based on a qualitative approach using 22@ Barcelona as a case study. 22@ Barcelona was the first innovation district ever created and has been a model for the innovation districts of Medellin (Colombia) and Boston (Massachusetts), among others. Innovation districts based on the 22@ Barcelona model can be defined as top-down urban innovation ecosystems designed around four multilayered and multidimensional models of innovation: urban planning, productive, collaborative, and creative, all coordinated under strong leadership, with the ultimate objective of accelerating the innovation process and the competitiveness of a locality. Innovation districts aim to respond to a new economic paradigm in which economic production flows back to cities.Keywords: innovation ecosystem, governance, technology park, urban planning, urban policy, urban regeneration
Procedia PDF Downloads 37219648 Quality-Of-Service-Aware Green Bandwidth Allocation in Ethernet Passive Optical Network
Authors: Tzu-Yang Lin, Chuan-Ching Sue
Abstract:
Sleep mechanisms are commonly used to ensure the energy efficiency of each optical network unit (ONU) under a single-class delay constraint in the Ethernet Passive Optical Network (EPON). How long the ONUs can sleep without violating the delay constraint has become a research problem. In particular, we can derive an analytical model to determine the optimal sleep time of ONUs in every cycle without violating the maximum class delay constraint. Bandwidth allocation that considers this optimal sleep time is called Green Bandwidth Allocation (GBA). Although the GBA mechanism guarantees that the different class delay constraints do not violate the maximum class delay constraint, packets with a more relaxed delay constraint are treated as if they had the most stringent delay constraint and may be sent early. This means that the ONU wastes energy in active mode sending packets in advance that did not need to be sent at the current time. Accordingly, we propose a QoS-aware GBA using a novel intra-ONU scheduling scheme that controls which packets are sent according to their respective delay constraints, thereby enhancing energy efficiency without deteriorating delay performance. If packets are not explicitly classified but have different packet delay constraints, the intra-ONU scheduling can be modified to classify packets according to their packet delay constraints rather than their classes. Moreover, we propose a switchable ONU architecture in which the ONU switches its architecture according to the sleep time length, further improving energy efficiency in the QoS-aware GBA. The simulation results show that the QoS-aware GBA ensures that packets in different classes or with different delay constraints do not violate their respective delay constraints and consume less power than the original GBA.Keywords: Passive Optical Networks, PONs, Optical Network Unit, ONU, energy efficiency, delay constraint
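The contrast between plain GBA and the QoS-aware variant can be sketched with a toy sleep-budget model: suppose the ONU may sleep up to (delay bound - cycle time - wake-up overhead) per class. The formula and all timing numbers are illustrative assumptions made here, not the paper's analytical model.

```python
# Hedged sketch: toy per-class sleep budgets for an ONU. Plain GBA
# holds every class to the most stringent delay bound; the QoS-aware
# variant gives each class its own budget. Numbers are illustrative.

CYCLE_MS, WAKEUP_MS = 2.0, 1.0
delay_bounds_ms = {"voice": 5.0, "video": 20.0, "data": 100.0}

def max_sleep(bound_ms):
    # sleep budget left after one polling cycle plus wake-up overhead
    return max(0.0, bound_ms - CYCLE_MS - WAKEUP_MS)

# Plain GBA: all traffic treated as the most delay-sensitive class.
gba_sleep = max_sleep(min(delay_bounds_ms.values()))

# QoS-aware GBA: each class keeps its own sleep budget.
qos_sleep = {c: max_sleep(b) for c, b in delay_bounds_ms.items()}
```

In this toy setting, best-effort data traffic allows a far longer sleep than the voice-driven budget plain GBA would impose, which is the energy the QoS-aware scheme recovers.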
Procedia PDF Downloads 28419647 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model
Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho
Abstract:
Rainfall is a critical component of climate governing vegetation growth and production, forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, Kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 * 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 * 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models, on the Narok data set. 
The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.Keywords: non-stationary covariance function, gaussian process, ungulate biomass, MCMC, maasai mara ecosystem
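The Monte Carlo integration step used to aggregate monthly predictions into seasonal totals can be sketched as follows. The monthly means and standard errors are made up for illustration, and treating months as independent is an assumption of this sketch, not of the paper's hierarchical model.

```python
# Hedged sketch: Monte Carlo aggregation of monthly rainfall
# predictions (mean, standard error) into a seasonal total with its
# own standard error. Values are illustrative; independence between
# months is assumed here for simplicity.
import random
import statistics

random.seed(42)
monthly = [(120.0, 15.0), (90.0, 12.0), (60.0, 10.0)]  # (mean, se) in mm

draws = [
    sum(random.gauss(mu, se) for mu, se in monthly)
    for _ in range(20000)
]
season_mean = statistics.fmean(draws)
season_se = statistics.stdev(draws)
```

The seasonal mean is simply the sum of the monthly means, while the seasonal standard error reflects how the monthly uncertainties combine; with correlated months, the same machinery works as long as the draws respect the joint posterior.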
Procedia PDF Downloads 29419646 Respiratory Bioaerosol Dynamics: Impact of Salinity on Evaporation
Authors: Akhil Teja Kambhampati, Mark A. Hoffman
Abstract:
In the realm of infectious disease research, airborne viral transmission stands as a paramount concern due to its pivotal role in propagating pathogens within densely populated regions. However, amidst this landscape, the phenomenon of hygroscopic growth within respiratory bioaerosols remains relatively underexplored. Unlike pure water aerosols, the unique composition of respiratory bioaerosols leads to varied evaporation rates and hygroscopic growth patterns, influenced by factors such as ambient humidity, temperature, and airflow. This study addresses this gap by focusing on the behavior of a single respiratory bioaerosol, utilizing salinity to induce saliva-like hygroscopic behavior. By employing mass, momentum, and energy equations, the study unveils the intricate interplay between evaporation and hygroscopic growth over time. The numerical model enables temporal analysis of bioaerosol characteristics, including size, temperature, and trajectory. The analysis reveals that evaporation reduces the initial size, which shortens the lifetime and the distance traveled. However, once hygroscopic growth begins to influence the bioaerosol size, the rate of size reduction slows significantly. The interplay between evaporation and hygroscopic growth keeps the bioaerosol size within the inhalation range of humans and prolongs the traveling distance. The findings of this analysis are crucial for understanding the spread of infectious diseases, especially in high-risk environments such as healthcare facilities and public transportation systems.
By elucidating the nuanced behaviors of respiratory bioaerosols, this study seeks to inform the development of more effective preventative strategies against pathogen propagation in the air, thereby contributing to public health efforts on a global scale.Keywords: airborne viral transmission, high-risk environments, hygroscopic growth, evaporation, numerical modeling, pathogen propagation, preventative strategies, public health, respiratory bioaerosols
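The competing effects described above can be illustrated with a toy time-stepped model: pure evaporation follows a d-squared law, while dissolved salt sets an equilibrium size below which the droplet stops shrinking. The constants and the equilibrium-radius treatment are illustrative assumptions, not the study's full mass/momentum/energy model.

```python
# Hedged toy model: Euler stepping of droplet radius under a
# d-squared evaporation law, with a salt-determined equilibrium
# radius that halts shrinkage. All constants are illustrative.

K_EVAP = 2.0e-10   # m^2/s, assumed evaporation-rate constant
DT = 1.0e-3        # s, Euler time step

def evolve(r0_m, r_eq_m, steps):
    r = r0_m
    for _ in range(steps):
        if r <= r_eq_m:               # hygroscopic equilibrium reached
            return r_eq_m
        r_sq = r * r - K_EVAP * DT    # d-squared-law shrinkage per step
        r = max(r_sq, 0.0) ** 0.5
    return r

pure_water = evolve(10e-6, 0.0, 2000)  # shrinks away entirely
saline = evolve(10e-6, 4e-6, 2000)     # stabilises at ~4 micrometres
```

The saline droplet surviving at a finite size, rather than vanishing like the pure-water one, is the mechanism by which salinity keeps bioaerosols in the inhalable range and extends their travel distance.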
Procedia PDF Downloads 4019645 Artificial Neurons Based on Memristors for Spiking Neural Networks
Authors: Yan Yu, Wang Yu, Chen Xintong, Liu Yi, Zhang Yanzhong, Wang Yanji, Chen Xingyu, Zhang Miaocheng, Tong Yi
Abstract:
Neuromorphic computing based on spiking neural networks (SNNs) has emerged as a promising avenue for building the next generation of intelligent computing systems. Owing to their high-density integration, low power, and outstanding nonlinearity, memristors have attracted growing attention for realizing SNNs. However, fabricating a low-power and robust memristor-based spiking neuron without extra electrical components is still a challenge for brain-inspired systems. In this work, we demonstrate a TiO₂-based threshold switching (TS) memristor that emulates a leaky integrate-and-fire (LIF) neuron without auxiliary circuits, used to realize single-layer fully connected (FC) SNNs. Moreover, our TiO₂-based resistive switching (RS) memristors realize spike-timing-dependent plasticity (STDP), originating from the Ag diffusion-based filamentary mechanism. This work demonstrates that TiO₂-based memristors may provide an efficient method to construct hardware neuromorphic computing systems.Keywords: leaky integrate-and-fire, memristor, spiking neural networks, spike-timing-dependent plasticity
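The leaky integrate-and-fire behavior the threshold-switching memristor emulates can be sketched with a generic discrete-time LIF model. The leak factor, threshold, and input currents below are illustrative constants, not fitted to the device in the paper.

```python
# Hedged sketch of a generic leaky integrate-and-fire (LIF) neuron:
# the membrane potential leaks each step, integrates input, and
# resets on firing, mirroring threshold switching in the device.
# All constants are illustrative.

LEAK, THRESHOLD, V_RESET = 0.9, 1.0, 0.0

def lif_run(input_current, v0=0.0):
    v, spikes = v0, []
    for i in input_current:
        v = LEAK * v + i           # leaky integration of input
        if v >= THRESHOLD:         # threshold switching -> fire
            spikes.append(True)
            v = V_RESET            # membrane (device) resets
        else:
            spikes.append(False)
    return spikes

weak = lif_run([0.05] * 20)   # sub-threshold input: never fires
strong = lif_run([0.4] * 10)  # integrates up and fires repeatedly
```

The leak term is what makes sub-threshold inputs decay away instead of eventually accumulating to a spike, which is the "leaky" part of LIF.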
Procedia PDF Downloads 13419644 The Twin Terminal of Pedestrian Trajectory Based on City Intelligent Model (CIM) 4.0
Authors: Chen Xi, Liu Xuebing, Lao Xueru, Kuan Sinman, Jiang Yike, Wang Hanwei, Yang Xiaolang, Zhou Junjie, Xie Jinpeng
Abstract:
To further promote the development of smart cities, the microscopic "nerve endings" of the City Intelligent Model (CIM) are extended to be more sensitive. In this paper, we develop a pedestrian trajectory twin terminal based on CIM and CNN technology. It uses 5G networks, architectural and geoinformatics technologies, and convolutional neural networks, combined with deep learning-based human behavior recognition models, to provide empirical data such as pedestrian flow data and human behavioral characteristics data, and ultimately to form spatial performance evaluation criteria and a spatial performance warning system, making the empirical data accurate and intelligent for prediction and decision making.Keywords: urban planning, urban governance, CIM, artificial intelligence, sustainable development
Procedia PDF Downloads 41919643 A Cloud-Based Federated Identity Management in Europe
Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas
Abstract:
Currently, there is a so-called 'identity crisis' in cybersecurity, caused by the substantial security, privacy, and usability shortcomings of existing systems for identity management. Federated Identity Management (FIM) could be a solution to this crisis, as it facilitates the management of identity processes and policies among collaborating entities without enforcing a global consistency, which is difficult to achieve when legacy ID systems exist. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed in 2014 a federated solution in anticipation of the adoption of Regulation (EU) No 910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level so that every citizen recognized by a member state is recognized within the trust network at the European level, enabling the consumption of services in other member states that, until now, were not allowed or whose concession was tedious. This is a very ambitious approach, since it aims to enable cross-border authentication of member state citizens without the need to unify the authentication method (eID Scheme) of the member state in question. However, this federation is currently managed by member states, and it initially applies only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID in 5 cloud platforms belonging to authentication service providers of different EU member states acting as Service Providers (SP) for private entities. We propose an initiative based on a private eID Scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) is subscribed to an eIDAS Node Connector, requesting authentication, which in turn is subscribed to an eIDAS Node Proxy Service, issuing authentication assertions.
To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the identity federation nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance due to the replication of the eIDAS nodes and the load balancing mechanism. Second, our solution avoids the propagation of identity data outside the native domain of the user or entity being identified, which avoids well-known cybersecurity problems due to network interception, man-in-the-middle attacks, etc. Last, but not least, this system allows any country or collectivity to be connected easily, providing incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major stopper).Keywords: cybersecurity, identity federation, trust, user authentication
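The load-balancing idea over replicated eIDAS node instances can be sketched minimally as follows. Round-robin routing is an assumption made for illustration; the abstract does not specify the project's actual balancing policy, and the node names are hypothetical.

```python
# Hedged sketch: spread authentication requests across replicated
# eIDAS node instances via round-robin. The policy and node names
# are illustrative assumptions, not the project's implementation.
from itertools import cycle

replicas = ["eidas-node-1", "eidas-node-2", "eidas-node-3"]
next_node = cycle(replicas)  # endless round-robin iterator

def route(requests):
    # pair each incoming authentication request with the next replica
    return [(req, next(next_node)) for req in requests]

assignments = route([f"auth-{i}" for i in range(6)])
per_node = {n: sum(1 for _, m in assignments if m == n) for n in replicas}
```

With six requests and three replicas, each node handles exactly two, which is the even spread that keeps response times stable under load.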
Procedia PDF Downloads 16619642 Monitoring Memories by Using Brain Imaging
Authors: Deniz Erçelen, Özlem Selcuk Bozkurt
Abstract:
The course of daily human life calls for memories and for remembering the time and place of certain events. Recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together. This process is called "neural synchronization." Sadly, the hippocampus is known to deteriorate with age: proteins and hormones that repair and protect cells in the brain typically decline as an individual grows older. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off mild but may evolve into serious medical conditions such as dementia and Alzheimer's disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology to examine the brain and its neural pathways. For instance, Magnetic Resonance Imaging - or MRI - is used to collect detailed images of an individual's brain anatomy. To monitor and analyze brain function, a different version of this machine, called Functional Magnetic Resonance Imaging - or fMRI - is used. The fMRI is a neuroimaging procedure conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity: neurons need more oxygen when they are active, and the fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography - or EEG - is also a significant way to monitor the human brain.
The EEG is more versatile and cost-efficient than an fMRI. An EEG measures the electrical activity generated by the numerous cortical layers of the brain, allowing scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, which has made the inspection of neurons and neural pathways ever more intensive and detailed.Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons
Procedia PDF Downloads 8619641 Evaluating the Perception of Roma in Europe through Social Network Analysis
Authors: Giulia I. Pintea
Abstract:
The Roma people are a nomadic ethnic group native to India, and they are one of the most prevalent minorities in Europe. In the past, Roma were enslaved, and they were imprisoned in concentration camps during the Holocaust; today, Roma are subject to hate crimes and are denied access to healthcare, education, and proper housing. The aim of this project is to analyze how the public perception of the Roma people may be influenced by antiziganist and pro-Roma institutions in Europe. To carry out this project, we used social network analysis to build two large social networks: the antiziganist network, composed of institutions that oppress and racialize Roma, and the pro-Roma network, composed of institutions that advocate for and protect Roma rights. Measures of centrality, density, and modularity were obtained to determine which of the two social networks exerts the greater influence on the public's perception of Roma in European societies. Furthermore, data on hate crimes against Roma were gathered from the Organization for Security and Cooperation in Europe (OSCE). We analyzed the trends in hate crimes against Roma for several European countries over 2009-2015 to see whether there have been changes in the public's perception of Roma, thus helping us evaluate which of the two social networks has been more influential. Overall, the results suggest that there is a greater and faster exchange of information in the pro-Roma network. However, when the hate crimes are taken into account, the impact of the pro-Roma institutions is ambiguous, due to differing patterns among European countries, suggesting that the impact of the pro-Roma network is inconsistent.
Despite antiziganist institutions having a slower flow of information, the hate crime patterns also suggest that the antiziganist network has a higher impact on certain countries, which may be due to institutions outside the political sphere boosting the spread of antiziganist ideas and information to the European public.Keywords: applied mathematics, oppression, Roma people, social network analysis
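The network measures named above can be illustrated on two tiny undirected stand-in networks: a denser one (standing in for the pro-Roma network's faster information exchange) and a sparser one. The nodes and edges are illustrative, not the project's institutional data.

```python
# Hedged sketch: density and degree centrality on two tiny undirected
# stand-in networks. Edges are illustrative placeholders only.

def density(n_nodes, edges):
    # fraction of possible undirected edges that are present
    return 2 * len(edges) / (n_nodes * (n_nodes - 1))

def degree_centrality(nodes, edges):
    deg = {v: 0 for v in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    # normalise by the maximum possible degree (n - 1)
    return {v: d / (len(nodes) - 1) for v, d in deg.items()}

nodes = ["A", "B", "C", "D"]
dense_edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("C", "D")]
sparse_edges = [("A", "B"), ("C", "D")]

denser = density(4, dense_edges) > density(4, sparse_edges)
central = degree_centrality(nodes, dense_edges)
```

Higher density means shorter paths between institutions and hence faster information exchange, which is the property the project uses to compare the two networks' influence.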
Procedia PDF Downloads 277