Search results for: adaptive robust rbf neural network approximation
4916 Implementing Mindfulness into Wellness Plans: Assisting Individuals with Substance Abuse and Addiction
Authors: Michele M. Mahr
Abstract:
The purpose of this study is to educate, inform, and facilitate scholarly conversation and discussion regarding the implementation of mindfulness techniques when working with individuals with substance use disorder (SUD) or addictive behaviors in mental health. Mindfulness can be understood as present-moment, non-judgmental awareness, initiated by concentrated attention that is non-reactive and as openhearted as possible. Individuals with SUD or addiction are typically challenged by triggers, environmental situations, cravings, or social pressures that may deter them from remaining abstinent from their drug of choice or addictive behavior. Mindfulness is also recognized as one of the cognitive and behavioral treatment approaches and is both a physical and mental practice that enables individuals to become aware of internal situations and experiences with undivided attention. That said, mindfulness may be an effective strategy for individuals to employ during these experiences. This study will reveal how mental health practitioners and addiction counselors may find mindfulness to be an essential component of increasing wellness when working with individuals seeking mental health treatment. In this sense, mindfulness is simply the ability to know what is actually happening as it occurs and what one is experiencing in the moment. In the context of substance abuse and addiction, individuals may employ breathing techniques, meditation, and cognitive restructuring to become aware of present-moment experiences. Furthermore, the notion of mindfulness has been directly connected to the development of neural pathways; the creation of these pathways supports new thoughts, which in turn lead to new coping strategies and adaptive behaviors. Mindfulness strategies can assist individuals in connecting the mind with the body, allowing them to remain centered and focused. All of the above are vital components of recovery during substance abuse and addiction treatment. There are a variety of therapeutic modalities applying the key components of mindfulness, such as Mindfulness-Based Stress Reduction (MBSR) and Mindfulness-Based Cognitive Therapy for depression (MBCT). This study will provide an overview of both MBSR and MBCT in relation to treating individuals with substance abuse and addiction. The author will also provide strategies for readers to employ when working with clients. Lastly, the author will create and foster a safe space for discussion and engaging conversation among participants to ask questions, share perspectives, and be educated on the numerous benefits of mindfulness within wellness. Keywords: mindfulness, wellness, substance abuse, mental health
Procedia PDF Downloads 77
4915 Identification of Hub Genes in the Development of Atherosclerosis
Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that can obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms still remain unclear, although many studies revealed the dysfunction of endothelial cells, recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined the gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the Weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with blue, dark, Oliver green and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (modules morphology), a total of 2509 key genes (Gene Significance >0.2, module membership>0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting 2509 key genes, 102 DEGs with lipid-related genes from the Genecard database. The bio-functional analysis of six hub genes was estimated by a robust classifier with an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control group. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) being identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics
Procedia PDF Downloads 67
4914 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
Ranging from the field of health care to self-driving cars, machine learning and deep learning algorithms have revolutionized the field with the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the Machine Learning tasks that helped Machine Learning and Deep Learning models become state-of-the-art models for fields where images are key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, resulting in the development of models that utilize more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. One of the most promising models, well known for its ability to detect spatial hierarchies, is the CNN, and it serves as a core model in our study. On the other hand, the ViT also serves as a core model, reflecting a modern classification method that uses a self-attention mechanism, which makes it more robust since self-attention allows it to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques like k-means, which are used on embeddings derived from CNN models. DEC, a prominent model in the field of clustering, has gained the attention of many ML engineers because of its ability to combine feature learning and clustering into a single framework, and its main goal is to improve clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, by utilizing a probabilistic clustering method. Keywords: machine learning, deep learning, image classification, image clustering
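As a rough illustration of the clustering pipeline mentioned above (k-means applied to embeddings derived from a CNN), the following Python sketch uses an ImageNet-pretrained ResNet50 as an assumed feature extractor; the backbone choice, input size, and cluster count are illustrative assumptions rather than the study's configuration.

```python
# Minimal sketch: clustering images via k-means on CNN embeddings.
# Assumptions (not from the paper): ImageNet-pretrained ResNet50 backbone,
# 224x224 RGB inputs, and an arbitrary number of clusters.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.cluster import KMeans

def embed(images):
    """Map an array of images (N, 224, 224, 3) to pooled CNN embeddings."""
    backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
    return backbone.predict(preprocess_input(images.astype("float32")))

def cluster(images, n_clusters=10):
    features = embed(images)                      # (N, 2048) embeddings
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(features)               # cluster id per image

if __name__ == "__main__":
    dummy = np.random.rand(8, 224, 224, 3) * 255  # stand-in for a real dataset
    print(cluster(dummy, n_clusters=2))
```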
Procedia PDF Downloads 11
4913 Fuzzy Rules Based Improved BEENISH Protocol for Wireless Sensor Networks
Authors: Rishabh Sharma
Abstract:
The main design constraint of a WSN (wireless sensor network) is energy consumption. To address this constraint, hierarchical clustering is a technique that assists in extending the network lifetime by consuming energy efficiently. This paper focuses on WSNs and the FIS (fuzzy inference system) deployed to enhance the BEENISH protocol. Node energy, mobility, pause time and density are considered for the selection of the CH (cluster head). The simulation outcomes exhibited that the proposed system outperforms the traditional system with regard to energy utilization and the number of packets transmitted to the sink. Keywords: wireless sensor network, sink, sensor node, routing protocol, fuzzy rule, fuzzy inference system
Procedia PDF Downloads 106
4912 Time Series Analysis the Case of China and USA Trade Examining during Covid-19 Trade Enormity of Abnormal Pricing with the Exchange rate
Authors: Md. Mahadi Hasan Sany, Mumenunnessa Keya, Sharun Khushbu, Sheikh Abujar
Abstract:
Since the beginning of China's economic reform, trade between the U.S. and China has grown rapidly, and it has increased further since China's accession to the World Trade Organization in 2001. The U.S. imports more from China than it exports to China; the trade war between the two countries reduced the 2019 trade deficit, but in 2020 the opposite happened. Washington launched a full-scale trade war against China in March 2016, which was later compounded by a catastrophic epidemic. The main goal of our study is to measure and predict trade relations between China and the U.S. before and after the arrival of the COVID epidemic. A standard ML model uses different data as input but lacks the time dimension that is present in time series models and is only able to predict the future from previously observed data. The LSTM (a well-known Recurrent Neural Network) model is therefore applied as the best time series model for trade forecasting. We have been able to create a sustainable forecasting system for trade between China and the US by closely monitoring a dataset published by the state statistics website NZ Tatauranga Aotearoa from January 1, 2015, to April 30, 2021. Throughout the survey, we provide a 180-day forecast that outlines what would happen to trade between China and the US during COVID-19. In addition, we illustrate that the LSTM model provides outstanding outcomes in time series data analysis compared with RFR and SVR (both ML models). The study looks at how the current Covid outbreak affects China-US trade. As a comparative study, the RMSE is calculated for LSTM, RFR and SVR. From our time series analysis, it can be said that the LSTM model gives very favorable results regarding the future export situation in China-US trade. Keywords: RFR, China-U.S. trade war, SVR, LSTM, deep learning, Covid-19, export value, forecasting, time series analysis
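The sketch below shows one minimal way such an LSTM forecaster could be set up in Keras; the window length, layer sizes, and the synthetic series stand in for the study's actual data and hyperparameters.

```python
# Minimal LSTM forecasting sketch; lookback, layer width, and the random-walk
# series are illustrative assumptions, not the study's values.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, lookback=30):
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)   # (samples, lookback, 1)

series = np.cumsum(np.random.randn(1000))        # stand-in for monthly trade values
X, y = make_windows(series)

model = Sequential([
    LSTM(64, input_shape=(X.shape[1], 1)),
    Dense(1),                                    # next value of the series
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

rmse = float(np.sqrt(np.mean((model.predict(X, verbose=0).ravel() - y) ** 2)))
print(f"in-sample RMSE: {rmse:.3f}")             # compare against RFR / SVR baselines
```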
Procedia PDF Downloads 198
4911 Self-Organizing Maps for Credit Card Fraud Detection
Authors: ChunYi Peng, Wei Hsuan Cheng, Shyh Kuang Ueng
Abstract:
This study focuses on the application of self-organizing maps (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities. Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies
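A minimal sketch of SOM-based screening of transactions is given below using the open-source MiniSom library; the grid size, feature count, and anomaly threshold are assumptions for illustration, not the system described in the paper.

```python
# Illustrative SOM anomaly scoring for transactions with MiniSom.
# Grid size, features, and the 99th-percentile threshold are assumptions.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
transactions = rng.normal(size=(1000, 6))        # stand-in for scaled features

som = MiniSom(10, 10, input_len=6, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(transactions)
som.train_random(transactions, 5000)

# Quantization error per transaction: distance to its best-matching unit.
errors = np.array([np.linalg.norm(x - som.get_weights()[som.winner(x)])
                   for x in transactions])
threshold = np.quantile(errors, 0.99)            # flag the top 1% as suspicious
suspicious = np.where(errors > threshold)[0]
print(f"{len(suspicious)} transactions flagged for review")
```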
Procedia PDF Downloads 57
4910 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity throughout different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since acute lymphocytic leukemia is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 total images, 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be deemed as auxiliary features and lead to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher dimension feature maps to help improve the discriminative capability of intermediate features and also overcome the problem of network gradient vanishing or exploding. By comparing VGG19 and ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented in the conference.Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
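The following Keras sketch illustrates the general feature-fusion idea of combining pooled VGG19 and ResNet50 features before a small classification head; the input size, head width, and binary output are assumptions and not the authors' exact architecture.

```python
# Minimal sketch of hybrid feature fusion: pooled VGG19 and ResNet50 features
# are concatenated and passed to a small classifier head (transfer learning).
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import VGG19, ResNet50

inputs = Input(shape=(224, 224, 3))
vgg = VGG19(weights="imagenet", include_top=False, pooling="avg")
res = ResNet50(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = False                      # freeze both pretrained backbones
res.trainable = False

fused = layers.Concatenate()([vgg(inputs), res(inputs)])   # 512 + 2048 features
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)          # abnormal vs. normal cell

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```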
Procedia PDF Downloads 187
4909 Sign Language Recognition of Static Gestures Using Kinect™ and Convolutional Neural Networks
Authors: Rohit Semwal, Shivam Arora, Saurav, Sangita Roy
Abstract:
This work proposes a supervised framework with deep convolutional neural networks (CNNs) for vision-based sign language recognition of static gestures. Our approach addresses the acquisition and segmentation of correct inputs for the CNN-based classifier. Microsoft Kinect™ sensor, despite complex environmental conditions, can track hands efficiently. Skin Colour based segmentation is applied on cropped images of hands in different poses, used to depict different sign language gestures. The segmented hand images are used as an input for our classifier. The CNN classifier proposed in the paper is able to classify the input images with a high degree of accuracy. The system was trained and tested on 39 static sign language gestures, including 26 letters of the alphabet and 13 commonly used words. This paper includes a problem definition for building the proposed system, which acts as a sign language translator between deaf/mute and the rest of the society. It is then followed by a focus on reviewing existing knowledge in the area and work done by other researchers. It also describes the working principles behind different components of CNNs in brief. The architecture and system design specifications of the proposed system are discussed in the subsequent sections of the paper to give the reader a clear picture of the system in terms of the capability required. The design then gives the top-level details of how the proposed system meets the requirements.Keywords: sign language, CNN, HCI, segmentation
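A small OpenCV sketch of the skin-colour segmentation step is shown below; the YCrCb thresholds are common rules of thumb, assumed here for illustration rather than taken from the paper.

```python
# Illustrative skin-colour segmentation of a hand crop before CNN classification.
# The YCrCb thresholds and file names are assumptions, not the authors' values.
import cv2
import numpy as np

def segment_hand(bgr_image):
    """Return a binary skin mask and the masked hand image."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask, cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

if __name__ == "__main__":
    frame = cv2.imread("hand_crop.png")           # hypothetical Kinect hand crop
    if frame is not None:
        mask, hand = segment_hand(frame)
        cv2.imwrite("hand_mask.png", mask)        # fed to the CNN after resizing
```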
Procedia PDF Downloads 157
4908 The Effective Use of the Network in the Distributed Storage
Authors: Mamouni Mohammed Dhiya Eddine
Abstract:
This work aims at studying the exploitation of high-speed networks of clusters for distributed storage. Parallel applications running on clusters require both high-performance communications between nodes and efficient access to the storage system. Many studies on network technologies led to the design of dedicated architectures for clusters with very fast communications between computing nodes. Efficient distributed storage in clusters has been essentially developed by adding parallelization mechanisms so that the server(s) may sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we are addressing is: do high-speed networks of clusters fit the requirements of a transparent, efficient and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed networks of clusters were designed to optimize communications between different nodes of a parallel application. We study their utilization in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance NFS, PVFS or LUSTRE). Our experimental study based on the usage of the GM programming interface of MYRINET high-speed networks for distributed storage raised several interesting problems. Firstly, the specific memory utilization in the storage access system layers does not easily fit the traditional memory model of high-speed networks. Secondly, client-server models that are used for distributed storage have specific requirements on message control and event processing, which are not handled by existing interfaces. We propose different solutions to solve communication control problems at the filesystem level. We show that a modification of the network programming interface is required. Data transfer issues need an adaptation of the operating system. We detail several propositions for network programming interfaces which make their utilization easier in the context of distributed storage. The integration of a flexible processing of data transfer in the new programming interface MYRINET/MX is finally presented. Performance evaluations show that its usage in the context of both storage and other types of applications is easy and efficient.Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface
Procedia PDF Downloads 219
4907 A Hyperexponential Approximation to Finite-Time and Infinite-Time Ruin Probabilities of Compound Poisson Processes
Authors: Amir T. Payandeh Najafabadi
Abstract:
This article considers the problem of evaluating infinite-time (or finite-time) ruin probability under a given compound Poisson surplus process by approximating the claim size distribution by a finite mixture exponential, say Hyperexponential, distribution. It restates the infinite-time (or finite-time) ruin probability as a solvable ordinary differential equation (or a partial differential equation). Application of our findings has been given through a simulation study.Keywords: ruin probability, compound poisson processes, mixture exponential (hyperexponential) distribution, heavy-tailed distributions
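For orientation, the standard result for this setting can be written as follows; this is textbook background on the Pollaczek-Khinchine representation restated here with assumed notation, not the article's own derivation.

```latex
% Background sketch (standard result, not the paper's derivation): compound
% Poisson surplus process with premium rate c, claim intensity \lambda, and
% hyperexponential claim density f(x) = \sum_{i=1}^{n} p_i \beta_i e^{-\beta_i x}.
% The infinite-time ruin probability has the Pollaczek-Khinchine form
\psi(u) \;=\; (1-\rho)\sum_{k=1}^{\infty} \rho^{k}\,\bigl(1 - F_I^{*k}(u)\bigr),
\qquad \rho = \frac{\lambda\mu}{c}, \qquad \mu = \sum_{i=1}^{n}\frac{p_i}{\beta_i},
% where F_I is the integrated-tail (equilibrium) distribution of the claims.
% Since F_I is again hyperexponential with the same rates \beta_i, \psi(u)
% collapses to a finite mixture of exponentials,
\psi(u) \;=\; \sum_{j=1}^{n} C_j\, e^{-r_j u},
% which is consistent with restating the infinite-time problem as a solvable
% linear constant-coefficient ODE (and the finite-time problem as a PDE).
```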
Procedia PDF Downloads 341
4906 Cultural Regeneration and Social Impacts of Industrial Heritage Transformation: The Case of Westergasfabriek Cultural Park, the Netherlands
Authors: Hsin Hua He
Abstract:
The purpose of this study is to strengthen the social cohesion of the local community by injecting the cultural and creative concept into the industrial heritage transformation. The paradigms of industrial heritage research tend to explore from the perspective of space analysis, which concerned less about the cultural regeneration and the development of local culture. The paradigms of cultural quarter research use to from the perspective of creative economy and urban planning, concerned less about the social impacts and the interaction between residents and industrial sites. This research combines these two research areas of industrial heritage and cultural quarter, and focus on the social and cultural aspects. The transformation from the industrial heritage into a cultural park not only enhances the cultural capital and the quality of residents’ lives, but also preserves the unique local values. Internally it shapes the local identity, while externally establishes the image of the city. This paper uses Westergasfabriek Cultural Park in Amsterdam as the case study, through literature analysis, field work, and depth interview to explore how the cultural regeneration transforms industrial heritage. In terms of the planners’ and residents’ point of view adopt the theory of community participation, social capital, and sense of place to analyze the social impact of the industrial heritage transformation. The research finding is through cultural regeneration policies like holding cultural activities, building up public space, social network and public-private partnership, and adopting adaptive reuse to fulfil the people’s need and desire and reach the social cohesion. Finally, the study will examine the transformation of Taiwan's industrial heritage into cultural and creative quarters. The results are expected to use the operating experience of the Amsterdam cases and provide directions for Taiwan’s industrial heritage management to meet the cultural, social, economic symbiosis.Keywords: cultural regeneration, community participation, social capital, sense of place, industrial heritage transformation
Procedia PDF Downloads 504
4905 An Approach towards Smart Future: ICT Infrastructure Integrated into Urban Water Networks
Authors: Ahsan Ali, Mayank Ostwal, Nikhil Agarwal
Abstract:
According to a World Bank report, millions of people across the globe still do not have access to improved water services. With the uninterrupted growth of cities and urban populations, there is a mounting need to safeguard the sustainable expansion of cities. Efficient functioning of urban components and high living standards for residents need to be ensured. The water and sanitation network of an urban development is one of the most essential parts of its critical infrastructure. The growth in urban population leads to increased water demand, and thus the local water resources are severely strained. 'Smart water' refers to water and wastewater infrastructure that is able to manage the limited resources and the energy used to transport them. It enables the sustainable consumption of water resources through a coordinated water management system, by integrating Information and Communication Technology (ICT) solutions aimed at maximizing the socioeconomic benefits without compromising environmental values. This paper presents a case study from a medium-sized city in north-western Pakistan. Currently, water is being contaminated due to the proximity between water and sewer pipelines in the study area, leading to public health issues. Due to unsafe grey water infiltration, the scarce groundwater is also getting polluted. This research addresses the design of a smart urban water network by integrating ICT with the urban water network. The proximity between the existing water supply network and the sewage network is analyzed, and a design for a new water supply system is proposed. Real-time mapping of the existing urban utility networks will be produced with the help of GIS applications. The issue of grey water infiltration is addressed by providing sustainable solutions with the help of locally available materials, keeping in mind the economic condition of the area. To deal with the current growth of the urban population, it is vital to develop new water resources. Hence, distinctive and cost-effective procedures to harness rainwater are suggested as part of the research study. Keywords: GIS, smart water, sustainability, urban water management
Procedia PDF Downloads 217
4904 Copper Price Prediction Model for Various Economic Situations
Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin
Abstract:
Copper is an essential raw material used in the construction industry. During the year 2021 and the first half of 2022, the global market suffered from a significant fluctuation in copper raw material prices due to the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed its consumers to an unexpected financial risk. Thereto, this paper aims to develop two ANN-LSTM price prediction models, using Python, that can forecast the average monthly copper prices traded in the London Metal Exchange; the first model is a multivariate model that forecasts the copper price of the next 1-month and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical data of average monthly London Metal Exchange copper prices are collected from January 2009 till July 2022, and potential external factors are identified and employed in the multivariate model. These factors lie under three main categories: energy prices and economic indicators of the three major exporting countries of copper, depending on the data availability. Before developing the LSTM models, the collected external parameters are analyzed with respect to the copper prices using correlation and multicollinearity tests in R software; then, the parameters are further screened to select the parameters that influence the copper prices. Then, the two LSTM models are developed, and the dataset is divided into training, validation, and testing sets. The results show that the performance of the 3-Month prediction model is better than the 1-Month prediction model, but still, both models can act as predicting tools for diverse economic situations.Keywords: copper prices, prediction model, neural network, time series forecasting
Procedia PDF Downloads 113
4903 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental changes, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET), and their regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address the demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps of CO₂ flux to avoid the limitations of using a single algorithm, and therefore, provide less error and decline the uncertainties associated with the gap-filling process. In this study, the data of five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former methods, FFNN, provided the primary estimations in the first layer, while the later, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and the XGB, while each of these two methods was used individually, overall RMSE: 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹ respectively (3.54 provided by the best FFNN). The most significant improvement happened to the estimation of the extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which is generally considered as one of the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity compared to Calperum, where the differences between the Ensemble model on the one hand and the FFNNs and XGB, on the other hand, were the least of all 5 sites. Besides, the performance difference between the ensemble model and its components individually were more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) compared to the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis of plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and robustness of the model. Therefore, using ensemble machine learning models is potentially capable of improving data estimation and regression outcome when it seems to be no more room for improvement while using a single algorithm.Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
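A compact sketch of the two-layer idea (several feedforward networks feeding an XGBoost meta-learner) is given below in Python; the layer layouts, synthetic drivers, and in-sample fitting are simplifying assumptions, and in practice the meta-learner would be trained on out-of-fold first-layer predictions.

```python
# Minimal stacking sketch: five FFNNs provide first-layer predictions and
# XGBoost learns the final CO2-flux estimate from them. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                         # assumed met drivers
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=2000)  # flux proxy

layouts = [(32,), (64,), (32, 16), (64, 32), (128,)]   # five FFNN structures
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=500, random_state=i).fit(X, y)
         for i, h in enumerate(layouts)]

first_layer = np.column_stack([m.predict(X) for m in ffnns])  # (N, 5) predictions
# NOTE: fit on in-sample predictions only for brevity; use out-of-fold outputs
# in a real gap-filling workflow to avoid leakage.
meta = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1).fit(first_layer, y)

rmse = float(np.sqrt(np.mean((meta.predict(first_layer) - y) ** 2)))
print(f"ensemble in-sample RMSE: {rmse:.3f}")
```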
Procedia PDF Downloads 139
4902 Blockchain-Based Approach on Security Enhancement of Distributed System in Healthcare Sector
Authors: Loong Qing Zhe, Foo Jing Heng
Abstract:
A variety of data files are now available on the internet due to the advancement of technology across the globe today. As more and more data are being uploaded on the internet, people are becoming more concerned that their private data, particularly medical health records, are being compromised and sold to others for money. Hence, the accessibility and confidentiality of patients' medical records have to be protected through electronic means. Blockchain technology is introduced to offer patients security against adversaries or unauthorised parties. In the blockchain network, only authorised personnel or organisations that have been validated as nodes may share information and data. For any change within the network, including adding a new block or modifying existing information about the block, a majority of two-thirds of the vote is required to confirm its legitimacy. Additionally, a consortium permission blockchain will connect all the entities within the same community. Consequently, all medical data in the network can be safely shared with all authorised entities. Also, synchronization can be performed within the cloud since the data is real-time. This paper discusses an efficient method for storing and sharing electronic health records (EHRs). It also examines the framework of roles within the blockchain and proposes a new approach to maintain EHRs with keyword indexes to search for patients' medical records while ensuring data privacy.Keywords: healthcare sectors, distributed system, blockchain, electronic health records (EHR)
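The toy sketch below illustrates the two-thirds approval rule for admitting a new block of EHR references; node names, the vote structure, and the block layout are hypothetical simplifications rather than the proposed system's implementation.

```python
# Toy sketch of two-thirds consensus for adding an EHR-index block to a
# consortium chain. Node ids, votes, and block fields are hypothetical.
import hashlib
import json
import time

def make_block(prev_hash: str, ehr_index: dict) -> dict:
    body = {"prev": prev_hash, "time": time.time(), "ehr_index": ehr_index}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def two_thirds_approved(votes: dict) -> bool:
    """votes maps authorised node id -> True/False; require >= 2/3 approval."""
    approvals = sum(1 for v in votes.values() if v)
    return approvals * 3 >= 2 * len(votes)

chain = [make_block("0" * 64, {"genesis": True})]
candidate = make_block(chain[-1]["hash"],
                       {"patient_id": "P-001", "record_uri": "hypothetical-uri"})
votes = {"hospital_a": True, "hospital_b": True, "clinic_c": False}

if two_thirds_approved(votes):
    chain.append(candidate)
print(f"chain length: {len(chain)}")
```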
Procedia PDF Downloads 191
4901 Performance Analysis and Energy Consumption of Routing Protocol in MANET Using Grid Topology
Authors: Vivek Kumar Singh, Tripti Singh
Abstract:
An ad hoc wireless network consists of mobile hosts that create an underlying architecture for communication without the help of traditional fixed-position routers. Ad-hoc On-demand Distance Vector (AODV) is a routing protocol used for Mobile Ad hoc Networks (MANET). The architecture must nevertheless maintain communication routes even though the hosts are mobile and have limited transmission range. There are different protocols for handling routing in the mobile environment. Routing protocols used in fixed infrastructure networks cannot be efficiently used for mobile ad-hoc networks, so MANETs require different protocols. This paper presents a performance analysis of the routing protocol under various parameter patterns with the two-ray propagation model. Keywords: AODV, packet transmission rate, pause time, ZRP, QualNet 6.1
Procedia PDF Downloads 828
4900 Performance Analysis of Traffic Classification with Machine Learning
Authors: Htay Htay Yi, Zin May Aye
Abstract:
Network security plays a central role in the ICT environment because malicious users are continually growing across education, business, and other ICT-related realms. Network security violations are typically described and examined centrally through a security event management system. Firewalls, Intrusion Detection Systems (IDS), and Intrusion Prevention Systems are becoming essential to monitor or prevent potential violations, attack incidents, and imminent threats. In this system, firewall rules are set only where the system policies require them. The dataset deployed in this system is derived from the testbed environment. DoS and PortScan traffic is applied in the testbed with the firewall and IDS implementation. The network traffic is classified as normal or attack in the existing testbed environment based on six machine learning classification methods applied in the system. Testing is required to obtain datasets for DoS and PortScan. The dataset is based on CICIDS2017, and some features have been added. This system tested 26 features from the applied dataset. The system aims to reduce false positive rates and improve accuracy in the implemented testbed design. It also demonstrates good performance by selecting important features and comparing against an existing dataset using machine learning classifiers. Keywords: false negative rate, intrusion detection system, machine learning methods, performance
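A sketch of the kind of classifier comparison described above is given below with scikit-learn; the synthetic 26-feature matrix and the six model choices are assumptions standing in for the study's dataset and classifiers.

```python
# Sketch of comparing six classifiers on CICIDS2017-style flow features,
# reporting accuracy and false positive rate. Data and model choices are assumed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 26))                       # 26 flow features
y = (X[:, 0] + X[:, 3] > 1).astype(int)               # 1 = attack (DoS/PortScan)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"DT": DecisionTreeClassifier(), "RF": RandomForestClassifier(),
          "LR": LogisticRegression(max_iter=1000), "NB": GaussianNB(),
          "KNN": KNeighborsClassifier(), "SVM": LinearSVC()}

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    acc = (tp + tn) / len(y_te)
    fpr = fp / (fp + tn)                              # false positive rate
    print(f"{name}: accuracy={acc:.3f}, FPR={fpr:.3f}")
```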
Procedia PDF Downloads 118
4899 Revitalization Strategy of Beijing-Tianjin-Hebei Rural Areas Organized by Production-Living-Ecology Spatial Network at Township Level
Authors: Liuhui Zhu, Peng Zeng
Abstract:
The rural revitalization strategy means to take the country and the city on the same level, and achieve urban-rural integration and comprehensive development of rural areas. Beijing-Tianjin-Hebei rural areas have always been the weak links in the region, with prominently uneven development between urban and rural areas. The rural areas need to join the overall regional synergy. Based on the analysis of the characteristics and problems of rural development in the region from the perspective of production-living-ecology space, the paper proposes the township as the basic unit for rural revitalization according to the overall requirements of the rural revitalization strategy. The basic unit helps to realize resource arrangement, functional organization, and collaborative governance organized by the production-living-ecology spatial network. The paper summarizes the planning strategies for the basic unit. Through spatial cognition and spatial reconstruction, the three space is networked through the base, nodes, and connections to improve the comprehensive value of rural areas and achieve the multiple goals of rural revitalization.Keywords: rural revitalization, Beijing-Tianjin-Hebei region, township level, production-living-ecology spatial network
Procedia PDF Downloads 195
4898 Analysis of Causality between Defect Causes Using Association Rule Mining
Authors: Sangdeok Lee, Sangwon Han, Changtaek Hyun
Abstract:
Construction defects are major components that result in negative impacts on project performance including schedule delays and cost overruns. Since construction defects generally occur when a few associated causes combine, a thorough understanding of defect causality is required in order to more systematically prevent construction defects. To address this issue, this paper uses association rule mining (ARM) to quantify the causality between defect causes, and social network analysis (SNA) to find indirect causality among them. The suggested approach is validated with 350 defect instances from concrete works in 32 projects in Korea. The results show that the interrelationships revealed by the approach reflect the characteristics of the concrete task and the important causes that should be prevented.Keywords: causality, defect causes, social network analysis, association rule mining
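One possible implementation of the ARM-plus-network step is sketched below with mlxtend and networkx; the defect-cause records and thresholds are invented for illustration and do not reproduce the authors' data.

```python
# Sketch: mine association rules between defect causes, then build a directed
# cause network to look for indirect causality (the SNA step). Data are made up.
import pandas as pd
import networkx as nx
from mlxtend.frequent_patterns import apriori, association_rules

records = [{"poor_curing", "high_temp"}, {"poor_curing", "bad_mix"},
           {"bad_mix", "early_loading"}, {"poor_curing", "high_temp", "bad_mix"}]
causes = sorted(set().union(*records))
onehot = pd.DataFrame([{c: (c in r) for c in causes} for r in records])

itemsets = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)

G = nx.DiGraph()
for _, r in rules.iterrows():
    for a in r["antecedents"]:
        for b in r["consequents"]:
            G.add_edge(a, b, confidence=r["confidence"])
print(nx.betweenness_centrality(G))        # causes that mediate other causes
```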
Procedia PDF Downloads 367
4897 The Latency-Amplitude Binomial of Waves Resulting from the Application of Evoked Potentials for the Diagnosis of Dyscalculia
Authors: Maria Isabel Garcia-Planas, Maria Victoria Garcia-Camba
Abstract:
Recent advances in cognitive neuroscience have allowed a step forward in perceiving the processes involved in learning from the point of view of the acquisition of new information or the modification of existing mental content. The evoked potentials technique reveals how basic brain processes interact to achieve adequate and flexible behaviours. The objective of this work, using evoked potentials, is to study if it is possible to distinguish if a patient suffers a specific type of learning disorder to decide the possible therapies to follow. The methodology used, is the analysis of the dynamics of different areas of the brain during a cognitive activity to find the relationships between the different areas analyzed in order to better understand the functioning of neural networks. Also, the latest advances in neuroscience have revealed the existence of different brain activity in the learning process that can be highlighted through the use of non-invasive, innocuous, low-cost and easy-access techniques such as, among others, the evoked potentials that can help to detect early possible neuro-developmental difficulties for their subsequent assessment and cure. From the study of the amplitudes and latencies of the evoked potentials, it is possible to detect brain alterations in the learning process specifically in dyscalculia, to achieve specific corrective measures for the application of personalized psycho pedagogical plans that allow obtaining an optimal integral development of the affected people.Keywords: dyscalculia, neurodevelopment, evoked potentials, Learning disabilities, neural networks
Procedia PDF Downloads 140
4896 Performance Evaluation of Routing Protocols in Vehicular Adhoc Networks
Authors: Salman Naseer, Usman Zafar, Iqra Zafar
Abstract:
This study explores the implications of Vehicular Ad hoc Networks (VANET), one domain of Mobile Ad hoc Networks (MANET), in rural and urban scenarios. VANET provides wireless communication between vehicles and between vehicles and roadside units. The Federal Communications Commission of the United States of America has allocated 75 MHz of spectrum in the 5.9 GHz frequency range for dedicated short-range communications (DSRC), which is specifically designed to enhance road safety applications and entertainment/information applications. Several vehicular-related projects, viz. California PATH, the CAR 2 CAR Communication Consortium, ETSI, and the IEEE 1609 working group, have already been conducted to improve overall road safety or traffic management. After a critical literature review, the selection of routing protocols was determined, and their performance was examined in urban and rural scenarios. Numerous VANET routing protocols were applied to carry out the current research, and their evaluation was conducted with the selected protocols through simulation using the performance metrics of throughput and packet drop. Excel and Google graph API tools were used to plot the graphs from the simulation results in order to compare the selected routing protocols with each other. In addition, the sum of the output from each scenario was computed to clearly present the divergence in results. The findings of the current study show that DSR gives enhanced performance, with low packet drop and high throughput, compared to AODV and DSDV in congested urban areas and in rural environments. On the other hand, in low-density areas, AODV gives better results than DSR. The worth of the current study may be judged by the fact that the information exchanged between vehicles is useful for comfort, safety, and entertainment. Furthermore, the communication system performance depends on the way routing is done in the network, and the routing of data is based on the protocols implemented in the network. The results presented above lead to policy implications and develop our understanding of the broader spectrum of VANET. Keywords: AODV, DSDV, DSR, Adhoc network
Procedia PDF Downloads 286
4895 Cooperative Communication of Energy Harvesting Synchronized-OOK IR-UWB Based Tags
Authors: M. A. Mulatu, L. C. Chang, Y. S. Han
Abstract:
Energy harvesting tags with cooperative communication capabilities are emerging as a possible infrastructure for internet of things (IoT) applications. This paper studies a cooperative transmission strategy for a network of energy harvesting active networked tags (EnHANTs) that adapts to the available energy resources and identification requests. We consider a network of EnHANT-equipped objects that communicate with the destination either directly or by cooperating with neighboring objects. We formulate the problem as a Markov decision process (MDP) under the synchronized On/Off keying (S-OOK) pulse modulation format. Simulation results are provided to show the performance of the cooperative transmission policy, compared against the greedy and conservative policies of single-link transmission. Keywords: cooperative communication, transmission strategy, energy harvesting, Markov decision process, value iteration
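The following value-iteration sketch illustrates how such an MDP transmission policy could be computed; the two-state battery model, rewards, and transition probabilities are toy assumptions rather than the EnHANT parameters of the paper.

```python
# Toy value-iteration sketch for an energy-harvesting transmission policy.
# States, actions, rewards, and transitions are illustrative assumptions.
import numpy as np

states = ["low_energy", "high_energy"]
actions = ["stay_silent", "transmit_direct", "relay_for_neighbor"]

# P[a][s, s'] : probability of moving from state s to s' under action a.
P = {
    "stay_silent":        np.array([[0.6, 0.4], [0.2, 0.8]]),   # harvesting recharges
    "transmit_direct":    np.array([[0.9, 0.1], [0.5, 0.5]]),   # drains the battery
    "relay_for_neighbor": np.array([[0.95, 0.05], [0.6, 0.4]]),
}
R = {"stay_silent": np.array([0.0, 0.0]),
     "transmit_direct": np.array([0.5, 1.0]),
     "relay_for_neighbor": np.array([0.7, 1.5])}   # cooperation rewarded more

gamma, V = 0.95, np.zeros(len(states))
for _ in range(500):                                # value iteration
    V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)
policy = [actions[i] for i in
          np.argmax([R[a] + gamma * P[a] @ V for a in actions], axis=0)]
print(dict(zip(states, policy)))
```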
Procedia PDF Downloads 492
4894 Self-Organizing Maps for Credit Card Fraud Detection and Visualization
Authors: Peng Chun-Yi, Chen Wei-Hsuan, Ueng Shyh-Kuang
Abstract:
This study focuses on the application of self-organizing maps (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. Som, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies
Procedia PDF Downloads 59
4893 Artificial Intelligence in Penetration Testing of a Connected and Autonomous Vehicle Network
Authors: Phillip Garrad, Saritha Unnikrishnan
Abstract:
The recent popularity of connected and autonomous vehicles (CAV) corresponds with an increase in the risk of cyber-attacks. These cyber-attacks have been instigated both by researchers, or white-hat hackers, and by cyber-criminals. As connected vehicles move towards full autonomy, the impact of these cyber-attacks also grows. The current research details challenges faced in cybersecurity testing of CAV, including the access and cost of a representative test setup. Another challenge is the lack of experts in the field. Possible solutions to how these challenges can be overcome are reviewed and discussed. From these findings, a software-simulated CAV network is established as a cost-effective representative testbed. Penetration tests are then performed on this simulation, demonstrating a cyber-attack on a CAV. Studies have shown Artificial Intelligence (AI) to improve runtime, increase efficiency and comprehensively cover all the typical test aspects in penetration testing in other industries. There is an attempt to introduce similar AI models to the software simulation. The expectation from this implementation is to see similar improvements in runtime and efficiency for the CAV model. If proven to be an effective means of penetration testing for CAV, this methodology may be used on a full CAV test network. Keywords: cybersecurity, connected vehicles, software simulation, artificial intelligence, penetration testing
Procedia PDF Downloads 110
4892 Uncovering the Complex Structure of Building Design Process Based on Royal Institute of British Architects Plan of Work
Authors: Fawaz A. Binsarra, Halim Boussabaine
Abstract:
The notion of complexity science has been attracting the interest of researchers and professionals due to the need of enhancing the efficiency of understanding complex systems dynamic and structure of interactions. In addition, complexity analysis has been used as an approach to investigate complex systems that contains a large number of components interacts with each other to accomplish specific outcomes and emerges specific behavior. The design process is considered as a complex action that involves large number interacted components, which are ranked as design tasks, design team, and the components of the design process. Those three main aspects of the building design process consist of several components that interact with each other as a dynamic system with complex information flow. In this paper, the goal is to uncover the complex structure of information interactions in building design process. The Investigating of Royal Institute of British Architects Plan Of Work 2013 information interactions as a case study to uncover the structure and building design process complexity using network analysis software to model the information interaction will significantly enhance the efficiency of the building design process outcomes.Keywords: complexity, process, building desgin, Riba, design complexity, network, network analysis
Procedia PDF Downloads 527
4891 Community Empowerment: The Contribution of Network Urbanism on Urban Poverty Reduction
Authors: Lucia Antonela Mitidieri
Abstract:
This research analyzes the application of a model of settlements management based on networks of territorial integration that advocates planning as a cyclical and participatory process that engages early on with civic society, the private sector and the state. Through qualitative methods such as participant observation, interviews with snowball technique and an active research on territories, concrete results of community empowerment are obtained from the promotion of productive enterprises and community spaces of encounter and exchange. Studying the cultural and organizational dimensions of empowerment allows building indicators such as increase of capacities or community cohesion that can lead to support local governments in achieving sustainable urban development for a reduction of urban poverty.Keywords: community spaces, empowerment, network urbanism, participatory process
Procedia PDF Downloads 331
4890 Finding Viable Pollution Routes in an Urban Network under a Predefined Cost
Authors: Dimitra Alexiou, Stefanos Katsavounis, Ria Kalfakakou
Abstract:
In an urban area the determination of transportation routes should be planned so as to minimize the provoked pollution taking into account the cost of such routes. In the sequel these routes are cited as pollution routes. The transportation network is expressed by a weighted graph G= (V, E, D, P) where every vertex represents a location to be served and E contains unordered pairs (edges) of elements in V that indicate a simple road. The distances/cost and a weight that depict the provoked air pollution by a vehicle transition at every road are assigned to each road as well. These are the items of set D and P respectively. Furthermore the investigated pollution routes must not exceed predefined corresponding values concerning the route cost and the route pollution level during the vehicle transition. In this paper we present an algorithm that generates such routes in order that the decision maker selects the most appropriate one.Keywords: bi-criteria, pollution, shortest paths, computation
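One way to generate candidate pollution routes under the two predefined bounds is sketched below with networkx; the small graph, weights, and bounds are illustrative assumptions, not the study's algorithm or data.

```python
# Sketch: enumerate simple paths in order of increasing distance and keep those
# within both the cost and pollution bounds. Graph and limits are assumed.
import networkx as nx

G = nx.Graph()
edges = [("A", "B", 4, 2.0), ("B", "C", 3, 1.5), ("A", "C", 9, 1.0),
         ("B", "D", 5, 3.0), ("C", "D", 2, 0.5)]
for u, v, dist, poll in edges:
    G.add_edge(u, v, distance=dist, pollution=poll)   # sets D and P of the model

def pollution_routes(G, source, target, max_cost, max_pollution, k=20):
    """Yield paths ordered by distance that satisfy both bounds."""
    for i, path in enumerate(nx.shortest_simple_paths(G, source, target, weight="distance")):
        cost = sum(G[u][v]["distance"] for u, v in zip(path, path[1:]))
        poll = sum(G[u][v]["pollution"] for u, v in zip(path, path[1:]))
        if cost <= max_cost and poll <= max_pollution:
            yield path, cost, poll
        if i + 1 >= k or cost > max_cost:              # distances are non-decreasing
            break

for route in pollution_routes(G, "A", "D", max_cost=10, max_pollution=4.5):
    print(route)                                       # candidate for the decision maker
```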
Procedia PDF Downloads 374
4889 Exploring Tree Growth Variables Influencing Carbon Sequestration in the Face of Climate Change
Authors: Funmilayo Sarah Eguakun, Peter Oluremi Adesoye
Abstract:
One of the major problems being faced by human society is that the global temperature is believed to be rising due to human activity that releases carbon (IV) oxide (CO2) into the atmosphere. Carbon (IV) oxide is the most important greenhouse gas influencing global warming and possible climate change. With climate change becoming alarming, reducing CO2 in our atmosphere has become a primary goal of international efforts. Forest lands are a major sink and could absorb large quantities of carbon if the trees are judiciously managed. The study aims at estimating the carbon sequestration capacity of Pinus caribaea (pine) and Tectona grandis (teak) under the prevailing environmental conditions and exploring the tree growth variables that influence carbon sequestration capacity in Omo Forest Reserve, Ogun State, Nigeria. Improving forest management by manipulating growth characteristics that influence carbon sequestration could be an adaptive strategy of forestry to climate change. Random sampling was used to select Temporary Sample Plots (TSPs) in the study area, within which a complete enumeration of growth variables was carried out. The data collected were subjected to descriptive and correlational analyses. The results showed that the average carbon stored by pine and teak is 994.4±188.3 kg and 1350.7±180.6 kg, respectively. The difference in carbon stored between the species is significant enough to make the choice of species relevant in a climate change adaptation strategy. Tree growth variables influence the capacity of a tree to sequester carbon. Height, diameter, volume, wood density and age are positively correlated with carbon sequestration. These tree growth variables could be manipulated by the forest manager as an adaptive strategy for climate change, while plantations of high wood density species could be a relevant management strategy to increase carbon storage. Keywords: adaptation, carbon sequestration, climate change, growth variables, wood density
Procedia PDF Downloads 379
4888 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across the faces with respect to race, pose, lighting, facial biases, etc. in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting neutral state at an early stage, thereby bypassing those frames from emotion classification would save the computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to the traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model, constructed by a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motions by accounting for affine distortions based on a textural statistical model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point using the prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves ER accuracy and simultaneously reduces the computational complexity of ER system, as validated on multiple databases.Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model
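A brief sketch of the Local Binary Pattern histogram comparison around a key emotion (KE) point is given below; the patch size, LBP parameters, and chi-square similarity are assumptions chosen for illustration, not the authors' exact textural statistical model.

```python
# Illustrative LBP-histogram comparison between a neutral reference patch and
# an incoming patch around a KE point. Parameters are assumed for demonstration.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_patch, points=8, radius=1):
    """Uniform LBP histogram of one patch around a key emotion (KE) point."""
    lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
    n_bins = points + 2                              # uniform patterns + "other"
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def patch_similarity(reference_patch, current_patch):
    """Chi-square distance between LBP histograms (small = close to neutral)."""
    h1, h2 = lbp_histogram(reference_patch), lbp_histogram(current_patch)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-10))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (32, 32)).astype(np.uint8)   # neutral reference patch
    cur = rng.integers(0, 256, (32, 32)).astype(np.uint8)   # incoming frame patch
    print(f"chi-square distance: {patch_similarity(ref, cur):.4f}")
```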
Procedia PDF Downloads 338
4887 Analysis of Travel Behavior Patterns of Frequent Passengers after the Section Shutdown of Urban Rail Transit - Taking the Huaqiao Section of Shanghai Metro Line 11 Shutdown During the COVID-19 Epidemic as an Example
Authors: Hongyun Li, Zhibin Jiang
Abstract:
The travel of passengers in the urban rail transit network is influenced by changes in network structure and operational status, and the response of individual travel preferences to these changes also varies. Firstly, the influence of the suspension of urban rail transit line sections on passenger travel along the line is analyzed. Secondly, passenger travel trajectories containing multi-dimensional semantics are described based on network UD data. Next, passenger panel data based on spatio-temporal sequences is constructed to achieve frequent passenger clustering. Then, the Graph Convolutional Network (GCN) is used to model and identify the changes in travel modes of different types of frequent passengers. Finally, taking Shanghai Metro Line 11 as an example, the travel behavior patterns of frequent passengers after the Huaqiao section shutdown during the COVID-19 epidemic are analyzed. The results showed that after the section shutdown, most passengers would transfer to the nearest Anting station for boarding, while some passengers would transfer to other stations for boarding or cancel their trips altogether. Among the passengers who transferred to Anting station, most maintained their original normalized travel mode, a small number waited a few days before transferring to Anting station, and only a few passengers stopped traveling from Anting station or transferred to other stations after boarding at Anting station for a few days. The results can provide a basis for understanding urban rail transit passenger travel patterns and improving the accuracy of passenger flow prediction in abnormal operation scenarios. Keywords: urban rail transit, section shutdown, frequent passenger, travel behavior pattern
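As a rough illustration of the GCN step, the numpy sketch below applies the standard symmetric-normalized propagation rule to a small passenger-similarity graph; the graph, feature dimensions, and two-layer design are assumptions, not the study's model.

```python
# Compact numpy sketch of GCN propagation over a passenger-similarity graph.
# Adjacency, features, and layer sizes are illustrative assumptions.
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)
n_passengers, n_features = 6, 8
A = (rng.random((n_passengers, n_passengers)) > 0.6).astype(float)
A = np.maximum(A, A.T)                             # undirected similarity graph
X = rng.normal(size=(n_passengers, n_features))    # spatio-temporal travel features

H1 = gcn_layer(A, X, rng.normal(size=(n_features, 16)) * 0.1)
H2 = gcn_layer(A, H1, rng.normal(size=(16, 4)) * 0.1)   # low-dimensional embeddings
print(H2.round(3))                                 # inputs to pattern identification
```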
Procedia PDF Downloads 84