Search results for: RBF neural network modelling

4331 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to enough data in biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced, statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms, and we found the Random Forest algorithm to be a better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties. We expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
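As a rough sketch of the descriptor-generation and Random Forest steps described above (not the authors' actual pipeline or 150,000-compound library), the snippet below featurizes SMILES strings with a handful of RDKit descriptors and fits a scikit-learn Random Forest; the molecules, labels and descriptor choice are illustrative placeholders.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    """Turn a SMILES string into a small numeric descriptor vector."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol), Descriptors.NumRotatableBonds(mol)]

# toy data: (SMILES, active/inactive label) -- placeholders, not the authors' library
data = [("CCO", 0), ("c1ccccc1O", 1), ("CC(=O)Oc1ccccc1C(=O)O", 1), ("CCN(CC)CC", 0)]
X = np.array([featurize(s) for s, _ in data])
y = np.array([label for _, label in data])

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X, y)
print(model.predict_proba([featurize("CCOC(=O)C")]))   # score a novel compound
```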

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 348
4330 Design of Robust and Intelligent Controller for Active Removal of Space Debris

Authors: Shabadini Sampath, Jinglang Feng

Abstract:

With huge kinetic energy, space debris poses a major threat to astronauts’ space activities and spacecraft in orbit if a collision happens. The active removal of space debris is required in order to avoid the frequent collisions that would otherwise occur. In addition, the amount of space debris will increase uncontrollably, posing a threat to the safety of the entire space system. However, the safe and reliable removal of large-scale space debris has been a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. However, due to uncertainties and unknown disturbances, coordinating the control of the space manipulator is difficult. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding the control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC) is an effective motion control strategy that is used in this paper for computing the space manipulator arm torque needed to generate the required motion. Based on the Lyapunov stability theorem, the proposed intelligent controller NNASMC and the CTC guarantee the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink. The results are presented to prove the effectiveness of the proposed controller approach.

Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink

Procedia PDF Downloads 121
4329 6D Posture Estimation of Road Vehicles from Color Images

Authors: Yoshimoto Kurihara, Tad Gonsalves

Abstract:

Currently, in the field of object pose estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object in advance in a computer and matching the observation against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks – a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the accuracy of the classification was about 87.3%, and that of the regression was about 98.9%.
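The two-network design can be illustrated with a minimal PyTorch sketch: one small CNN for object classification and another for regressing six pose values; the layer sizes and class count are assumptions for illustration, not the authors' AlexNet-based models.

```python
import torch
import torch.nn as nn

def small_cnn(out_dim):
    """Tiny CNN stand-in for the AlexNet-style feature extractor used in the paper."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim))

num_classes = 10                      # assumed number of object classes
cls_net = small_cnn(num_classes)      # classification network: which object is in view
reg_net = small_cnn(6)                # regression network: x, y, z plus three rotation angles

img = torch.randn(1, 3, 224, 224)     # a single RGB image
logits, pose = cls_net(img), reg_net(img)
loss = (nn.CrossEntropyLoss()(logits, torch.tensor([3])) +
        nn.MSELoss()(pose, torch.zeros(1, 6)))   # separate targets during training
```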

Keywords: 6D posture estimation, image recognition, deep learning, AlexNet

Procedia PDF Downloads 144
4328 Strategic Planning in South African Higher Education

Authors: Noxolo Mafu

Abstract:

This study presents an overview of strategic planning in South African higher education institutions by tracing its trends and mystique in order to identify its impact. Over the democratic decades, strategic planning has become integral to institutional survival. It has been used as a potent tool by several institutions to catch up with and surpass their counterparts. While planning has always been part of higher education, strategic planning should be considered different. Strategic planning is primarily about the development and maintenance of a strategic fit between an institution and its dynamic opportunities. This presupposes the existence of a set of stages that institutions pursue, which can be used to assess the impact of strategic planning in an institution. Network theory guides the study in demystifying the apparent organisational networks in strategic planning processes.

Keywords: network theory, strategy, planning, strategic planning, assessment, impact

Procedia PDF Downloads 545
4327 The Effects of Street Network Layout on Walking to School

Authors: Ayse Ozbil, Gorsev Argin, Demet Yesiltepe

Abstract:

Data for this cross-sectional study were drawn from questionnaires conducted in 10 elementary schools (1000 students, ages 12-14) located in Istanbul, Turkey. School environments (1600 meter buffers around the schools) were evaluated through GIS-based land-use data (parcel-level land use density) and street-level topography. Street networks within the same buffers were evaluated by using angular segment analysis (Integration and Choice) implemented in Depthmap, as well as two segment-based connectivity measures, namely Metric and Directional Reach, implemented in GIS. Segment Angular Integration measures how accessible each space is from all the others within the radius using the least-angle measure of distance. Segment Angular Choice measures how many times a space is selected on journeys between all pairs of origins and destinations. Metric Reach captures the density of streets and street connections accessible from each individual road segment. Directional Reach measures the extent to which the entire street network is accessible with few direction changes. In addition, socio-economic characteristics (annual income, car ownership, education level) of parents, obtained from parental questionnaires, were also included in the analysis. It is shown that the surrounding street network configuration is strongly associated with both walk-mode shares and average walking distances to/from schools when controlling for parental socio-demographic attributes as well as land-use compositions and topographic features in school environments. More specifically, the findings suggest that the scale at which urban form has an impact on pedestrian travel is considerably larger than a few blocks around the school.

Keywords: Istanbul, street network layout, urban form, walking to/from school

Procedia PDF Downloads 397
4326 Simulation and Experimental Study on Dual Dense Medium Fluidization Features of Air Dense Medium Fluidized Bed

Authors: Cheng Sheng, Yuemin Zhao, Chenlong Duan

Abstract:

The air dense medium fluidized bed is a typical application of fluidization techniques for coal particle separation in arid areas, where it is costly to implement wet coal preparation technologies. In the last three decades, the air dense medium fluidized bed, as an efficient dry coal separation technique, has been studied in many aspects, including energy and mass transfer, hydrodynamics, bubbling behaviors, etc. Although numerous studies have been published, the fluidization features, and especially the dual dense medium fluidization features, have rarely been reported. In dual dense medium fluidized beds, different combinations of dense mediums play a significant role in fluidization quality variation, thus influencing coal separation efficiency. Moreover, to what extent different dense mediums mix, and to what extent the two-component particulate mixture affects the fluidization performance and quality, have remained open questions. The proposed work attempts to reveal the underlying mechanisms of generation and evolution of the two-component particulate mixture in the fluidization process. Based on computational fluid dynamics methods and discrete particle modelling, the movement and evolution of dual dense mediums in an air dense medium fluidized bed have been simulated. Dual dense medium fluidization experiments have been conducted, and electrical capacitance tomography was employed to investigate the distribution of the two-component mixture in the experiments. The underlying mechanisms involved in two-component particulate fluidization are expected to be demonstrated through the analysis and comparison of the simulation and experimental results.

Keywords: air dense medium fluidized bed, particle separation, computational fluid dynamics, discrete particle modelling

Procedia PDF Downloads 371
4325 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction

Authors: Lucas Peries, Rolla Monib

Abstract:

The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has been struggling with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry. These methods offer time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and develop BIM-based solutions to alleviate or eliminate these hindrances. The research objectives include identifying and analysing key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves conducting a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire will be used to collect primary data from construction industry professionals, allowing for feedback and evaluation of the proposed BIM-based solutions. The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the identified issues from the literature review align with the opinions of industry professionals, and the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of modular and prefabricated building systems' limitations and proposes BIM-based solutions to overcome these limitations. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.

Keywords: building information modelling, modularisation, prefabrication, technology

Procedia PDF Downloads 86
4324 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 60
4323 Dynamic Economic Load Dispatch Using Quadratic Programming: Application to Algerian Electrical Network

Authors: A. Graa, I. Ziane, F. Benhamida, S. Souag

Abstract:

This paper presents a comparative study of an efficient and reliable quadratic programming (QP) approach to solve the economic load dispatch (ELD) problem while considering transmission losses in a power system. The proposed QP method takes care of the different unit and system constraints to find the optimal solution. To validate the effectiveness of the proposed QP solution, simulations have been performed using the Algerian test system. Results obtained with the QP method have been compared with other existing relevant approaches available in the literature. Experimental results show the superiority of the QP method over other existing techniques in terms of robustness and optimal search.
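For orientation, the sketch below solves a lossless three-unit dispatch with quadratic fuel costs as a constrained optimization in SciPy; the cost coefficients, unit limits and demand are illustrative and do not represent the Algerian system data or the paper's loss model.

```python
import numpy as np
from scipy.optimize import minimize

# assumed 3-unit cost data: F_i(P) = a_i + b_i*P_i + c_i*P_i^2  ($/h, P in MW)
a = np.array([200.0, 180.0, 140.0])
b = np.array([7.0, 7.85, 7.97])
c = np.array([0.008, 0.0062, 0.0075])
Pmin = np.array([100.0, 50.0, 50.0])
Pmax = np.array([600.0, 400.0, 200.0])
demand = 850.0                                   # MW; transmission losses neglected here

cost = lambda P: np.sum(a + b * P + c * P**2)    # total fuel cost
cons = [{"type": "eq", "fun": lambda P: np.sum(P) - demand}]   # power balance
bounds = list(zip(Pmin, Pmax))                   # generator limits

res = minimize(cost, x0=np.full(3, demand / 3), bounds=bounds, constraints=cons)
print("dispatch (MW):", np.round(res.x, 1), " cost ($/h):", round(float(res.fun), 1))
```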

Keywords: economic dispatch, quadratic programming, Algerian network, dynamic load

Procedia PDF Downloads 554
4322 Conventional Four Steps Travel Demand Modeling for Kabul New City

Authors: Ahmad Mansoor Stanikzai, Yoshitaka Kajita

Abstract:

This research is essential for the transportation planning of Kabul New City. In this research, the travel demand of the Kabul metropolitan area (the existing city and Kabul New City) is evaluated for three different target years (2015, current; 2025, mid-term; 2040, long-term). The outcome of this study indicates that, though the vehicle volume is currently less than the capacity of the existing road network, Kabul city suffers from daily traffic congestion. This is mainly due to the lack of transportation management, the absence of proper policies, an improper public transportation system, and the violation of traffic rules and regulations by inhabitants. On the other hand, the observed results indicate that the current volume-to-capacity ratio (VCR), which is the most widely used index to judge traffic status in a city, is around 0.79. This indicates the inappropriate traffic condition of the city. Moreover, with the growth of population by the mid-term (2025) and long-term (2040), and in the case of no development in the road network and transportation system, the VCR value will dramatically increase to 1.40 (2025) and 2.5 (2040). This can be a critical situation for an urban area from an urban transportation perspective. Thus, by introducing a high-capacity public transportation system, developing the road network in Kabul New City, and integrating these links with the existing city road network, significant improvements were observed in the value of VCR.

Keywords: Afghanistan, Kabul new city, planning, policy, urban transportation

Procedia PDF Downloads 322
4321 RV-YOLOX: Object Detection on Inland Waterways Based on Optimized YOLOX Through Fusion of Vision and 3+1D Millimeter Wave Radar

Authors: Zixian Zhang, Shanliang Yao, Zile Huang, Zhaodong Wu, Xiaohui Zhu, Yong Yue, Jieming Ma

Abstract:

Unmanned Surface Vehicles (USVs) are valuable due to their ability to perform dangerous and time-consuming tasks on the water. Object detection tasks are significant in these applications. However, inherent challenges, such as the complex distribution of obstacles, reflections from shore structures, water surface fog, etc., hinder the object detection performance of USVs. To address these problems, this paper provides a fusion method for USVs to effectively detect objects in the inland surface environment, utilizing vision sensors and 3+1D millimeter-wave radar. MMW radar is complementary to vision sensors, providing robust environmental information. The radar 3D point cloud is transformed into a 2D radar pseudo-image to unify the radar and vision information formats by utilizing the point transformer. We propose a multi-source object detection network (RV-YOLOX) based on radar-vision fusion for the inland waterways environment. The performance is evaluated on our self-recorded waterways dataset. Compared with the YOLOX network, our fusion network significantly improves detection accuracy, especially for objects under poor lighting conditions.
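The point-cloud-to-pseudo-image step can be sketched as a simple bird's-eye-view projection; the grid size, extent and channel choices below are assumptions for illustration and do not reproduce the point transformer used in the paper.

```python
import numpy as np

def radar_to_pseudo_image(points, grid=(128, 128), extent=50.0):
    """Project a 3+1D radar point cloud (x, y, z, doppler, rcs) onto a 2D bird's-eye-view
    grid; channels hold occupancy, mean doppler and mean RCS per cell."""
    img = np.zeros((3, *grid), dtype=np.float32)
    counts = np.zeros(grid, dtype=np.float32)
    for x, y, z, doppler, rcs in points:
        u = int((x + extent) / (2 * extent) * (grid[0] - 1))
        v = int((y + extent) / (2 * extent) * (grid[1] - 1))
        if 0 <= u < grid[0] and 0 <= v < grid[1]:
            counts[u, v] += 1
            img[0, u, v] = 1.0            # occupancy
            img[1, u, v] += doppler       # accumulated doppler
            img[2, u, v] += rcs           # accumulated RCS
    nz = counts > 0
    img[1][nz] /= counts[nz]              # mean doppler per cell
    img[2][nz] /= counts[nz]              # mean RCS per cell
    return img

points = np.array([[3.2, 10.5, 0.4, -1.2, 14.0], [-8.0, 22.1, 0.1, 0.3, 9.5]])
pseudo = radar_to_pseudo_image(points)    # (3, 128, 128), ready to fuse with the RGB branch
```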

Keywords: inland waterways, YOLO, sensor fusion, self-attention

Procedia PDF Downloads 85
4320 A Real-Time Snore Detector Using Neural Networks and Selected Sound Features

Authors: Stelios A. Mitilineos, Nicolas-Alexander Tatlas, Georgia Korompili, Lampros Kokkalas, Stelios M. Potirakis

Abstract:

Obstructive Sleep Apnea Hypopnea Syndrome (OSAHS) is a widespread chronic disease that mostly remains undetected, mainly because it is diagnosed via polysomnography, which is a time- and resource-intensive procedure. Screening the disease’s symptoms at home could be used as an alternative approach in order to alert individuals that potentially suffer from OSAHS without compromising their everyday routine. Since snoring is usually linked to OSAHS, developing a snore detector is appealing as an enabling technology for screening OSAHS at home using ubiquitous equipment like commodity microphones (included in, e.g., smartphones). In this context, this study developed a snore detection tool and herein presents the approach and selection of specific sound features that discriminate snoring vs. environmental sounds, as well as the performance of the proposed tool. Furthermore, a Real-Time Snore Detector (RTSD) is built upon the snore detection tool and employed in whole-night sleep sound recordings, resulting in a large dataset of snoring sound excerpts that are made freely available to the public. The RTSD may be used either as a stand-alone tool that offers insight into an individual’s sleep quality or as an independent component of OSAHS screening applications in future developments.
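A toy sketch of the feature-plus-classifier idea (not the authors' selected features or network): mean MFCCs from short excerpts feed a small scikit-learn MLP, with synthetic signals standing in for real snore and environmental recordings.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def frame_features(signal, sr=16000):
    """Average MFCCs over a short excerpt -- one example of a selected sound feature."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

sr = 16000
rng = np.random.default_rng(0)
# placeholder excerpts: low-frequency modulated noise stands in for snores,
# plain noise stands in for environmental sound (real recordings would be used)
snores = [np.sin(2 * np.pi * 40 * np.arange(sr) / sr) * rng.normal(size=sr) for _ in range(20)]
other = [rng.normal(size=sr) for _ in range(20)]

X = np.array([frame_features(x.astype(np.float32), sr) for x in snores + other])
y = np.array([1] * 20 + [0] * 20)      # 1 = snore, 0 = environmental sound

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```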

Keywords: obstructive sleep apnea hypopnea syndrome, apnea screening, snoring detection, machine learning, neural networks

Procedia PDF Downloads 195
4319 Impact of Climate Change on Sea Level Rise along the Coastline of Mumbai City, India

Authors: Chakraborty Sudipta, A. R. Kambekar, Sarma Arnab

Abstract:

Sea-level rise is one of the most important impacts of anthropogenically induced climate change, resulting from global warming and the melting of ice at the Arctic and Antarctic. This paper reviews the investigations carried out by various researchers, both on the Indian coast and elsewhere, during the last decade. The paper aims to ascertain how consistently the different suggested methods can predict a near-accurate future sea level rise along the coast of Mumbai. Case studies on the East Coast, the Southern Tip, and the West and South-West coasts of India have been reviewed. The Coastal Vulnerability Index of several important international locations has been compared and found to match the Intergovernmental Panel on Climate Change forecasts. The application of Geographic Information System mapping and remote sensing technology has been observed, with both Multi Spectral Scanner and Thematic Mapper data from Landsat classified through the Iterative Self-Organizing Data Analysis Technique to arrive at high, moderate and low Coastal Vulnerability Index values for various important coastal cities. Instead of a purely data-driven, hindcast-based forecast of Significant Wave Height, inclusion of the additional impact of sea level rise has been suggested. The efficacy and limitations of numerical methods vis-à-vis Artificial Neural Networks have been assessed, and the importance of Root Mean Square error in the numerical results is mentioned. Among the computerized methods compared, forecast results obtained from MIKE 21 have been opined to be more reliable than those from the Delft 3D model.

Keywords: climate change, Coastal Vulnerability Index, global warming, sea level rise

Procedia PDF Downloads 124
4318 Secured Cancer Care and Cloud Services in Internet of Things /Wireless Sensor Network Based Medical Systems

Authors: Adeniyi Onasanya, Maher Elshakankiri

Abstract:

In recent years, the Internet of Things (IoT) has constituted a driving force of modern technological advancement, and it has become increasingly common as its impacts are seen in a variety of application domains, including healthcare. IoT is characterized by the interconnectivity of smart sensors, objects, devices, data, and applications. With the unprecedented use of IoT in industrial, commercial and domestic settings, it becomes very imperative to harness the benefits and functionalities associated with IoT technology in (re)assessing the provision and positioning of healthcare to ensure efficient and improved healthcare delivery. In this research, we focus on two important services in healthcare systems, namely cancer care services and business analytics/cloud services. These services incorporate the implementation of an IoT that provides a solution and framework for analyzing health data gathered from IoT through various sensor networks and other smart devices in order to improve healthcare delivery and to help health care providers in their decision-making process for enhanced and efficient cancer treatment. In addition, we discuss the wireless sensor network (WSN), WSN routing and data transmission in the healthcare environment. Finally, some operational challenges and security issues with IoT-based healthcare systems are discussed.

Keywords: IoT, smart health care system, business analytics, (wireless) sensor network, cancer care services, cloud services

Procedia PDF Downloads 165
4317 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is at the stage of its transition from clinician oriented to technology oriented. Many people around the world die of cancer because the diagnosis of disease was not done at an early stage. Nowadays, the computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper aims to carry out the comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out based on standard evaluation metrics Precision (P), Recall (R), F1-score and Accuracy. The experimental results based on the evaluation metrics show that ANN showed the highest-level accuracy (99.4%) when tested with breast cancer dataset. On the other hand, when these ML classifiers are tested with the cervical cancer dataset, Ensemble (Bagged Tree) technique gave better accuracy (93.1%) in comparison to other classifiers.
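For reference, a compact comparison in the same spirit can be written with scikit-learn, using its bundled breast cancer data as a stand-in for the datasets above; classifier hyperparameters are left at defaults rather than the authors' settings.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)          # stand-in for the breast cancer dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "Ensemble (Bagged Tree)": BaggingClassifier(random_state=0),
    "ANN": MLPClassifier(max_iter=2000, random_state=0),
}

for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, y_pred, average="binary")
    acc = accuracy_score(y_te, y_pred)
    print(f"{name:24s} P={p:.3f} R={r:.3f} F1={f1:.3f} Acc={acc:.3f}")
```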

Keywords: artificial neural networks, breast cancer, classifiers, cervical cancer, f-score, machine learning, precision, recall

Procedia PDF Downloads 267
4316 Towards Law Data Labelling Using Topic Modelling

Authors: Daniel Pinheiro Da Silva Junior, Aline Paes, Daniel De Oliveira, Christiano Lacerda Ghuerren, Marcio Duran

Abstract:

The Courts of Accounts are institutions responsible for overseeing and pointing out irregularities in Public Administration expenses. They have a high demand for processes to be analyzed, whose decisions must be grounded on severity laws. Despite the existing large amount of processes, there are several cases reporting similar subjects. Thus, previous decisions on already analyzed processes can be a precedent for current processes that refer to similar topics. Identifying similar topics is an open, yet essential, task for identifying similarities between several processes. Since the actual number of topics is considerably large, it is tedious and error-prone to identify topics using a purely manual approach. This paper presents a tool based on Machine Learning and Natural Language Processing to assist in building a labeled dataset. The tool relies on topic modelling with Latent Dirichlet Allocation to find the topics underlying a document, followed by the Jensen-Shannon distance metric to generate a probability of similarity between document pairs. Furthermore, in a case study with a corpus of decisions of the Rio de Janeiro State Court of Accounts, it was noted that data pre-processing plays an essential role in modeling relevant topics. Also, the combination of topic modeling and a distance metric calculated over the documents' representations among the generated topics has proved useful in helping to construct a labeled base of similar and non-similar document pairs.
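A minimal sketch of the LDA-plus-Jensen-Shannon pipeline (with placeholder documents rather than Court of Accounts decisions, and scikit-learn/SciPy in place of whatever toolkit the authors used) might look like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

docs = [
    "irregular public works contract without bidding process",
    "public works contract awarded without competitive bidding",
    "pension fund accounting report for retired personnel",
]  # placeholder texts standing in for Court of Accounts processes

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)             # per-document topic distributions

# Jensen-Shannon distance (base 2) between topic distributions -> similarity in [0, 1]
for i, j in [(0, 1), (0, 2)]:
    sim = 1.0 - jensenshannon(theta[i], theta[j], base=2)
    print(f"doc {i} vs doc {j}: similarity = {sim:.3f}")
```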

Keywords: courts of accounts, data labelling, document similarity, topic modeling

Procedia PDF Downloads 163
4315 Classification of Generative Adversarial Network Generated Multivariate Time Series Data Featuring Transformer-Based Deep Learning Architecture

Authors: Thrivikraman Aswathi, S. Advaith

Abstract:

As there can be cases where the use of real data is limited, such as when it is hard to get access to a large volume of real data, we need to turn to synthetic data generation. This produces high-quality synthetic data while maintaining the statistical properties of a specific dataset. In the present work, a generative adversarial network (GAN) is trained to produce multivariate time series (MTS) data, since MTS data are now being gathered more often in various real-world systems. Furthermore, the GAN-generated MTS data are fed into a transformer-based deep learning architecture that carries out the data categorization into predefined classes. Further, the model is evaluated across various distinct domains by generating corresponding MTS data.
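A minimal sketch of a transformer-based MTS classifier of the kind described (dimensions, depth and class count are assumptions; the GAN itself is omitted and random tensors stand in for generated series):

```python
import torch
import torch.nn as nn

class MTSTransformer(nn.Module):
    """Encode a multivariate time series with a Transformer encoder and classify it."""
    def __init__(self, n_features=6, d_model=64, n_classes=4):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)          # per-timestep embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        z = self.encoder(self.embed(x))
        return self.head(z.mean(dim=1))   # pool over time, then classify

# GAN-generated series would replace this random placeholder batch
x = torch.randn(8, 100, 6)                # 8 series, 100 steps, 6 variables
labels = torch.randint(0, 4, (8,))
logits = MTSTransformer()(x)
loss = nn.CrossEntropyLoss()(logits, labels)
```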

Keywords: GAN, transformer, classification, multivariate time series

Procedia PDF Downloads 115
4314 Using Dynamic Bayesian Networks to Characterize and Predict Job Placement

Authors: Xupin Zhang, Maria Caterina Bramati, Enrest Fokoue

Abstract:

Understanding the career placement of graduates from the university is crucial for both the quality of education and the ultimate satisfaction of students. In this research, we adapt the capabilities of dynamic Bayesian networks to characterize and predict students’ job placement using data from various universities. We also provide elements of the estimation of the indicator (score) of the strength of the network. The research focuses on overall findings as well as specific student groups, including international and STEM students, and their insight into the career path and what changes need to be made. The derived Bayesian network has the potential to be used as a tool for simulating the career path for students and ultimately helps universities in both academic advising and career counseling.

Keywords: dynamic bayesian networks, indicator estimation, job placement, social networks

Procedia PDF Downloads 360
4313 Task Based Functional Connectivity within Reward Network in Food Image Viewing Paradigm Using Functional MRI

Authors: Preetham Shankapal, Jill King, Kori Murray, Corby Martin, Paula Giselman, Jason Hicks, Owen Carmicheal

Abstract:

Activation of reward and satiety networks in the brain while processing palatable food cues, as well as functional connectivity during rest, has been studied using functional Magnetic Resonance Imaging of the brain in various obesity phenotypes. However, functional connectivity within the reward and satiety network during food cue processing is understudied. Fourteen obese individuals underwent two fMRI scans while viewing Macronutrient Picture System images. Each scan included two blocks of images of High Sugar/High Fat (HSHF), High Carbohydrate/High Fat (HCHF), Low Sugar/Low Fat (LSLF) and non-food items. Seed voxels within seven food-reward-relevant ROIs – the insula, putamen, and cingulate, precentral, parahippocampal, medial frontal and superior temporal gyri – were isolated based on a prior meta-analysis. Beta series correlation for task-related functional connectivity between these seed voxels and the rest of the brain was computed. Voxel-level differences in functional connectivity were calculated between the first and the second scan; between individuals who saw novel (N=7) vs. repeated (N=7) images in the second scan; and between the HCHF and HSHF blocks vs. the LSLF and non-food blocks. The computations and analysis showed that, during food image viewing, reward network ROIs showed significant functional connectivity with each other and with other regions responsible for attentional and motor control, including the inferior parietal lobe and precentral gyrus. These functional connectivity values were heightened among individuals who viewed novel HSHF images in the second scan. In the second scan session, functional connectivity was reduced within the reward network but increased within attention, memory and recognition regions, suggesting habituation to reward properties and increased recollection of previously viewed images. In conclusion, it can be inferred that functional connectivity within the reward network, and between the reward network and other brain regions, varies with important experimental conditions during food photography viewing, including habituation to the shown foods.
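Conceptually, beta series correlation reduces to correlating trial-wise beta estimates between a seed region and other regions; the NumPy sketch below uses random stand-in betas and adds the Fisher z-transform commonly applied before comparing conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 40                                       # e.g., one beta estimate per food image trial
seed_betas = rng.normal(size=n_trials)              # trial-wise betas for a seed ROI
target_betas = rng.normal(size=(n_trials, 5))       # betas for 5 other ROIs/voxels
target_betas[:, 0] += 0.8 * seed_betas              # make ROI 0 co-activate with the seed

# beta series correlation: Pearson r between trial-wise beta estimates
r = np.array([np.corrcoef(seed_betas, target_betas[:, k])[0, 1]
              for k in range(target_betas.shape[1])])

# Fisher z-transform so connectivity values can be compared across conditions/scans
z = np.arctanh(r)
print("correlations:", np.round(r, 2), " Fisher z:", np.round(z, 2))
```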

Keywords: fMRI, functional connectivity, task-based, beta series correlation

Procedia PDF Downloads 261
4312 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline

Authors: Kenan Morani, Esra Kaya Ayana

Abstract:

This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice removal techniques following the segmentation part. In this work, a batch normalization was added to the original UNet model to produce lighter and better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices and rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.
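As an illustration of adding batch normalization to a UNet building block (a generic sketch in PyTorch, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class DoubleConvBN(nn.Module):
    """UNet-style double convolution with batch normalization after each convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

ct_slice = torch.randn(1, 1, 256, 256)           # one grayscale CT slice
features = DoubleConvBN(1, 64)(ct_slice)         # feeds the next UNet level / skip connection
mask_logits = nn.Conv2d(64, 1, 1)(features)      # 1x1 conv -> lung segmentation logits
```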

Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation

Procedia PDF Downloads 120
4311 Optrix: Energy Aware Cross Layer Routing Using Convex Optimization in Wireless Sensor Networks

Authors: Ali Shareef, Aliha Shareef, Yifeng Zhu

Abstract:

Energy minimization is of great importance in wireless sensor networks for extending battery lifetime. One of the key activities of nodes in a WSN is communication and the routing of their data to a centralized base-station or sink. Routing using the shortest path to the sink is not the best solution since it will cause nodes along this path to fail prematurely. We propose a cross-layer energy-efficient routing protocol, Optrix, that utilizes a convex formulation to maximize the lifetime of the network as a whole. We further propose Optrix-BW, a novel convex formulation with a bandwidth constraint that allows the channel conditions to be accounted for in routing. By considering this key channel parameter we demonstrate that Optrix-BW is capable of congestion control. Optrix is implemented in TinyOS, and we demonstrate that a relatively large topology of 40 nodes can converge to within 91% of the optimal routing solution. We describe the pitfalls and issues related to utilizing a continuous-form technique such as convex optimization with discrete, packet-based communication systems as found in WSNs, and we propose a routing controller mechanism that allows for this transformation. We compared Optrix against the Collection Tree Protocol (CTP) and found that Optrix performs better than CTP in terms of convergence to an optimal routing solution, load balancing, and network lifetime maximization.
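A minimal sketch of the kind of convex lifetime-maximization formulation alluded to above, written with CVXPY on a toy four-node topology; the energy figures, rates and topology are assumptions, and Optrix's actual formulation and bandwidth constraint are not reproduced here.

```python
import numpy as np
import cvxpy as cp

# toy 4-node topology: node 0 is the sink, links point toward the sink
edges = [(1, 0), (2, 0), (2, 1), (3, 1), (3, 2)]
rate = {1: 1.0, 2: 1.0, 3: 1.0}           # packets generated per second by each sensor
energy = {1: 50.0, 2: 50.0, 3: 50.0}      # initial battery energy (J), assumed values
e_tx, e_rx = 0.6, 0.3                     # energy per packet transmitted / received (J)

f = cp.Variable(len(edges), nonneg=True)  # packets carried on each link over the lifetime
T = cp.Variable(nonneg=True)              # network lifetime (s)

constraints = []
for i in rate:
    out_f = sum(f[k] for k, (u, _) in enumerate(edges) if u == i)
    in_f = sum(f[k] for k, (_, v) in enumerate(edges) if v == i)
    constraints.append(out_f - in_f == rate[i] * T)               # flow conservation
    constraints.append(e_tx * out_f + e_rx * in_f <= energy[i])   # per-node energy budget

problem = cp.Problem(cp.Maximize(T), constraints)
problem.solve()
print("maximum lifetime (s):", round(float(T.value), 2))
print("link loads (packets):", np.round(f.value, 1))
```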

Keywords: wireless sensor network, energy efficient routing

Procedia PDF Downloads 382
4310 Network and Sentiment Analysis of U.S. Congressional Tweets

Authors: Chaitanya Kanakamedala, Hansa Pradhan, Carter Gilbert

Abstract:

Social media platforms, such as Twitter, are excellent datasets for understanding human interactions and sentiments. This report explores social dynamics among US Congressional members through a network analysis applied to a dataset of tweets spanning 2008 to 2017 from the ’US Congressional Tweets Dataset’. In this report, we perform a network analysis in which connections between users (edges) are established based on a similarity threshold: two users are connected if the tweets they post are similar. By utilizing the Natural Language Toolkit (NLTK) and NetworkX, we quantified tweet similarity and constructed a graph comprising various interconnected components. Each component represents a cluster of users with closely aligned content. We then perform sentiment analysis on each cluster to explore the prevalent emotions and opinions within these groups. Our findings reveal that, despite the initial expectation of distinct ideological divisions typically aligning with party lines, the analysis exposed a high degree of topical convergence across tweets from different political affiliations. The analysis performed in this report not only highlights the potential of social media as a tool for political communication but also suggests a complex layer of interaction that transcends traditional partisan boundaries, reflecting a complicated landscape of politics in the digital age.
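A schematic version of the pipeline (TF-IDF cosine similarity standing in for the study's similarity measure, NLTK's VADER for sentiment, and placeholder tweets and threshold) could look like the following:

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from nltk.sentiment import SentimentIntensityAnalyzer   # requires nltk.download("vader_lexicon")

tweets = [
    "We must invest in infrastructure and create jobs",
    "Infrastructure investment will create good jobs",
    "Healthcare costs are hurting American families",
    "Families are struggling with the cost of healthcare",
]  # placeholder tweets standing in for the congressional dataset

tfidf = TfidfVectorizer(stop_words="english").fit_transform(tweets)
sim = cosine_similarity(tfidf)

G = nx.Graph()
G.add_nodes_from(range(len(tweets)))
threshold = 0.3
for i in range(len(tweets)):
    for j in range(i + 1, len(tweets)):
        if sim[i, j] >= threshold:             # connect users whose tweets are similar
            G.add_edge(i, j, weight=sim[i, j])

sia = SentimentIntensityAnalyzer()
for k, component in enumerate(nx.connected_components(G)):
    scores = [sia.polarity_scores(tweets[i])["compound"] for i in component]
    print(f"cluster {k}: members={sorted(component)} mean sentiment={sum(scores)/len(scores):+.2f}")
```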

Keywords: natural language processing, sentiment analysis, centrality analysis, topic modeling

Procedia PDF Downloads 6
4309 Participatory Air Quality Monitoring in African Cities: Empowering Communities, Enhancing Accountability, and Ensuring Sustainable Environments

Authors: Wabinyai Fidel Raja, Gideon Lubisa

Abstract:

Air pollution is becoming a growing concern in Africa due to rapid industrialization and urbanization, leading to implications for public health and the environment. Establishing a comprehensive air quality monitoring network is crucial to combat this issue. However, conventional methods of monitoring are insufficient in African cities due to the high cost of setup and maintenance. To address this, low-cost sensors (LCS) can be deployed in various urban areas through the use of participatory air quality network siting (PAQNS). PAQNS involves stakeholders from the community, local government, and private sector working together to determine the most appropriate locations for air quality monitoring stations. This approach improves the accuracy and representativeness of air quality monitoring data, engages and empowers community members, and reflects the actual exposure of the population. Implementing PAQNS in African cities can build trust, promote accountability, and increase transparency in the air quality management process. However, challenges to implementing this approach must be addressed. Nonetheless, improving air quality is essential for protecting public health and promoting a sustainable environment. Implementing participatory and data-informed air quality monitoring can take a significant step toward achieving these important goals in African cities and beyond.

Keywords: low-cost sensors, participatory air quality network siting, air pollution, air quality management

Procedia PDF Downloads 78
4308 Exploration of an Environmentally Friendly Form of City Development Combined with a River: An Example of a Four-Dimensional Analysis Based on the Expansion of the City of Jinan across the Yellow River

Authors: Zhaocheng Shang

Abstract:

In order to study the topic of cities crossing rivers, a Four-Dimensional Analysis Method consisting of a timeline, X-axis, Y-axis, and Z-axis is proposed. Policies, plans, and their implications are summarized and researched along the timeline. The X-axis is the direction parallel to the river. The research area was chosen because of its important connection function. It is proposed that more of the surface water network should be built because of the ecological orientation of the research area, and the analysis of groundwater confirms that the proposal is feasible. After the blue water network is settled, the green landscape network surrounded by it can be planned. The direction transversal to the river (Y-axis) should run through the transportation axis so that the urban texture can stretch in an ecological way. Therefore, it is suggested that the work of the planning bureau and the river bureau should be coordinated. The Z-axis research concerns the section view of the river, especially the Yellow River’s special feature of being a perched river. Based on water control safety demands, river parks could be constructed on the embankment buffer zone, where many kinds of ornamental trees could be used to build the buffer zone. The city crossing the river is a typical case in which landscaping is used to build a symbiotic relationship between the urban landscape architecture and the environment. The local environment should be respected in the process of city expansion. The planning order of "Benefit - Flood Control Safety" should be replaced by "Flood Control Safety - Landscape Architecture - People - Benefit".

Keywords: blue-green landscape network, city crossing river, four-dimensional analysis method, planning order

Procedia PDF Downloads 149
4307 An Improved Cuckoo Search Algorithm for Voltage Stability Enhancement in Power Transmission Networks

Authors: Reza Sirjani, Nobosse Tafem Bolan

Abstract:

Many optimization techniques available in the literature have been developed in order to solve the problem of voltage stability enhancement in power systems. However, there are a number of drawbacks in the use of previous techniques aimed at determining the optimal location and size of reactive compensators in a network. In this paper, an Improved Cuckoo Search algorithm is applied as an appropriate optimization algorithm to determine the optimum location and size of a Static Var Compensator (SVC) in a transmission network. The main objectives are voltage stability improvement and total cost minimization. The results of the presented technique are then compared with other available optimization techniques.
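For reference, a bare-bones cuckoo search (Lévy flights plus abandonment of the worst nests) on a stand-in objective is sketched below; the paper's specific improvements and the real SVC placement objective are not reproduced.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, lo, hi, n_nests=20, pa=0.25, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fitness = np.array([objective(x) for x in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        for i in range(n_nests):                       # new solutions via Levy flights
            cand = np.clip(nests[i] + 0.01 * levy_step(dim, rng=rng) * (nests[i] - best), lo, hi)
            fc = objective(cand)
            if fc < fitness[i]:
                nests[i], fitness[i] = cand, fc
        worst = fitness.argsort()[-int(pa * n_nests):]  # abandon a fraction pa of the worst nests
        nests[worst] = rng.uniform(lo, hi, (len(worst), dim))
        fitness[worst] = [objective(x) for x in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

# placeholder objective standing in for "voltage deviation + SVC cost" at (bus, size)
obj = lambda x: (x[0] - 14.0) ** 2 + 0.05 * (x[1] - 75.0) ** 2
best, val = cuckoo_search(obj, np.array([1.0, 0.0]), np.array([30.0, 200.0]))
print("best SVC location/size:", np.round(best, 2), " objective:", round(val, 3))
```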

Keywords: cuckoo search algorithm, optimization, power system, var compensators, voltage stability

Procedia PDF Downloads 536
4306 Study of Structural Behavior and Proton Conductivity of Inorganic Gel Paste Electrolyte at Various Phosphorous to Silicon Ratio by Multiscale Modelling

Authors: P. Haldar, P. Ghosh, S. Ghoshdastidar, K. Kargupta

Abstract:

In polymer electrolyte membrane fuel cells (PEMFC), the membrane electrode assembly (MEA) consists of two platinum-coated carbon electrodes sandwiching one proton-conducting, phosphoric acid-doped polymeric membrane. Due to low mechanical stability, flooding, and fuel crossover, the application of phosphoric acid in the polymeric membrane is very critical. Phosphorus- and silica-based 3D inorganic gels have gained attention in the fields of supercapacitors, fuel cells and metal hydride batteries due to their thermally stable, highly proton-conductive behavior. Also, since a large amount of water and phosphoric acid can easily get trapped in the Si-O-Si network cavities, leaching out is prevented. In this study, we have performed molecular dynamics (MD) simulations and first-principles calculations to understand the structural, electronic, electrochemical and morphological behavior of this inorganic gel at various P to Si ratios. We have used dipole-dipole interactions, H-bonding, and van der Waals forces to study the main interactions between the molecules. A 'structure-property-performance' mapping is initiated to determine the optimum P to Si ratio for the best proton conductivity. We have performed the MD simulations at various temperatures to understand the temperature dependence of the proton conductivity. The observed results lead to a model which fits well with experimental data and other literature values. We have also studied the mechanism behind the proton conductivity, and finally we propose a structure for the gel paste with the optimum P to Si ratio.

Keywords: first principle calculation, molecular dynamics simulation, phosphorous and silica based 3D inorganic gel, polymer electrolyte membrane fuel cells, proton conductivity

Procedia PDF Downloads 112
4305 Reverse Logistics Network Optimization for E-Commerce

Authors: Albert W. K. Tan

Abstract:

This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research endeavor is to develop a comprehensive framework tailored to advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research include: (i) identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains; (ii) formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios; and (iii) proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operations modes: (1) Collection, (2) Sorting and testing, (3) Processing, and (4) Storage. Key factors to consider in reverse logistics network design: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies. In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. On the contrary, a decentralized system is more suitable when products are returned directly from consumers to retailers. In this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors have employed mixed LP to find the optimal solutions that minimize costs while meeting organizational objectives.
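As one possible reading of the mixed LP mentioned above (not the authors' model), a toy facility-opening and flow-assignment formulation in PuLP is sketched below; the centres, costs and capacities are invented placeholders.

```python
import pulp

# assumed data: two candidate return-processing centres, three retail collection points
centres = ["C1", "C2"]
collection = ["R1", "R2", "R3"]
returns = {"R1": 120, "R2": 80, "R3": 60}             # returned units per period
capacity = {"C1": 200, "C2": 150}                     # processing capacity per centre
fixed = {"C1": 5000, "C2": 3500}                      # cost of opening a centre
ship = {("R1", "C1"): 4, ("R1", "C2"): 6, ("R2", "C1"): 5,
        ("R2", "C2"): 3, ("R3", "C1"): 7, ("R3", "C2"): 2}   # unit transport cost

prob = pulp.LpProblem("reverse_logistics_network", pulp.LpMinimize)
open_c = pulp.LpVariable.dicts("open", centres, cat="Binary")
flow = pulp.LpVariable.dicts("flow", list(ship), lowBound=0)

# objective: fixed opening costs plus transport costs
prob += (pulp.lpSum(fixed[c] * open_c[c] for c in centres) +
         pulp.lpSum(ship[k] * flow[k] for k in ship))
for r in collection:
    prob += pulp.lpSum(flow[(r, c)] for c in centres) == returns[r]          # route all returns
for c in centres:
    prob += pulp.lpSum(flow[(r, c)] for r in collection) <= capacity[c] * open_c[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({c: int(open_c[c].value()) for c in centres},
      {k: flow[k].value() for k in ship if flow[k].value() > 0})
```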

Keywords: reverse logistics, supply chain management, optimization, e-commerce

Procedia PDF Downloads 24
4304 Fight against Money Laundering with Optical Character Recognition

Authors: Saikiran Subbagari, Avinash Malladhi

Abstract:

Anti-Money Laundering (AML) regulations are designed to prevent money laundering and terrorist financing activities worldwide. Financial institutions around the world are legally obligated to identify, assess and mitigate the risks associated with money laundering and report any suspicious transactions to the governing authorities. With increasing volumes of data to analyze, financial institutions seek to automate their AML processes. With the rise of financial crimes, optical character recognition (OCR), in combination with machine learning (ML) algorithms, serves as a crucial tool for automating AML processes by extracting data from documents and identifying suspicious transactions. In this paper, we examine the utilization of OCR for AML and delve into the various OCR techniques employed in AML processes. These techniques encompass template-based, feature-based and neural network-based approaches, natural language processing (NLP), hidden Markov models (HMMs), conditional random fields (CRFs), binarization, pattern matching and the stroke width transform (SWT). We evaluate each technique, discussing its strengths and constraints. We also emphasize how OCR can improve the accuracy of customer identity verification by comparing the extracted text with the Office of Foreign Assets Control (OFAC) watchlist, and we discuss how OCR helps to overcome language barriers in AML compliance. Finally, we address the implementation challenges that OCR-based AML systems may face and offer recommendations for financial institutions based on data from previous research studies, which illustrate the effectiveness of OCR-based AML.
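To make the OCR-plus-watchlist screening idea concrete, a small sketch using pytesseract and simple fuzzy matching is shown below; the file name and watchlist entries are hypothetical, and a production system would use OFAC's published list and more robust entity matching.

```python
import difflib
import pytesseract
from PIL import Image

# hypothetical scanned customer document; pytesseract also needs the Tesseract binary installed
text = pytesseract.image_to_string(Image.open("kyc_document_scan.png"))

# toy stand-in for an OFAC-style watchlist (the real list is published by OFAC)
watchlist = ["JOHN DOE TRADING LLC", "ACME SHELL HOLDINGS", "JANE R. SMITH"]

def screen(extracted_text, names, threshold=0.85):
    """Flag watchlist names that closely match any line of the OCR output."""
    hits = []
    for line in extracted_text.upper().splitlines():
        for name in names:
            score = difflib.SequenceMatcher(None, line.strip(), name).ratio()
            if score >= threshold:
                hits.append((name, line.strip(), round(score, 2)))
    return hits

for name, line, score in screen(text, watchlist):
    print(f"possible match: '{line}' ~ watchlist entry '{name}' (similarity {score})")
```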

Keywords: anti-money laundering, compliance, financial crimes, fraud detection, machine learning, optical character recognition

Procedia PDF Downloads 129
4303 Exploring De-Fi through 3 Case Studies: Transparency, Social Impact, and Regulation

Authors: Dhaksha Vivekanandan

Abstract:

DeFi is a system that avoids reliance on financial intermediaries through its peer-to-peer financial network. DeFi operates outside of government control; hence it is important for us to understand its impacts. This study employs a literature review to understand DeFi and its emergence, as well as its implications for transparency, social impact, and regulation. Further, three case studies are analysed within the context of these categories. DeFi’s provision of increased transparency poses environmental and storage costs and can lead to user privacy being endangered. DeFi allows for the provision of entrepreneurial incentives and protection against monetary censorship and capital control. Despite DeFi's transparency issues and volatility costs, it has huge potential to reduce poverty; however, regulation surrounding DeFi still requires further tightening by governments.

Keywords: DeFi, transparency, regulation, social impact

Procedia PDF Downloads 72
4302 Neuroplasticity in Language Acquisition in English as Foreign Language Classrooms

Authors: Sabitha Rahim

Abstract:

In the context of teaching vocabulary of English as Foreign Language (EFL), the confluence of memory and retention is one of the most significant factors in students' language acquisition. The progress of students engaged in foreign language acquisition is often stymied by vocabulary attrition, which leads to learners' lack of confidence and motivation. However, among other factors, little research has investigated the importance of neuroplasticity in Foreign Language acquisition and how underused neural pathways lead to the loss of plasticity, thereby affecting the learners’ vocabulary retention and motivation. This research explored the effect of enhancing vocabulary acquisition of EFL students in the Foundation Year at King Abdulaziz University through various methods and neuroplasticity exercises that reinforced their attention, motivation, and engagement. It analyzed the results to determine if stimulating the brain of EFL learners by various physical and mental activities led to the improvement in short and long term memory in vocabulary retention. The main data collection methods were student surveys, assessment records of teachers, student achievement test results, and students' follow-up interviews. A key implication of this research is for the institutions to consider having multiple varieties of student activities promoting brain plasticity within the classrooms as an effective tool for foreign language acquisition. Building awareness among the faculty and adapting the curriculum to include activities that promote brain plasticity ensures an enhanced learning environment and effective language acquisition in EFL classrooms.

Keywords: language acquisition, neural paths, neuroplasticity, vocabulary attrition

Procedia PDF Downloads 159