Search results for: FFN (Feed-forward network)
2643 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of these risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, possibly unattended environment, in addition to resource constraints in terms of processing, storage and power, places stringent limitations on such networks, such as lifetime (i.e., period of operation) and security. The importance of WSN applications, found in many military and civilian domains, has drawn the attention of many researchers to their security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers. Those solutions are software-based network Intrusion Detection Systems (IDSs). However, it has been observed that the developed IDSs are neither secure enough nor accurate enough to detect all malicious attack behaviours. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to undesirable results such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption that IDSs impose on WSNs. In other words, not all requirements are implemented and then traced; moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks of current IDSs are due to researchers and developers not following structured software development processes when developing IDSs. Consequently, this has resulted in inadequate requirement management and inadequate validation and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. It will also study a set of existing IDSs and illustrate the absence of Requirement Engineering and its effects. Conclusions are then drawn regarding applying requirement engineering to systems so that they deliver the required functionalities, within operational constraints, at an acceptable level of performance, accuracy and reliability.
Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
Procedia PDF Downloads 322
2642 Improving Pneumatic Artificial Muscle Performance Using Surrogate Model: Roles of Operating Pressure and Tube Diameter
Authors: Van-Thanh Ho, Jaiyoung Ryu
Abstract:
In soft robotics, the optimization of fluid dynamics through pneumatic methods plays a pivotal role in enhancing operational efficiency and reducing energy loss. This is particularly crucial when replacing conventional techniques such as cable-driven electromechanical systems. The pneumatic model employed in this study represents a sophisticated framework designed to efficiently channel pressure from a high-pressure reservoir to various muscle locations on the robot's body. This intricate network involves a branching system of tubes. The study introduces a comprehensive pneumatic model encompassing the components of a reservoir, tubes, and Pneumatically Actuated Muscles (PAM). The development of this model is rooted in the principles of shock tube theory. Notably, the study leverages experimental data to enhance the understanding of the interplay between the PAM structure and the surrounding fluid. This improved interactive approach involves the use of morphing motion, guided by a contraction function. The study's findings demonstrate a high degree of accuracy in predicting pressure distribution within the PAM. The model's predictive capabilities ensure that the error in comparison to experimental data remains below a threshold of 10%. Additionally, the research employs a machine learning model, specifically a surrogate model based on the Kriging method, to assess and quantify uncertainty factors related to the initial reservoir pressure and tube diameter. This comprehensive approach enhances our understanding of pneumatic soft robotics and its potential for improved operational efficiency.
Keywords: pneumatic artificial muscles, pressure drop, morphing motion, branched network, surrogate model
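A minimal sketch of how a Kriging-style surrogate over the two uncertain inputs named in the abstract (initial reservoir pressure and tube diameter) can be built. The design points, the response function, and the kernel settings below are illustrative assumptions, not the authors' model or data.

```python
# Kriging (Gaussian process) surrogate sketch; inputs and response are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
# Hypothetical design points: [reservoir pressure (kPa), tube diameter (mm)]
X = rng.uniform([200.0, 2.0], [600.0, 8.0], size=(40, 2))
# Stand-in response: peak PAM pressure from an (assumed) pneumatic solver
y = 0.8 * X[:, 0] * (1.0 - np.exp(-X[:, 1] / 4.0)) + rng.normal(0, 5, 40)

kernel = ConstantKernel(1.0) * RBF(length_scale=[100.0, 2.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predictive mean and standard deviation quantify the input-driven uncertainty
mean, std = gp.predict(np.array([[400.0, 5.0]]), return_std=True)
print(f"predicted peak pressure: {mean[0]:.1f} +/- {std[0]:.1f} kPa")
```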
Procedia PDF Downloads 98
2641 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer
Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom
Abstract:
Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches: Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN). We also consider five machine learning algorithms: Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN) and XGBoost (eXtreme Gradient Boosting), applied to the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify this performance. The main purpose of the study is to predict and diagnose breast cancer by applying the mentioned algorithms and to discover the most effective one with respect to confusion matrix, accuracy and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment based on the Python programming language.
Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN
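A minimal sketch of this comparison workflow on the Wisconsin Diagnostic dataset bundled with scikit-learn. The CNN and XGBoost models from the abstract are omitted here, and the hyperparameters are illustrative defaults, not the authors' settings.

```python
# Compare several classifiers on the Breast Cancer Wisconsin Diagnostic dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Naive Bayes": GaussianNB(),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=42)),
}

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```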
Procedia PDF Downloads 75
2640 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of the mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language. The dataset was randomly split into training (75%) and test sets and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a ranking of the most predictive variables. Thus, we built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient was found to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
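The study was carried out in R; the following is a Python sketch of the same feature-selection idea: rank features by Random Forest (Gini) importance and by recursive feature elimination with a linear SVM, then retrain on the top-ranked feature. The 37 x 15 data and labels below are synthetic placeholders, not the study's subjects.

```python
# Feature ranking by RF importance and by RFE-SVM, then a reduced-feature model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(37, 15))            # 37 subjects, 15 network mean signals
y = rng.integers(0, 2, size=37)          # 0 = control, 1 = early MS (placeholder labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)
rf_rank = np.argsort(rf.feature_importances_)[::-1]     # Gini-based ranking

# RFE needs a linear kernel (coefficients) rather than the RBF kernel
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
svm_rank = np.argsort(rfe.ranking_)                      # RFE-based ranking

best = rf_rank[0]                                        # e.g. sensori-motor network I
rf_small = RandomForestClassifier(random_state=1).fit(X_tr[:, [best]], y_tr)
print("top feature:", best, "test accuracy:", rf_small.score(X_te[:, [best]], y_te))
```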
Procedia PDF Downloads 240
2639 Condition Assessment and Diagnosis for Aging Drinking Water Pipeline According to Scientific and Reasonable Methods
Authors: Dohwan Kim, Dongchoon Ryou, Pyungjong Yoo
Abstract:
In public water facilities, drinking water distribution systems have played an important role alongside water purification systems. The water distribution network is one of the most expensive components of water supply infrastructure systems. To improve the drinking rate of tap water, advanced water treatment processes such as granular activated carbon and membrane filtration have been used by water service providers in Korea. However, public distrust of tap water persists. Therefore, accurate diagnosis and condition assessment of water pipelines are required to supply clean water. Internal corrosion of water pipes increases over time, and the cross-sectional area of a pipe is reduced by rust, deposits and tubercles. As a result, water supply capacity decreases, and an increase in hydraulic pump capacity is required to supply the same amount of water as in the initial condition; otherwise, areas of poor water supply occur due to the decrease in water pressure. To solve these problems, water managers and engineers should always check the current status of water pipes, such as leakage and pipe damage, and should be able to respond rapidly and make accurate estimates when problems occur. In Korea, replacement and rehabilitation of aging drinking water pipes are carried out simply on the basis of the number of years buried, so water distribution system management may not consider the entire pipeline network. The long-term design and upgrading of a water distribution network should address economic, social, environmental, health, hydraulic, and other technical issues. This is a multi-objective problem with a high level of complexity. In this study, the thickness of old water pipes, corrosion levels of the inner and outer pipe surfaces, basic data (i.e., pipe types, buried years, accident records, embedded environment, etc.), specific resistance of soil, ultimate tensile strength and elongation of metal pipes, sample characteristics, and chemical composition were investigated for aging drinking water pipes. The water pipe samples used in this study were cement mortar lining ductile cast iron pipe (CML-DCIP, diameter 100 mm) and epoxy lining steel pipe (diameters 65 and 50 mm). The buried years of the CML-DCIP and the epoxy lining steel pipe were 32 and 23 years, respectively. The embedded environment was a marine reclamation zone dating from the 1940s. The result of this study was that the CML-DCIP needed replacement, while the epoxy lining steel pipe was still serviceable.
Keywords: drinking water distribution system, water supply, replacement, rehabilitation, water pipe
Procedia PDF Downloads 258
2638 Design and Development of Fleet Management System for Multi-Agent Autonomous Surface Vessel
Authors: Zulkifli Zainal Abidin, Ahmad Shahril Mohd Ghani
Abstract:
Agent-based systems technology has been addressed as a new paradigm for conceptualizing, designing, and implementing software systems. Agents are sophisticated systems that act autonomously across open and distributed environments in solving problems. Nevertheless, it is impractical to rely on a single agent to do all the computing processes needed to solve complex problems. An increasing number of applications lately require multiple agents to work together. A multi-agent system (MAS) is a loosely coupled network of agents that interact to solve problems that are beyond the individual capacities or knowledge of each problem solver. However, a network of MAS still requires a main system to govern or oversee the operation of the agents in order to achieve a unified goal. We developed a fleet management system (FMS) in order to manage the fleet of agents, plan routes for the agents, perform real-time data processing and analysis, and issue sets of general and specific instructions to the agents. This FMS should be able to perform real-time data processing, communicate with the autonomous surface vehicle (ASV) agents and generate a bathymetric map according to the data received from each ASV unit. The first algorithm is developed to communicate with the ASVs via radio communication using standard National Marine Electronics Association (NMEA) protocol sentences. The second algorithm takes care of path planning, formation and pattern generation, and is tested using various sample data. Lastly, the bathymetry map generation algorithm makes use of the data collected by the agents to create a bathymetry map in real time. The outcome of this research is expected to be applicable to various other multi-agent systems.
Keywords: autonomous surface vehicle, fleet management system, multi agent system, bathymetry
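A minimal sketch of the kind of NMEA sentence handling the first algorithm performs: verify the checksum and pull position fields from a $GPGGA sentence. The sample sentence and field layout follow the standard NMEA 0183 format; the field names in the returned dictionary are our own labels.

```python
# Verify an NMEA 0183 checksum and extract position data from a $GPGGA sentence.
def nmea_checksum_ok(sentence: str) -> bool:
    body, _, checksum = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)                  # XOR of all bytes between '$' and '*'
    return f"{calc:02X}" == checksum.upper()

def parse_gga(sentence: str) -> dict:
    fields = sentence.split(",")
    return {
        "time_utc": fields[1],
        "latitude": fields[2] + " " + fields[3],
        "longitude": fields[4] + " " + fields[5],
        "num_satellites": int(fields[7]),
    }

sample = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
if nmea_checksum_ok(sample):
    print(parse_gga(sample))
```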
Procedia PDF Downloads 271
2637 Stochastic Multicast Routing Protocol for Flying Ad-Hoc Networks
Authors: Hyunsun Lee, Yi Zhu
Abstract:
A wireless ad-hoc network is a decentralized type of temporary machine-to-machine connection that is spontaneous or impromptu, so that it does not rely on any fixed infrastructure or centralized administration. Unmanned aerial vehicles (UAVs), also called drones, have recently become more accessible and widely utilized in military and civilian domains such as surveillance, search and detection missions, traffic monitoring, remote filming, and product delivery, to name a few. Communication between these UAVs is made possible through Flying Ad-hoc Networks (FANETs). However, due to the high mobility of UAVs, which may cause different types of transmission interference, it is vital to design robust routing protocols for FANETs. In this talk, a multicast routing method based on a modified stochastic branching process is proposed. The stochastic branching process is often used to describe the early stage of an infectious disease outbreak, and the reproductive number in the process is used to classify the outbreak as major or minor. The reproductive number, which regulates the local transmission rate, is adapted and modified for flying ad-hoc network communication. The performance of the proposed routing method is compared with other well-known methods, such as the flooding method and the gossip method, based on three measures: average reachability, average node usage and average branching factor. The proposed routing method achieves average reachability very close to that of the flooding method, average node usage close to that of the gossip method, and an outstanding average branching factor among the methods. It can be concluded that the proposed multicast routing scheme is more efficient than well-known routing schemes such as flooding and gossip while maintaining high performance.
Keywords: Flying Ad-hoc Networks, Multicast Routing, Stochastic Branching Process, Unmanned Aerial Vehicles
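A toy sketch of the branching-process idea behind such a multicast: each node that receives the message forwards it to a number of neighbours drawn from a Poisson distribution whose mean plays the role of the reproductive number R. The random topology, R value, and metrics below are illustrative only, not the proposed protocol.

```python
# Simulate message spread driven by a Poisson "offspring" count per node.
import random

def poisson(lam):
    # Knuth's algorithm; enough for a small illustrative simulation
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def branching_multicast(n_nodes=100, n_neighbors=6, reproductive_number=2.0, seed=7):
    random.seed(seed)
    neighbors = {v: random.sample([u for u in range(n_nodes) if u != v], n_neighbors)
                 for v in range(n_nodes)}
    reached, frontier, transmissions = {0}, [0], 0
    while frontier:
        nxt = []
        for v in frontier:
            # R < 1 tends to die out quickly, R > 1 tends to reach most nodes
            k = min(len(neighbors[v]), poisson(reproductive_number))
            for u in random.sample(neighbors[v], k):
                transmissions += 1
                if u not in reached:
                    reached.add(u)
                    nxt.append(u)
        frontier = nxt
    return len(reached) / n_nodes, transmissions

reachability, node_usage = branching_multicast()
print(f"reachability={reachability:.2f}, transmissions={node_usage}")
```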
Procedia PDF Downloads 123
2636 Unlocking the Future of Grocery Shopping: Graph Neural Network-Based Cold Start Item Recommendations with Reverse Next Item Period Recommendation (RNPR)
Authors: Tesfaye Fenta Boka, Niu Zhendong
Abstract:
Recommender systems play a crucial role in connecting individuals with the items they require, as is particularly evident in the rapid growth of online grocery shopping platforms. These systems predominantly rely on user-centered recommendations, where items are suggested based on individual preferences, garnering considerable attention and adoption. However, our focus lies on the item-centered recommendation task within the grocery shopping context. In the reverse next item period recommendation (RNPR) task, we are presented with a specific item and challenged to identify potential users who are likely to consume it in the upcoming period. Despite the ever-expanding inventory of products on online grocery platforms, the cold start item problem persists, posing a substantial hurdle in delivering personalized and accurate recommendations for new or niche grocery items. To address this challenge, we propose a Graph Neural Network (GNN)-based approach. By capitalizing on the inherent relationships among grocery items and leveraging users' historical interactions, our model aims to provide reliable and context-aware recommendations for cold-start items. This integration of GNN technology holds the promise of enhancing recommendation accuracy and catering to users' individual preferences. This research contributes to the advancement of personalized recommendations in the online grocery shopping domain. By harnessing the potential of GNNs and exploring item-centered recommendation strategies, we aim to improve the overall shopping experience and satisfaction of users on these platforms.
Keywords: recommender systems, cold start item recommendations, online grocery shopping platforms, graph neural networks
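A toy sketch of the RNPR idea: given a cold item, score users by propagating over item attributes and past user-item interactions, i.e. one round of neighbourhood aggregation in the spirit of a GNN layer. The graph, attributes, and similarity choice are made up for illustration; a real model would learn embeddings (e.g. with a GNN framework), not use fixed cosine weights.

```python
# Score users for a cold item via attribute similarity to known items.
import numpy as np

# user-item interaction matrix (4 users x 3 known items)
interactions = np.array([[1, 0, 1],
                         [0, 1, 1],
                         [1, 1, 0],
                         [0, 0, 1]], dtype=float)
# item attribute vectors (3 known items + 1 cold item, 4 attributes each)
item_attrs = np.array([[1, 0, 1, 0],
                       [0, 1, 0, 1],
                       [1, 1, 0, 0],
                       [1, 0, 1, 1]], dtype=float)   # last row = the cold item

known, cold = item_attrs[:3], item_attrs[3]
# attribute-based cosine similarity between the cold item and the known items
sim = known @ cold / (np.linalg.norm(known, axis=1) * np.linalg.norm(cold) + 1e-9)
# aggregate: users who consumed items similar to the cold item score higher
user_scores = interactions @ sim
print("users ranked for the cold item:", np.argsort(user_scores)[::-1])
```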
Procedia PDF Downloads 90
2635 Lightweight and Seamless Distributed Scheme for the Smart Home
Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro
Abstract:
Security of the smart home in terms of behavior activity pattern recognition is a totally dissimilar and unique issue compared to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other by detecting the daily behavior activities of individuals to execute common tasks. Once a device (e.g., surveillance camera, smart phone, light detection sensor, etc.) is compromised, an adversary can then get access to a specific device and can disrupt daily behavior activity by altering the data and commands. In this scenario, a group of common instruction processes may become involved and generate deadlock. Therefore, an effective and suitable security solution is required for smart home architecture. This paper proposes a seamless distributed scheme that fortifies low-computation wireless devices for secure communication. The proposed scheme is based on a lightweight key-session process that upholds a cryptographic link for the trajectory by recognizing individuals' behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Experimental analysis reveals that the proposed scheme strengthens the devices against device seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of LCS, and prevents network deadlock. Additionally, the results of a comparison with other schemes indicate that the proposed scheme is efficient in terms of computation and communication.
Keywords: authentication, key-session, security, wireless sensors
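A generic sketch (not the authors' exact protocol) of how a lightweight key-session step can work: both ends derive a per-session key from a shared authentication token plus a fresh nonce, then authenticate readings with an HMAC tag. The token, nonce exchange, and payload format are assumptions for illustration.

```python
# HMAC-based session-key derivation and message authentication between LCS and HCS.
import hmac, hashlib, os

def derive_session_key(auth_token: bytes, nonce: bytes) -> bytes:
    # Per-session key bound to the token and nonce (HMAC-SHA256 as a simple KDF)
    return hmac.new(auth_token, b"session" + nonce, hashlib.sha256).digest()

def protect(session_key: bytes, payload: bytes) -> bytes:
    tag = hmac.new(session_key, payload, hashlib.sha256).digest()
    return payload + tag

def verify(session_key: bytes, message: bytes):
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(session_key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

token = b"device-provisioned-authentication-token"
nonce = os.urandom(16)                       # exchanged at session setup
key_lcs = derive_session_key(token, nonce)   # low-capacity sensor side
key_hcs = derive_session_key(token, nonce)   # high-capacity sensor side

msg = protect(key_lcs, b"motion:living_room:1")
print("accepted payload:", verify(key_hcs, msg))
```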
Procedia PDF Downloads 318
2634 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe
Authors: Vipul M. Patel, Hemantkumar B. Mehta
Abstract:
Technological innovations in the electronics world demand novel, compact, simply designed, low-cost and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, input heat, working fluid and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade forward back propagation, feed forward back propagation, feed forward distributed time delay, layer recurrent and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature as a function of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with MARD in the range of ±1.81%.
Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant
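A small sketch of the two pieces the abstract relies on: a generalized regression network (a kernel-weighted average of training targets governed by a spread constant) and the MARD metric used to judge prediction accuracy. The filling-ratio, heat-input, and thermal-resistance values below are synthetic, not the experimental data cited in the paper.

```python
# GRNN-style prediction plus the MARD accuracy metric, on synthetic CLPHP data.
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread=4.8):
    # Gaussian kernel weights; the prediction is the weighted mean of training targets
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * spread ** 2))
    return (w @ y_train) / w.sum(axis=1)

def mard(y_true, y_pred):
    # Mean Absolute Relative Deviation, in percent
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

rng = np.random.default_rng(3)
X = rng.uniform([20.0, 10.0], [80.0, 100.0], size=(30, 2))  # [filling ratio %, heat input W]
y = 2.0 + 50.0 / X[:, 1] + 0.01 * np.abs(X[:, 0] - 55.0)    # stand-in thermal resistance (K/W)

y_hat = grnn_predict(X, y, X, spread=4.8)
print(f"MARD on the synthetic set: {mard(y, y_hat):.2f}%")
```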
Procedia PDF Downloads 292
2633 Soil Salinity Mapping using Electromagnetic Induction Measurements
Authors: Fethi Bouksila, Nessrine Zemni, Fairouz Slama, Magnus Persson, Ronny Berndasson, Akissa Bahri
Abstract:
The electromagnetic sensor EM38 was used to predict and map soil salinity (ECe) in an arid oasis. Despite the high spatial variation of soil moisture and the shallow water table, significant ECe-EM relationships were developed. The low efficiency of the drainage network is the main factor in soil salinization.
Keywords: soil salinity map, electromagnetic induction, EM38, oasis, shallow water table
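A minimal sketch of the usual EM38 calibration step behind such ECe-EM relationships: fit a site-specific regression between the apparent conductivity read by the sensor and lab-measured ECe, then use it to predict salinity at unsampled points. All readings below are invented for illustration.

```python
# Calibrate a simple linear ECe-EM relationship and apply it to new readings.
import numpy as np

eca = np.array([55.0, 80.0, 120.0, 150.0, 210.0, 260.0])   # EM38 readings (mS/m)
ece = np.array([2.1, 3.0, 4.6, 5.8, 8.2, 10.1])            # lab-measured ECe (dS/m)

slope, intercept = np.polyfit(eca, ece, 1)                  # ECe = a*ECa + b
r = np.corrcoef(eca, ece)[0, 1]
print(f"ECe = {slope:.3f}*ECa + {intercept:.2f}  (r = {r:.3f})")

# Predict ECe at unsampled points from their EM38 readings
new_eca = np.array([100.0, 180.0])
print("predicted ECe:", slope * new_eca + intercept)
```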
Procedia PDF Downloads 187
2632 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis
Authors: Syed Asif Hassan, Syed Atif Hassan
Abstract:
Multidrug-resistant Tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis that do not respond to either isoniazid or rifampicin, which are the most important anti-TB drugs. The increase in the occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of MTB, leading to survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed to generate a computational model that can predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset involves compounds screened against MTB, categorized as active and inactive based on the PubChem activity score. PowerMV, a molecular descriptor generation and visualization tool, will be used to generate the 2D molecular descriptors for the active and inactive compounds present in the dataset. The 2D molecular descriptors generated from PowerMV will be used as features. We feed these features into three different classifiers, namely random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model based on the accuracy of predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction
Procedia PDF Downloads 391
2631 Port Miami in the Caribbean and Mesoamerica: Data, Spatial Networks and Trends
Authors: Richard Grant, Landolf Rhode-Barbarigos, Shouraseni Sen Roy, Lucas Brittan, Change Li, Aiden Rowe
Abstract:
Ports are critical for the US economy, connecting farmers, manufacturers, retailers, consumers and an array of transport and storage operators. Port facilities vary widely in terms of their productivity, footprint, specializations, and governance. In this context, Port Miami is considered one of the busiest ports, providing both cargo and cruise services and connecting the wider Caribbean and Mesoamerican region to global networks. It is regarded as the “Cruise Capital of the World and Global Gateway of the Americas” and the “leading container port in Florida.” Furthermore, it has been ranked as one of the top container ports in the world and the second most efficient port in North America. In this regard, Port Miami has made significant investments of about US$1 billion in strategic and capital infrastructure, including increasing the channel depth and other onshore infrastructural enhancements. Therefore, this study involves a detailed analysis of Port Miami’s network, using multiple years of publicly available data on marine vessel traffic, cargo, and connectivity and performance indices from 2015-2021. Through the analysis of cargo and cruise vessels to and from Port Miami and its relative performance at the global scale from 2015 to 2021, this study examines the port’s long-term resilience and future growth potential. The main results of the analyses indicate that the top category for both inbound and outbound cargo is manufactured products and textiles. In addition, there is a large amount of fresh fruit, vegetables, and produce among inbound cargo and of processed food among outbound cargo. Furthermore, the top ten port connections for Port Miami are all located in the Caribbean region, the Gulf of Mexico, and the Southeast USA. About half of the inbound cargo comes from Savannah, Saint Thomas, and Puerto Plata, while outbound cargo goes to Puerto Corte, Freeport, and Kingston. Additionally, for cruise vessels, a significantly large number of vessels originate from Nassau, followed by Freeport. The number of passenger vessels pre-COVID was almost 1,000 per year, which dropped substantially in 2020 and 2021 to around 300 vessels. Finally, the resilience and competitiveness of Port Miami were also assessed in terms of its network connectivity by examining inbound and outbound maritime vessel traffic. It is noteworthy that the most frequent port connections for Port Miami were Freeport and Savannah, followed by Kingston, Nassau, and New Orleans. However, several of these ports (Puerto Corte, Veracruz, Puerto Plata, and Santo Thomas) have low resilience and are highly vulnerable, which needs to be taken into consideration for the long-term resilience of Port Miami in the future.
Keywords: port, Miami, network, cargo, cruise
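A small sketch of the connectivity side of such an analysis: build a weighted, directed graph of vessel movements and rank Port Miami's partner ports by traffic. The edge list below is a made-up subset for illustration, not the study's vessel data.

```python
# Rank Port Miami's inbound/outbound partners in a weighted directed port graph.
import networkx as nx

edges = [  # (origin, destination, number of voyages) - illustrative values
    ("Savannah", "Miami", 120), ("Saint Thomas", "Miami", 80),
    ("Puerto Plata", "Miami", 75), ("Miami", "Puerto Corte", 90),
    ("Miami", "Freeport", 85), ("Miami", "Kingston", 70),
    ("Nassau", "Miami", 300), ("Freeport", "Miami", 60),
]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)

inbound = sorted(((u, d["weight"]) for u, _, d in G.in_edges("Miami", data=True)),
                 key=lambda t: t[1], reverse=True)
outbound = sorted(((v, d["weight"]) for _, v, d in G.out_edges("Miami", data=True)),
                  key=lambda t: t[1], reverse=True)
print("top inbound partners:", inbound)
print("top outbound partners:", outbound)
print("weighted degree of Miami:", G.degree("Miami", weight="weight"))
```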
Procedia PDF Downloads 79
2630 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management
Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li
Abstract:
Demand-Side Management (DSM) has the potential to reduce electricity costs and the carbon emissions associated with the electricity used in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, is a system of computer-aided functionalities acting as an energy audit for residential DSM. Implementing fault detection and classification for the domestic appliances monitored, controlled, and optimized is one of the most important steps toward preventive maintenance, such as residential air conditioning and heating preventive maintenance, in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack for residential DSM is developed. In the EMS, Arduino MEGA Ethernet communication-based smart sockets, which incorporate a Real Time Clock chip to keep track of the current time as timestamps via the Network Time Protocol, are designed and implemented to take readings of load phenomena reflected in the sensed voltage and current signals. Also, a Network-Attached Storage unit providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods is configured as the data store for the parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, MySQL relational database management system, and the PHP programming language) serves as the data science analytics engine for a dynamic web application/RESTful web service of the residential DSM with Artificial Intelligence (AI)/Computational Intelligence. An abstract computing machine, the Java Virtual Machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is well suited and configured for AI in this study. Having the ability to send real-time push notifications to IoT clients, the desktop computer implements Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and provide a mobile notification service for residential/industrial DSM. In this study, in order to realize edge intelligence, in which edge devices avoid network latency and the dependence on Internet connectivity for the Internet of Services while supporting secure access to data stores and providing immediate analytical and real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets into embedded AI Arduino ones (called embedded AIduino). To realize edge analytics with the proposed embedded AIduino, an Arduino Ethernet shield (WizNet W5100) with a micro SD card connector is used. The SD library is included for reading parsed data from and writing parsed data to an SD card, and an Artificial Neural Network library for the Arduino MEGA, ArduinoANN, is imported and used for the locally embedded AI implementation. The embedded AIduino developed in this study can be extended to further applications in manufacturing industry energy management and sustainable energy management, where, as an example, rotating machinery diagnostics works to identify energy loss from gross misalignment and unbalance of rotating machines in power plants.
Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification
Procedia PDF Downloads 251
2629 Design and Development of an 'Optimisation Controller' and a SCADA Based Monitoring System for Renewable Energy Management in Telecom Towers
Authors: M. Sundaram, H. R. Sanath Kumar, A. Ramprakash
Abstract:
Energy saving is a key sustainability focus area for the Indian telecom industry today. This is especially true in rural India, where energy consumption contributes 70% of the total network operating cost. In urban areas, the energy cost of network operation ranges between 15-30%. This expenditure on energy, a result of the lack of grid power availability, highlights a potential barrier to telecom industry growth. As a result, telecom tower companies switch to diesel generators, making them the second largest consumer of diesel in India, consuming over 2.5 billion litres per annum. The growing cost of energy due to increasing diesel prices, and concerns over rising greenhouse emissions, have caused these companies to look at other renewable energy options. The TRAI (Telecom Regulatory Authority of India) has issued a number of guidelines to implement Renewable Energy Technologies (RETs) in telecom towers as part of its ‘Implementation of Green Technologies in Telecom Sector’ initiative. Our proposal suggests the implementation of a Programmable Logic Controller (PLC) based ‘optimisation controller’ that can not only efficiently utilize the energy from RETs but also help conserve the power used in the telecom towers. When multiple RETs are available to supply energy, this controller will pick the optimum amount of energy from each RET based on the availability and feasibility at that point in time, reducing the dependence on diesel generators. For effective maintenance of the towers, we are planning to implement a SCADA-based monitoring system along with the ‘optimisation controller’.
Keywords: operation costs, consumption of fuel and carbon footprint, implementation of a programmable logic controller (PLC) based ‘optimisation controller’, efficient SCADA based monitoring system
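A simplified sketch of the dispatch logic such an ‘optimisation controller’ could apply each control cycle: serve the tower load from the renewable sources available at that moment and fall back to the diesel generator only for the remainder. The source names, values, and merit-order rule are illustrative assumptions; the actual controller logic runs on a PLC.

```python
# Greedy per-cycle energy dispatch among available renewable sources.
def dispatch(load_kw, available_kw):
    """available_kw: dict of renewable source -> power available right now (kW)."""
    plan, remaining = {}, load_kw
    # Simple merit order: the most available renewable sources first
    for source in sorted(available_kw, key=available_kw.get, reverse=True):
        take = min(available_kw[source], remaining)
        if take > 0:
            plan[source] = take
            remaining -= take
    if remaining > 1e-9:
        plan["diesel_generator"] = remaining   # last resort
    return plan

# Example control cycle for a tower drawing 3.2 kW
print(dispatch(3.2, {"solar_pv": 1.8, "wind": 0.9, "battery_bank": 1.0}))
```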
Procedia PDF Downloads 419
2628 Roadmap to a Bottom-Up Approach Creating Meaningful Contributions to Surgery in Low-Income Settings
Authors: Eva Degraeuwe, Margo Vandenheede, Nicholas Rennie, Jolien Braem, Miryam Serry, Frederik Berrevoet, Piet Pattyn, Wouter Willaert, InciSioN Belgium Consortium
Abstract:
Background: Worldwide, five billion people lack access to safe and affordable surgical care. An additional 1.27 million surgeons, anesthesiologists, and obstetricians (SAO) are needed by 2030 to meet the target of 20 per 100,000 population and to reach the goal of the Lancet Commission on Global Surgery. A well-informed future generation, exposed early on to the current challenges in global surgery (GS), is necessary to ensure a sustainable future. Methods: InciSioN, the International Student Surgical Network, is a non-profit organization by and for students, residents, and fellows in over 80 countries. InciSioN Belgium, one of the prominent national working groups, has made vast progress and collaborated with other networks to fill the educational gap, stimulate advocacy efforts and increase interactions with the international network. This report describes a roadmap to achieve sustainable development and education within GS, with the example of InciSioN Belgium. Results: Since the establishment of the organization’s branch in 2019, it has hosted an educational workshop for first-year residents in surgery, engaging over 2500 participants, and established a recurring directing board of 15 members. In the year 2020-2021, InciSioN Ghent organized three workshops combining educational and interactive sessions for future prime advocates and surgical candidates. InciSioN Belgium has set up a strong formal coalition with the Belgian Medical Students’ Association (BeMSA), with its own standing committee, reaching over 3000+ medical students annually. In 2021-2022, InciSioN Belgium broadened to a multidisciplinary approach, including dentistry and nursing students and graduates in workshops and research projects, leading to a member and exposure increase of 450%. This roadmap sets strategic goals and mechanisms for the GS community to achieve nationwide sustained improvements in GS research and education focused on future SAOs, in order to achieve the GS sustainable development goals. In the coming year, expansion is directed toward formal integration of GS into the medical curriculum and increased international advocacy, whilst inspiring SAOs to engage in GS in Belgium. Conclusion: The development and implementation of durable change for GS are necessary. The student organization InciSioN Belgium is growing and hopes to close the colossal gap in GS and inspire the growth of other branches while sharing the know-how of a student organization.
Keywords: advocacy, education, global surgery, InciSioN, student network
Procedia PDF Downloads 174
2627 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that in the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not take with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language having to deal with syntax, semantics, sociolinguistics, and text classification. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society comes hand in hand with not only its language but also how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: gendered grammar, misogynistic language, natural language processing, neural networks
Procedia PDF Downloads 120
2626 Introduction of Mass Rapid Transit System and Its Impact on Para-Transit
Authors: Khalil Ahmad Kakar
Abstract:
In developing countries, the increase in automobiles and low-capacity public transport (para-transit), which creates congestion, pollution, noise, and traffic accidents, is a critical problem. These issues are under analysis by assessors seeking to break down the puzzle and propose sustainable urban public transport systems. Kabul City is one of those urban areas whose inhabitants suffer from the lack of an adequate and user-friendly public transport system. The city is highly populous and overcrowded, with a population of around 4.5 million. Para-transit is the dominant public transit system, with a very poor level of service and low-capacity vehicles (6-20 passengers). Therefore, after detailed investigation, this study suggests a bus rapid transit (BRT) system for Kabul City, aimed at mitigating the role of informal transport and decreasing congestion. The research covers three parts. In the first part, aggregate (four-step) travel demand modelling is applied to determine the number of para-transit users and to assess the BRT network based on the corridors with higher passenger demand for the public transport mode. In the second part, a stated preference (SP) survey and a binary logit model are used to determine the utility of the existing para-transit mode and the planned BRT system. Finally, the impact of the predicted BRT system on para-transit is evaluated. The extracted outcome, based on high travel demand, suggests a 10 km network for the proposed BRT system, which originates from the tenth district and ends at Kabul International Airport. In addition, the result of the disaggregate travel mode-choice model, based on the SP data and the logit model, indicates that the predicted mass rapid transit system has higher utility, with a significant impact on reducing para-transit.
Keywords: BRT, para-transit, travel demand modelling, Kabul City, logit model
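A minimal sketch of the binary logit step used with the SP data: given linear utility functions for the existing para-transit mode and the planned BRT, the probability of choosing BRT follows the logistic form. The utility specification and the coefficient values below are hypothetical placeholders, not the estimated model.

```python
# Binary logit mode choice between para-transit and BRT under assumed utilities.
import math

def utility(cost_af, in_vehicle_time_min, comfort_dummy, asc=0.0,
            b_cost=-0.004, b_time=-0.06, b_comfort=0.9):
    # Linear-in-parameters utility with assumed coefficients
    return asc + b_cost * cost_af + b_time * in_vehicle_time_min + b_comfort * comfort_dummy

def prob_brt(v_brt, v_para):
    # Binary logit: P(BRT) = 1 / (1 + exp(-(V_BRT - V_para)))
    return 1.0 / (1.0 + math.exp(-(v_brt - v_para)))

v_para = utility(cost_af=30, in_vehicle_time_min=55, comfort_dummy=0)
v_brt = utility(cost_af=40, in_vehicle_time_min=35, comfort_dummy=1, asc=0.2)
print(f"P(choose BRT) = {prob_brt(v_brt, v_para):.2f}")
```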
Procedia PDF Downloads 183
2625 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm
Authors: Frodouard Minani
Abstract:
Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, military, disaster-hit areas and so on. Wireless Sensor Networks consist of a Base Station (BS) and a number of wireless sensors in order to monitor temperature, pressure, and motion in different environmental conditions. The key parameter that plays a major role in designing a protocol for Wireless Sensor Networks is energy efficiency, since energy is the scarcest resource of sensor nodes and determines their lifetime. Maximizing sensor node lifetime is an important issue in the design of applications and protocols for Wireless Sensor Networks. Clustering of sensor nodes is an effective topology control approach that helps to achieve the goal of this research. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol that is used to lower the energy consumption and also to improve the lifetime of Wireless Sensor Networks. Minimizing energy dissipation and maximizing network lifetime are important matters in the design of applications and protocols for wireless sensor networks. The proposed system maximizes the lifetime of the Wireless Sensor Network by choosing the farthest cluster head (CH) instead of the closest CH and by forming clusters with consideration of parameter metrics such as node density, residual energy and distance between clusters (inter-cluster distance). In this paper, comparisons between the proposed protocol and comparative protocols in different scenarios have been carried out, and the simulation results show that the proposed protocol performs well over the comparative protocols in various scenarios.
Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks
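A compact sketch of a cluster-head selection round along the lines described above: score each node from its residual energy, local density, and a distance term, pick the highest-scoring nodes as heads, and let the other nodes join a head. The field layout, weights, and the nearest-head assignment are illustrative assumptions, not the paper's exact rules (the paper explores choosing farther CHs rather than simply the closest).

```python
# Score-based cluster-head selection for a randomly deployed sensor field.
import random, math

random.seed(0)
nodes = [{"id": i,
          "x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "energy": random.uniform(0.2, 1.0)} for i in range(50)]
base_station = (50.0, 175.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def density(node, radius=20.0):
    # Number of neighbours within the sensing radius
    return sum(dist((node["x"], node["y"]), (n["x"], n["y"])) < radius
               for n in nodes if n is not node)

def score(node):
    d_bs = dist((node["x"], node["y"]), base_station)
    # Favour high residual energy and high local density; the distance term
    # keeps selection from being purely proximity-driven (assumed weights).
    return 0.5 * node["energy"] + 0.3 * density(node) / len(nodes) + 0.2 * d_bs / 200.0

heads = sorted(nodes, key=score, reverse=True)[:5]          # 5 clusters this round
for n in nodes:                                             # join a head (nearest, for simplicity)
    n["cluster"] = min(heads, key=lambda h: dist((n["x"], n["y"]), (h["x"], h["y"])))["id"]
print("cluster heads:", [h["id"] for h in heads])
```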
Procedia PDF Downloads 144
2624 In silico Analysis towards Identification of Host-Microbe Interactions for Inflammatory Bowel Disease Linked to Reactive Arthritis
Authors: Anukriti Verma, Bhawna Rathi, Shivani Sharda
Abstract:
Reactive Arthritis (ReA) is a disorder that causes inflammation in joints due to certain infections at distant sites in the body. ReA begins with stiffness, pain, and inflammation, especially in the ankles, knees, and hips. It gradually causes several complications, such as conjunctivitis in the eyes, skin lesions on the hands, feet and nails, and ulcers in the mouth. Nowadays, the diagnosis of ReA is based upon a differential diagnosis pattern. The parameters for differentiating ReA from other similar disorders include physical examination, the history of the patient and a high index of suspicion. There are no standard lab tests or markers available for ReA; hence, early diagnosis of ReA becomes difficult and the chronicity of the disease increases with time. Enteric disorders such as Inflammatory Bowel Disease (IBD), i.e., inflammation of the gastrointestinal tract, namely Crohn's Disease (CD) and Ulcerative Colitis (UC), are reported to be linked with ReA. Several microorganisms, such as Campylobacter, Salmonella, Shigella and Yersinia, are found to cause IBD leading to ReA. The aim of our study was to perform an in silico analysis in order to find interactions between these microorganisms and the human host causing IBD leading to ReA. A systems biology approach for metabolic network reconstruction and simulation was used to find the essential genes of the reported microorganisms. An interactomics study was used to find interactions between the pathogen genes and the human host. Genes such as nhaA (pathogen), dpyD (human), nagK (human) and kynU (human) were obtained and analysed further using functional, pathway and network analysis. These genes can be used as putative drug targets and biomarkers in the future for early diagnosis, prevention, and treatment of IBD leading to ReA.
Keywords: drug targets, inflammatory bowel disease, reactive arthritis, systems biology
Procedia PDF Downloads 275
2623 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment for different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits from employing other machine learning techniques as the statistical method in ground motion prediction, such as Artificial Neural Networks, Random Forest, and Support Vector Machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and, in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural network, random forest, and support vector machine are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate estimates in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intense numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
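A sketch of the ground-motion-model step: train a Random Forest to predict ln(PGA) from magnitude, distance, and site condition in place of a fixed linear-regression functional form. The records below are synthetic, generated from a toy attenuation relation purely to illustrate the workflow, not real strong-motion data.

```python
# Random Forest ground-motion model on synthetic magnitude/distance/Vs30 records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
mag = rng.uniform(3.0, 7.5, n)                  # moment magnitude
rjb = rng.uniform(1.0, 200.0, n)                # source-to-site distance (km)
vs30 = rng.uniform(180.0, 760.0, n)             # site condition proxy (m/s)
# toy attenuation: magnitude scaling + geometric spreading + site term + noise
ln_pga = 1.1 * mag - 1.6 * np.log(rjb + 10.0) - 0.4 * np.log(vs30 / 760.0) \
         + rng.normal(0.0, 0.5, n)

X = np.column_stack([mag, rjb, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out records:", round(model.score(X_te, y_te), 3))
print("ln(PGA) for M6.5 at 20 km, Vs30=400:", model.predict([[6.5, 20.0, 400.0]])[0])
```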
Procedia PDF Downloads 106
2622 Necessary Condition to Utilize Adaptive Control in Wind Turbine Systems to Improve Power System Stability
Authors: Javad Taherahmadi, Mohammad Jafarian, Mohammad Naser Asefi
Abstract:
The global capacity of wind power has increased dramatically in recent years. Therefore, improving the technology of wind turbines to take advantage of this enormous potential in the power grid could be an interesting subject for scientists. The doubly-fed induction generator (DFIG) wind turbine is a popular system due to its many advantages, such as improved power quality, high energy efficiency and controllability, etc. With the increase in wind power penetration in the network, and with regard to the flexible control of wind turbines, the use of wind turbine systems to improve the dynamic stability of power systems has been of significant importance to researchers. Subsynchronous oscillations are one of the important issues in the stability of power systems. Damping subsynchronous oscillations by using wind turbines has been studied in various research efforts, mainly by adding an auxiliary control loop to the control structure of the wind turbine. In most of the studies, this control loop is composed of linear blocks. In this paper, simple adaptive control is used for this purpose. In order to use an adaptive controller, the convergence of the controller should be verified. Since adaptive control parameters tend toward optimum values in order to obtain optimum control performance, using this controller will help wind turbines make a positive contribution to damping network subsynchronous oscillations at different wind speeds and system operating points. In this paper, the application of simple adaptive control in DFIG wind turbine systems to improve the dynamic stability of power systems is studied and the essential condition for using this controller is considered. It is also shown that this controller has an insignificant effect on the dynamic stability of the wind turbine itself.
Keywords: almost strictly positive real (ASPR), doubly-fed induction generator (DFIG), simple adaptive control (SAC), subsynchronous oscillations, wind turbine
Procedia PDF Downloads 376
2621 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDMs) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions for each condition state with age cannot be directly derived by using a Markov model for a given bridge element group, which, however, is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings Algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions accurately at the network level but also to capture the model uncertainties within a given confidence interval.
Keywords: bridge deterioration modelling, Modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models
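A compact sketch of the Bayesian piece: use Metropolis-Hastings to sample the shape and scale parameters of a Weibull-type curve describing the percentage of elements still in the best condition state at a given age. The "observed" proportions are synthetic, and the flat positive prior and Gaussian likelihood are assumptions for illustration, not the paper's exact formulation.

```python
# Metropolis-Hastings sampling of Weibull-type curve parameters from condition data.
import numpy as np

rng = np.random.default_rng(0)
ages = np.array([2, 5, 8, 11, 14, 15], dtype=float)
obs = np.array([0.97, 0.90, 0.78, 0.62, 0.45, 0.40])      # fraction still in state 1

def curve(t, shape, scale):
    return np.exp(-((t / scale) ** shape))                 # Weibull reliability form

def log_post(shape, scale, sigma=0.05):
    if shape <= 0 or scale <= 0:
        return -np.inf                                     # flat prior on positive values
    resid = obs - curve(ages, shape, scale)
    return -0.5 * np.sum((resid / sigma) ** 2)             # Gaussian likelihood (assumed)

samples, theta = [], np.array([1.5, 20.0])                 # initial [shape, scale]
lp = log_post(*theta)
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.05, 0.5])            # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:               # MH acceptance step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])                            # drop burn-in
print("posterior mean shape, scale:", post.mean(axis=0))
print("95% interval for scale:", np.percentile(post[:, 1], [2.5, 97.5]))
```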
Procedia PDF Downloads 727
2620 Towards a Framework for Embedded Weight Comparison Algorithm with Business Intelligence in the Plantation Domain
Authors: M. Pushparani, A. Sagaya
Abstract:
Embedded systems have emerged as important elements in various domains, with extensive applications in the automotive, commercial, consumer, healthcare and transportation markets, as there is an emphasis on intelligent devices. On the other hand, Business Intelligence (BI) has also been extensively used in a range of applications, especially in the agriculture domain, which is the area of this research. The aim of this research is to create a framework for an Embedded Weight Comparison Algorithm with Business Intelligence (EWCA-BI). The weight comparison algorithm will be embedded within the plantation management system and the weighbridge system. This algorithm will be used to estimate the weight at the site, which will be compared with the actual weight at the plantation. The algorithm will be used to build the necessary alerts when there is a discrepancy in the weight, thus enabling better decision making. In the current practice, data are collected from various locations in various forms. It is a challenge to consolidate the data to obtain timely and accurate information for effective decision making. Adding to this, unstable network connections lead to difficulty in getting timely and accurate information. To overcome these challenges, the weight comparison algorithm is embedded on a portable device that also assists in data capture and synchronizes data across locations, overcoming the network shortcomings at collection points. The EWCA-BI will provide real-time information at any given point in time, thus enabling low-latency BI reports that provide crucial information for efficient operational decision making. This research has high potential in bringing embedded systems into the agriculture industry. EWCA-BI will provide BI reports with accurate information and uncompromised data using an embedded system, and will provide alerts, thereby enabling effective operational management decision-making at the site.
Keywords: embedded business intelligence, weight comparison algorithm, oil palm plantation, embedded systems
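A minimal sketch of the weight-comparison step: reconcile the weight estimated at the collection site with the actual weighbridge weight and raise an alert when the discrepancy exceeds a tolerance. The field names and the 5% threshold are illustrative assumptions, not the EWCA-BI specification.

```python
# Compare estimated and weighbridge weights and flag discrepancies for BI alerts.
def compare_weights(ticket_id, estimated_kg, weighbridge_kg, tolerance=0.05):
    diff = weighbridge_kg - estimated_kg
    rel = abs(diff) / weighbridge_kg if weighbridge_kg else float("inf")
    status = "OK" if rel <= tolerance else "ALERT"
    return {"ticket": ticket_id, "estimated_kg": estimated_kg,
            "weighbridge_kg": weighbridge_kg, "difference_kg": round(diff, 1),
            "relative_diff": round(rel, 4), "status": status}

# Example: two loads captured offline at the site, synced later for BI reporting
records = [compare_weights("T-1001", 18250, 18410),
           compare_weights("T-1002", 20500, 18900)]
for r in records:
    print(r)
```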
Procedia PDF Downloads 285
2619 An Integrated HCV Testing Model as a Method to Improve Identification and Linkage to Care in a Network of Community Health Centers in Philadelphia, PA
Authors: Catelyn Coyle, Helena Kwakwa
Abstract:
Objective: As novel and better tolerated therapies become available, effective HCV testing and care models become increasingly necessary, not only to identify individuals with active infection but also to link them to HCV providers for medical evaluation and treatment. Our aim is to describe an effective HCV testing and linkage-to-care model piloted in a network of five community health centers located in Philadelphia, PA. Methods: In October 2012, the National Nursing Centers Consortium piloted a routine opt-out HCV testing model in a network of community health centers, one of which treats HCV, HIV, and co-infected patients. Key aspects of the model were medical assistant-initiated testing, the use of laboratory-based reflex test technology, and electronic medical record modifications to prompt, track, report and facilitate payment of test costs. Universal testing of all adult patients was implemented at health centers serving patients at high risk for HCV. The other sites integrated high-risk-based testing, where patients meeting one or more of the CDC testing recommendation risk factors or with a history of homelessness were eligible for HCV testing. Mid-course adjustments included the integration of dual HIV testing, the development of a linkage-to-care coordinator position to facilitate the transition of HIV- and/or HCV-positive patients from primary to specialist care, and the transition to universal HCV testing across all testing sites. Results: From October 2012 to June 2015, the health centers performed 7,730 HCV tests and identified 886 (11.5%) patients with a positive HCV-antibody test. Of those with positive HCV-antibody tests, 838 (94.6%) had an HCV-RNA confirmatory test and 590 (70.4%) progressed to current HCV infection (overall prevalence = 7.6%); 524 (88.8%) received their RNA-positive test result; 429 (72.7%) were referred to an HCV care specialist and 271 (45.9%) were seen by the HCV care specialist. The best linkage-to-care results were seen at the test-and-treat site, where, of the 333 patients with current HCV infection, 175 (52.6%) were seen by an HCV care specialist. Of the patients with active HCV infection, 349 (59.2%) were unaware of their HCV-positive status at the time of diagnosis. Since the integration of dual HCV/HIV testing in September 2013, 9,506 HIV tests were performed, 85 (0.9%) patients had positive HIV tests, 81 (95.3%) received their confirmed HIV test result and 77 (90.6%) were linked to HIV care. Dual HCV/HIV testing increased the number of HCV tests performed by 362 between the 9 months preceding dual testing and the first 9 months after dual testing integration, representing a 23.7% increase. Conclusion: Our HCV testing model shows that integrated routine testing and linkage to care is feasible and improved detection and linkage to care in a primary care setting. We found that the prevalence of current HCV infection was higher than that seen locally in Philadelphia and nationwide. Intensive linkage services can increase the number of patients who successfully navigate the HCV treatment cascade. The linkage-to-care coordinator is an important position, acting as a trusted intermediary for patients being linked to care.
Keywords: HCV, routine testing, linkage to care, community health centers
Procedia PDF Downloads 3572618 Survey of Communication Technologies for IoT Deployments in Developing Regions
Authors: Namugenyi Ephrance Eunice, Julianne Sansa Otim, Marco Zennaro, Stephen D. Wolthusen
Abstract:
The Internet of Things (IoT) is a network of connected data-processing devices, mechanical and digital machines, objects, animals, or people that can send data across a network without requiring human-to-human or human-to-computer interaction. Each component has sensors that detect specific phenomena, together with processing software and other technologies that link to and communicate with other systems and/or devices over the Internet or other communication networks and exchange data with them. IoT is increasingly being used in fields other than consumer electronics, such as public safety, emergency response, industrial automation, autonomous vehicles, the Internet of Medical Things (IoMT), and general environmental monitoring. Consumer-based IoT applications, like smart home gadgets and wearables, are also becoming more prevalent. This paper presents the main IoT deployment areas for environmental monitoring in developing regions and the backhaul options suitable for them. A detailed review of each paper selected for the study is included in Section III of this document. The study includes an overview of existing IoT deployments and the underlying communication architectures, protocols, and technologies that support them. This overview shows that Low-Power Wide-Area Networks (LPWANs), as summarized in Table 1, are very well suited to environmental-monitoring architectures designed for remote locations. LoRa technology, particularly the LoRaWAN protocol, has an advantage over other technologies due to its low power consumption, adaptability, and suitable communication range. The prevailing challenges of the different architectures are discussed and summarized in Table 3 of Section IV, where the main problem is the obstruction of communication paths by buildings, trees, hills, etc.Keywords: communication technologies, environmental monitoring, Internet of Things, IoT deployment challenges
Procedia PDF Downloads 852617 Next-Generation Laser-Based Transponder and 3D Switch for Free Space Optics in Nanosatellite
Authors: Nadir Atayev, Mehman Hasanov
Abstract:
Future spacecraft will require a structural change in the way data is transmitted due to the increase in the volume of data required for space communication. Current radio-frequency communication systems already face a bottleneck in the volume of data sent to the ground segment because of their technological and regulatory characteristics. To overcome these issues, free-space optics communication plays an important role in the integrated terrestrial-space network thanks to its advantages: significantly improved data rates compared to traditional RF technology, low cost, improved security, and support for inter-satellite free-space links; it uses a laser beam as the optical signal carrier to establish satellite-to-ground and ground-to-satellite links. This approach requires high-speed and energy-efficient systems as a base platform for sending high-volume video and audio data. Nanosatellites, and their CubeSat branch in particular, offer considerable technical functionality compared with large satellites and cover an important part of the space sector: their Low-Earth-Orbit application area combines low-cost design with the functionality needed to build networks using different communication topologies. Along this research theme, the output parameters of the FSO optical communication transceiver subsystem on existing CubeSat platforms have been studied, and, with the aim of improving those parameters, a 3D optical switch and a laser-beam-controlled optical transponder have been investigated together with the 2U CubeSat structural subsystems, their application in a Low-Earth-Orbit satellite network topology, and their functional performance and structural parameters.Keywords: cubesat, free space optics, nano satellite, optical laser communication
Procedia PDF Downloads 892616 Multi-scale Spatial and Unified Temporal Feature-fusion Network for Multivariate Time Series Anomaly Detection
Authors: Hang Yang, Jichao Li, Kewei Yang, Tianyang Lei
Abstract:
Multivariate time series anomaly detection is a significant research topic in the field of data mining, encompassing a wide range of applications across various industrial sectors such as traffic roads, financial logistics, and corporate production. The inherent spatial dependencies and temporal characteristics present in multivariate time series introduce challenges to the anomaly detection task. Previous studies have typically been based on the assumption that all variables belong to the same spatial hierarchy, neglecting multi-level spatial relationships. To address this challenge, this paper proposes a multi-scale spatial and unified temporal feature fusion network, denoted as MSUT-Net, for multivariate time series anomaly detection. The proposed model employs a multi-level modeling approach, incorporating both temporal and spatial modules. The spatial module is designed to capture the spatial characteristics of multivariate time series data, utilizing an adaptive graph structure learning model to identify the multi-level spatial relationships between data variables and their attributes. The temporal module consists of a unified temporal processing module, which is tasked with capturing the temporal features of multivariate time series and is capable of simultaneously identifying temporal dependencies among different variables. Extensive testing on multiple publicly available datasets confirms that MSUT-Net achieves superior performance on the majority of datasets. Our method can model system data with multi-level spatial relationships and accurately detect anomalies from a spatial-temporal perspective, providing a novel view of anomaly detection analysis.Keywords: data mining, industrial system, multivariate time series, anomaly detection
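For illustration, the sketch below shows one common way such a spatial-temporal detector can be assembled: a learned node-embedding adjacency for the adaptive graph structure, a single message-passing step for the spatial module, a shared GRU as a stand-in for the unified temporal module, and reconstruction error as the anomaly score. These layer choices are assumptions for exposition, not the actual architecture of MSUT-Net.

```python
# Illustrative spatial-temporal anomaly detector in the spirit of the abstract.
# The concrete layers (embedding-based adjacency, one graph message-passing step,
# a shared GRU, reconstruction-error scoring) are assumptions, not MSUT-Net itself.
import torch
import torch.nn as nn


class SpatialTemporalAD(nn.Module):
    def __init__(self, n_vars: int, emb_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(n_vars, emb_dim))  # adaptive graph learning
        self.spatial_proj = nn.Linear(1, hidden)
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)    # shared temporal module
        self.decoder = nn.Linear(hidden, 1)

    def forward(self, x):                                             # x: (batch, time, n_vars)
        adj = torch.softmax(self.node_emb @ self.node_emb.T, dim=-1)  # learned adjacency
        h = self.spatial_proj(x.unsqueeze(-1))                        # (B, T, N, H)
        h = torch.einsum("ij,btjh->btih", adj, h)                     # graph message passing
        b, t, n, d = h.shape
        h, _ = self.temporal(h.permute(0, 2, 1, 3).reshape(b * n, t, d))
        recon = self.decoder(h).reshape(b, n, t).permute(0, 2, 1)     # reconstructed series
        return recon

    def anomaly_score(self, x):
        return (self.forward(x) - x).abs().mean(dim=-1)  # per-timestep reconstruction error


if __name__ == "__main__":
    model = SpatialTemporalAD(n_vars=8)
    series = torch.randn(4, 50, 8)            # toy multivariate time series
    print(model.anomaly_score(series).shape)  # torch.Size([4, 50])
```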
Procedia PDF Downloads 152615 Some Results on Cluster Synchronization
Authors: Shahed Vahedi, Mohd Salmi Md Noorani
Abstract:
This paper investigates cluster synchronization phenomena between community networks. We focus on the situation where a variety of dynamics occur in the clusters. In particular, we show that different synchronization states simultaneously occur between the networks. The controller is designed having an adaptive control gain, and theoretical results are derived via Lyapunov stability. Simulations on well-known dynamical systems are provided to elucidate our results.Keywords: cluster synchronization, adaptive control, community network, simulation
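As an illustrative sketch only (the abstract does not reproduce the paper's control and update laws), a commonly used adaptive cluster-synchronization controller takes the form below, where $x_i$ is the state of node $i$, $s_{\mu_i}$ is the target trajectory of the cluster containing node $i$, and the gain $d_i$ adapts with the synchronization error:

```latex
% Generic adaptive cluster-synchronization controller (illustrative; not the paper's exact law)
\begin{aligned}
e_i(t) &= x_i(t) - s_{\mu_i}(t), \\
u_i(t) &= -\,d_i(t)\, e_i(t), \\
\dot{d}_i(t) &= k_i\, e_i(t)^{\mathsf{T}} e_i(t), \qquad k_i > 0 .
\end{aligned}
```

Stability of such schemes is typically argued with a Lyapunov function of the form $V = \tfrac{1}{2}\sum_i e_i^{\mathsf{T}} e_i + \sum_i \tfrac{(d_i - d^{*})^{2}}{2k_i}$ for a sufficiently large constant $d^{*}$, which is the general line of reasoning the abstract refers to.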
Procedia PDF Downloads 4752614 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis
Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan
Abstract:
Tourism is a booming industry with huge future potential for global wealth and employment. Vast amounts of data are generated on social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of Big Data technology into the tourism industry will allow companies to determine where their customers have been and what they like. This information can then be used by businesses, such as those managing visitor centers or hotels, while tourists can get a clear idea of places before visiting. On the technical side, natural language is processed by analysing the sentiment features of tourists' online reviews, and we supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. For experimental validation, we constructed a web review database using a crawler and web scraping techniques to evaluate the effectiveness of our methodology. The sentences were first classified with the VADER and RoBERTa models to obtain the polarity of the reviews. In this paper, we study feature-extraction methods such as Count Vectorization and TF-IDF Vectorization and implement a Convolutional Neural Network (CNN) classifier for sentiment analysis, deciding whether a tourist's attitude towards a destination is positive, negative, or simply neutral based on the review text posted online. The results demonstrate that, after pre-processing and cleaning the dataset, the CNN classifier achieved an accuracy of 96.12% for positive and negative sentiment analysis.Keywords: count vectorization, convolutional neural network, crawler, data technology, long short-term memory, web scraping, sentiment analysis
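As a minimal illustration of the TF-IDF feature-extraction step described above, the sketch below builds a small scikit-learn pipeline; a logistic-regression classifier stands in for the paper's CNN, and the tiny corpus and labels are invented for the example.

```python
# Minimal sketch of TF-IDF feature extraction for review polarity classification.
# A logistic-regression classifier stands in for the paper's CNN; the corpus
# and labels below are illustrative, not the authors' dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The hotel was clean and the staff were wonderful",
    "Terrible experience, the room was dirty and noisy",
    "Loved the view from the visitor center, highly recommend",
    "The food was bland and the service was slow",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF turns each review into a weighted bag-of-words vector
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["The staff were friendly but the room was noisy"]))
```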
Procedia PDF Downloads 88