Search results for: multi-entity bayesian networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3054

624 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms

Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann

Abstract:

Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders. The connection is critical to the safety of the whole system. Therefore, fault detection systems are investigated that could detect early warning signs of a possible failure in the connection elements. Previously, a model-based method using the Extended Kalman Filter was developed to detect the reduction of rope stiffness. This method detected several types of faults reliably, but some types were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise: when the wave height is low, a long time is needed to detect a fault, and the accuracy is not always satisfactory. It is therefore necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Inspired by this work on the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN (Domain-Adversarial Neural Network) is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and helps users understand the relationship between the output and the contributions of each acceleration sensor.
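The abstract does not state which explainable-AI method estimates the per-sensor contributions. As a minimal illustration (a common occlusion baseline, not necessarily the authors' technique), one can zero out one sensor channel at a time and measure how much the model output changes; the model and weights below are invented for the sketch:

```python
def occlusion_importance(model, x, baseline=0.0):
    """Estimate each sensor's contribution by replacing its channel with a
    baseline value and measuring how much the model output changes."""
    ref = model(x)
    scores = []
    for i in range(len(x)):
        x_occ = list(x)
        x_occ[i] = baseline            # "switch off" sensor i
        scores.append(abs(ref - model(x_occ)))
    return scores

# Toy surrogate model: degradation estimate as a weighted sum of per-sensor
# RMS accelerations (the weights are illustrative, not from the paper).
WEIGHTS = [0.5, 0.1, 0.4]
model = lambda x: sum(w * v for w, v in zip(WEIGHTS, x))

scores = occlusion_importance(model, [1.0, 1.0, 1.0])   # ≈ [0.5, 0.1, 0.4]
```

For the linear toy model the occlusion score of each sensor recovers its weight, so the first sensor is (correctly) ranked as most influential.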

Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI

Procedia PDF Downloads 179
623 Sharing Tacit Knowledge: The Essence of Knowledge Management

Authors: Ayesha Khatun

Abstract:

In the 21st century, where markets are unstable, technologies rapidly proliferate, competitors multiply, products and services become obsolete almost overnight, and customers demand low-cost, high-value products, leveraging and harnessing knowledge is not just a potential source of competitive advantage but a necessity in technology-based and information-intensive industries. Knowledge management focuses on leveraging the available knowledge and sharing it among the individuals in the organization so that employees can make the best use of it towards achieving the organizational goals. Knowledge is not a discrete object. It is embedded in people and so difficult to transfer outside the immediate context that it becomes a major competitive advantage. However, internal transfer of knowledge among employees is essential to maximize the use of the knowledge that is available in the organization in an unstructured manner. But as knowledge is the source of competitive advantage for the organization, it is also the source of competitive advantage for individuals. People think that knowledge is power and that sharing it may lead to losing their competitive position. Moreover, the very nature of tacit knowledge poses many difficulties in sharing it. Yet sharing tacit knowledge is the vital part of the knowledge management process, because it is tacit knowledge that is inimitable. Knowledge management has been made synonymous with the use of software and technology, leading to the management of explicit knowledge only and ignoring personal interaction and the forming of informal networks, which are considered the most successful means of sharing tacit knowledge. Factors responsible for effective sharing of tacit knowledge are grouped into individual, organizational, and technological factors, and different factors under each category have been identified.
Creating a positive organizational culture, encouraging personal interaction, and practicing a reward system are some of the strategies that can help to overcome many of the barriers to effective sharing of tacit knowledge. The methodology applied here is entirely based on secondary sources; an extensive review of the relevant literature has been undertaken for the purpose.

Keywords: knowledge, tacit knowledge, knowledge management, sustainable competitive advantage, organization, knowledge sharing

Procedia PDF Downloads 398
622 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises

Authors: Jiří F. Urbánek, David Král

Abstract:

Small and Middle Enterprises (SME) have a specific mission, characteristics, and behavior in global business competitive environments. They must respect policy, rules, requirements, and standards in all their inherent and outer processes of supply-customer chains and networks. The aim and purpose of this paper are to introduce computational assistance that enables the use of the prevailing MS Office environment (SmartArt, etc.) for mathematical models, using the DYVELOP (Dynamic Vector Logistics of Processes) method. For the SME's global environment, it provides the capability to achieve its commitment regarding the effectiveness of the quality management system in meeting customer requirements, the continual improvement of the overall performance and efficiency of the organization's and SME's processes, and its societal security via continual planning improvement. The maps of the DYVELOP model, the Blazons, can express mathematically and graphically the relationships among entities, actors, and processes, including the discovery and modeling of cycling cases and their phases. The Blazons are best comprehended in a live PowerPoint presentation of this paper's mission, an added-value analysis. The crisis management of SMEs is obliged to use cycles to cope successfully with crisis situations. Repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator and controlling actor of SME continuity and of advanced possibilities for its sustainable development.

Keywords: blazons, computational assistance, DYVELOP method, small and middle enterprises

Procedia PDF Downloads 340
621 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the readings are conflicting. In this study, we present a framework that exploits the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency can be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
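The combination step can be sketched concretely. Over the two-hypothesis frame {pre-change, post-change}, Dempster's rule fuses two sensors' mass functions (with `theta` the ignorance mass on the whole frame), and the pignistic transform then spreads the ignorance evenly over the singletons; the mass values below are invented for illustration:

```python
def dempster_combine(m1, m2, frame=("pre", "post")):
    """Combine two mass functions over {pre, post, theta} with Dempster's
    rule; the conflicting mass is discarded and the rest renormalized."""
    a, b = frame
    conflict = m1[a] * m2[b] + m1[b] * m2[a]
    norm = 1.0 - conflict
    return {
        a: (m1[a] * m2[a] + m1[a] * m2["theta"] + m1["theta"] * m2[a]) / norm,
        b: (m1[b] * m2[b] + m1[b] * m2["theta"] + m1["theta"] * m2[b]) / norm,
        "theta": (m1["theta"] * m2["theta"]) / norm,
    }

def pignistic(m, frame=("pre", "post")):
    """Pignistic transform: spread the ignorance mass over the singletons."""
    return {h: m[h] + m["theta"] / len(frame) for h in frame}

m1 = {"pre": 0.6, "post": 0.3, "theta": 0.1}   # evidence from sensor 1
m2 = {"pre": 0.5, "post": 0.4, "theta": 0.1}   # evidence from sensor 2
fused = dempster_combine(m1, m2)
bet = pignistic(fused)
ratio = bet["post"] / bet["pre"]   # the quantity a CUSUM test would track
```

With both sensors leaning towards "pre-change", the fused pignistic probability of "pre" exceeds that of "post", so the CUSUM statistic driven by `ratio` would not accumulate.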

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data

Procedia PDF Downloads 334
620 ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based on Li-Ion Battery and Solar Energy Supply

Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan

Abstract:

Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications, as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may make a sensor network useless before its purpose is complete. At present, there are two strategies for long node and network lifetime. The first strategy is to save as much energy as possible: energy consumption is minimized by switching the node from active mode to sleep mode and by a routing protocol with ultra-low energy consumption. The second strategy is to evaluate the energy consumption of sensor applications as accurately as possible, since an erroneous energy model may render a ZigBee sensor network useless before batteries are changed. In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses a CC2530 for controlling the sensor, carrying out the routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2 W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution to extend node lifetime is implemented, and a long-term sensor network test is used to exhibit the functionality of the solar-powered system.
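The first strategy, duty cycling between active and sleep modes, reduces to a simple energy budget. A sketch of that budget, with illustrative CC2530-class current figures (assumed, not taken from the paper: ~30 mA active with the radio on, ~1 µA asleep, 100 ms awake every 10 s), against the paper's 1200 mAh battery:

```python
def average_current_ma(i_active_ma, i_sleep_ma, t_active_s, period_s):
    """Average current of a node that wakes for t_active_s every period_s."""
    t_sleep = period_s - t_active_s
    return (i_active_ma * t_active_s + i_sleep_ma * t_sleep) / period_s

def lifetime_hours(capacity_mah, avg_current_ma):
    """Ideal battery lifetime, ignoring self-discharge and solar recharge."""
    return capacity_mah / avg_current_ma

# Assumed figures: 30 mA active, 1 uA sleep, 100 ms awake per 10 s cycle.
i_avg = average_current_ma(30.0, 0.001, 0.1, 10.0)   # ~0.301 mA average
hours = lifetime_hours(1200.0, i_avg)                # on the order of months
```

With these assumed numbers the average draw is dominated by the brief active bursts, which is why both sleep-mode switching and an accurate energy model matter; the solar harvesting unit then tops the battery back up.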

Keywords: ZigBee, Li-ion battery, solar panel, CC2530

Procedia PDF Downloads 374
619 The Strategies to Develop Post-Disaster Multi-Mode Transportation System from the Perspective of Traffic Resilience

Authors: Yuxiao Jiang, Lingjun Meng, Mengyu Zhan, Lichunyi Zhang, Yingxia Yun

Abstract:

On August 8th of 2015, a serious explosion occurred in the Binhai New Area of Tianjin. This explosion led to the suspension of Tianjin-Binhai Light Rail Line 9, an important transportation means connecting the old and new urban areas, and the suspension caused inconvenience to commuters traveling between Tianjin and Binhai and to residents living along Line 9. In this regard, this paper intends to give suggestions on how to develop a multi-mode transportation system rapidly and effectively after a disaster and to tackle the resulting problems in transportation infrastructure facilities. The paper proposes the idea of traffic resilience, which refers to a city's ability to restore its transportation system and reduce risks when the transportation system is destroyed by a disaster. Based on questionnaire research, on-site study, and data collected from the internet, a GIS model is established to analyze the alternative traffic means used by different types of residents and to study transportation supply and demand. The results show that, along Line 9, the demand for alternative traffic means is larger in places nearer to the downtown area. The distribution of bus stations is also more reasonable nearer to the downtown area; however, the traffic speed there is slower. Based on traffic resilience, the paper raises strategies to develop a post-disaster multi-mode transportation system, such as establishing a traffic management mechanism timely and effectively, building multi-mode traffic networks, and improving intelligent traffic systems.

Keywords: traffic resilience, multi-mode transportation system, public traffic, transportation demand

Procedia PDF Downloads 344
618 Modified Silicates as Dissolved Oxygen Sensors in Water: Structural and Optical Properties

Authors: Andile Mkhohlakali, Tien-Chien Jen, James Tshilongo, Happy Mabowa

Abstract:

Among different parameters, oxygen is one of the most important analytes of interest: dissolved oxygen (DO) concentration is crucial and significant for various areas of physical, chemical, and environmental monitoring. Herein we report an oxygen-sensitive luminophore, lanthanum(III) trifluoromethanesulfonate ([La]³⁺), encapsulated into a SiO₂-based xerogel matrix. The nanosensor is composed of organically modified silica nanoparticles doped with the luminescent, oxygen-sensitive lanthanum(III) trifluoromethanesulfonate complex. The precursor materials used for the sensing film were tetraethoxysilane (TEOS) and 3-mercaptopropyltriethoxysilane (MPTMS), used for the SiO₂-based matrices. Brunauer-Emmett-Teller (BET) and BJH analyses indicate that the SiO₂ transformed from microporous to mesoporous upon the addition of the La³⁺ luminophore, with an increased surface area (SBET). The typically amorphous SiO₂-based xerogels were revealed by X-ray diffraction (XRD) and selected area electron diffraction (SAED) analysis. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) showed the porous morphology and the reduced particle size of the SiO₂ and La-SiO₂ xerogels, respectively. The existence of the elements, the siloxane networks, and the thermal stability of the xerogel were confirmed by energy dispersive spectroscopy (EDS), Fourier-transform infrared spectroscopy (FTIR), and thermogravimetric analysis (TGA). UV-Vis spectroscopy and photoluminescence (PL) have been used to characterize the optical properties of the xerogels. La-SiO₂ demonstrates promising characteristic features of an active sensing film for dissolved oxygen in water.

Keywords: sol-gel, ORMOSILs, encapsulation, luminophores quenching, O₂-sensing

Procedia PDF Downloads 122
617 A Corporate Social Responsibility Project to Improve the Democratization of Scientific Education in Brazil

Authors: Denise Levy

Abstract:

Nuclear technology is part of our everyday life, and its beneficial applications help to improve the quality of our lives. Nevertheless, in Brazil, the media and social networks most often tend to associate radiation with nuclear weapons and major accidents, and there is still great misunderstanding about the peaceful applications of nuclear science. The Educational Portal Radioatividades (Radioactivities) is a corporate social responsibility initiative that takes advantage of the growing impact of the Internet to offer high-quality scientific information to teachers and students throughout Brazil. This web-based initiative focuses on the positive applications of nuclear technology, presenting the several contributions of ionizing radiation in different contexts, such as nuclear medicine, agricultural techniques, food safety, and electric power generation, showing nuclear technology as part of modern life and a means to improve the quality of our lifestyle. This educational project aims to contribute to the democratization of scientific education and social inclusion, bringing society closer to scientific knowledge, promoting critical thinking, and inspiring further reflection. The website offers a wide variety of ludic activities such as curiosities, interactive exercises, and short courses. Moreover, teachers are offered free web-based material with full instructions to be developed in class. Since 2013, the project has been developed and improved according to a comprehensive study of the realistic scenario of ICT infrastructure in Brazilian schools and in full compliance with the best national and international e-learning recommendations.

Keywords: information and communication technologies, nuclear technology, science communication, society and education

Procedia PDF Downloads 324
616 Prediction of California Bearing Ratio of a Black Cotton Soil Stabilized with Waste Glass and Eggshell Powder using Artificial Neural Network

Authors: Biruhi Tesfaye, Avinash M. Potdar

Abstract:

The laboratory test process to determine the California bearing ratio (CBR) of black cotton soils is not only expensive but also time-consuming. Hence, advance prediction of CBR plays a significant role, as it is applicable in pavement design. The prediction of the CBR of treated soil was executed with Artificial Neural Networks (ANNs), a computational tool based on the properties of the biological neural system. To observe CBR values, combined eggshell and waste glass powder was added to the soil at 4, 8, 12, and 16% of the weight of the soil samples. Accordingly, the related laboratory tests were conducted to obtain the required best model. The maximum CBR value, 5.8, was found at 8% eggshell-waste glass powder addition. The model was developed using CBR as the output layer variable; CBR was considered as a function of the joint effect of liquid limit, plastic limit, plasticity index, optimum moisture content, and maximum dry density. The best model found was an ANN with 5, 6, and 1 neurons in the input, hidden, and output layers, respectively. The performance of the selected ANN was 0.99996, 4.44E-05, 0.00353, and 0.0067 for the correlation coefficient (R), mean square error (MSE), mean absolute error (MAE), and root mean square error (RMSE), respectively. The research summarized above throws light on the future scope of stabilization with waste glass combined with different percentages of eggshell, which leads to an economical design of CBR acceptable for a pavement sub-base or base.
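The four performance figures reported for the ANN (R, MSE, MAE, RMSE) are standard regression metrics and can be computed directly from predicted and measured CBR values; the CBR values below are hypothetical, purely to show the calculation:

```python
import math

def regression_metrics(y_true, y_pred):
    """Pearson correlation R, MSE, MAE, and RMSE for a set of predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(mse)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    r = cov / math.sqrt(sum((t - mt) ** 2 for t in y_true) *
                        sum((p - mp) ** 2 for p in y_pred))
    return r, mse, mae, rmse

# Hypothetical measured vs. ANN-predicted CBR values (%), for illustration:
y_true = [2.1, 3.4, 4.6, 5.8]
y_pred = [2.0, 3.5, 4.7, 5.7]
r, mse, mae, rmse = regression_metrics(y_true, y_pred)
```

Note that RMSE is simply the square root of MSE, which is consistent with the paper's reported pair (0.0067 ≈ √4.44E-05).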

Keywords: CBR, artificial neural network, liquid limit, plastic limit, maximum dry density, OMC

Procedia PDF Downloads 190
615 A Structured Mechanism for Identifying Political Influencers on Social Media Platforms: Top 10 Saudi Political Twitter Users

Authors: Ahmad Alsolami, Darren Mundy, Manuel Hernandez-Perez

Abstract:

Social media networks, such as Twitter, offer the perfect opportunity to either positively or negatively affect the political attitudes of large audiences. The existence of influential users, who have developed a reputation for their knowledge and experience of specific topics, is a major factor contributing to this impact. Therefore, knowledge of the mechanisms used to identify influential users on social media is vital for understanding their effect on their audience. The concept of the influential user is related to the concept of the opinion leader, which indicates that ideas first flow from mass media to opinion leaders and then to the rest of the population. Hence, the objective of this research was to provide a reliable and accurate structured mechanism to identify influential users, which could be applied to different platforms, places, and subjects. Twitter was selected as the platform of interest, and Saudi Arabia as the context for the investigation. These were selected because Saudi Arabia has a large number of Twitter users, some of whom are considerably active in setting agendas and disseminating ideas. The study considered the scientific methods that have previously been used to identify public opinion leaders, utilizing metrics software on Twitter. The key findings propose multiple novel metrics to compare Twitter influencers, including the number of followers, social authority, and the use of political hashtags, together with four secondary filtering measures. Thus, using ratio and percentage calculations to classify the most influential users, Twitter accounts were filtered, analyzed, and included. The structured approach is used as a mechanism to explore the top ten influencers on Twitter in the political domain in Saudi Arabia.
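The "ratio and percentage calculations" are not spelled out in the abstract. As a minimal sketch only, one could normalize each metric named in the abstract (followers, social authority, political-hashtag use) to the top account and average them; the equal weighting and all account figures below are assumptions:

```python
def influence_scores(accounts):
    """Score accounts by a composite of metrics normalized to the maximum
    (so every component is a ratio in [0, 1]); equal weights are assumed."""
    def norm(values):
        top = max(values)
        return [v / top if top else 0.0 for v in values]
    followers = norm([a["followers"] for a in accounts])
    authority = norm([a["authority"] for a in accounts])
    hashtags = norm([a["political_hashtags"] for a in accounts])
    return [(f + s + h) / 3 for f, s, h in zip(followers, authority, hashtags)]

# Invented accounts for illustration only:
accounts = [
    {"name": "user_a", "followers": 2_000_000, "authority": 80, "political_hashtags": 150},
    {"name": "user_b", "followers": 500_000, "authority": 90, "political_hashtags": 400},
]
scores = influence_scores(accounts)
```

Ranking by `scores` then yields a top-N list to which the study's secondary filtering measures could be applied.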

Keywords: Twitter, influencers, structured mechanism, Saudi Arabia

Procedia PDF Downloads 117
614 The Optimization of TICSI in the Convergence Mechanism of Urban Water Management

Authors: M. Macchiaroli, L. Dolores, V. Pellecchia

Abstract:

With the recent Resolution n. 580/2019/R/idr, the Italian Regulatory Authority for Energy, Networks, and Environment (ARERA) has introduced for Urban Water Management, in the case of water managements characterized by persistent critical issues regarding the planning and organization of the service and the implementation of the interventions necessary to improve infrastructure and management quality, a new mechanism for determining tariffs: the regulatory scheme of Convergence. The aim of this regulatory scheme is to overcome the Water Service Divide in order to improve the stability of local institutional structures, technical quality, and contractual quality, as well as to guarantee transparency for Users of the Service. The Convergence scheme presupposes the identification of the cost items to be considered in the tariff in parametric terms, distinguishing three possible cases according to the type of historical data available to the Manager. The study, in particular, focuses on operations that have neither data on tariff revenues nor data on operating costs. In this case, the Manager's Constraint on Revenues (VRG) is estimated on the basis of a reference benchmark and becomes the starting point for defining the structure of the tariff classes, in compliance with the TICSI provisions (Integrated Text for tariff classes, ARERA's Resolution n. 665/2017/R/idr). The proposed model implements recent studies on optimization models for the definition of tariff classes in compliance with the constraints dictated by the TICSI in the application of the Convergence mechanism, offering itself as a support tool for Managers and the local water regulatory Authority in the decision-making process.

Keywords: decision-making process, economic evaluation of projects, optimizing tools, urban water management, water tariff

Procedia PDF Downloads 118
613 Chaotic Electronic System with Lambda Diode

Authors: George Mahalu

Abstract:

The Chua diode has been configured over time in various ways, using electronic structures such as operational amplifiers (OAs) or devices with gas or semiconductors. When semiconductor devices are used, tunnel diodes (Esaki diodes) are most often considered and, more recently, transistorized configurations such as lambda diodes. The work proposed here uses in the modeling a lambda-diode-type configuration consisting of two Junction Field Effect Transistors (JFETs). The original scheme is created in the MULTISIM electronic simulation environment and is analyzed in order to identify the conditions for the appearance of the evolutionary unpredictability specific to nonlinear dynamic systems with chaos-induced behavior. The chaotic deterministic oscillator is of the autonomous type, a fact that places it in the class of Chua-type oscillators, the only significant and most important difference being the presence of a nonlinear device like the structure mentioned above. The chaotic behavior is identified both by means of strange-attractor-type trajectories visible during the simulation and by highlighting the hypersensitivity of the system to small variations of one of the input parameters. The results obtained through simulation, and the conclusions drawn, are useful in further research into ways of implementing such constructive electronic solutions in theoretical and practical applications: modern small-signal amplification structures, systems for encoding and decoding messages through various modern means of communication, new structures for modern neural networks, and the physical implementation of requirements imposed by current research aimed at practically usable solutions in quantum computing and quantum computers.
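The hypersensitivity criterion used to identify chaos (tiny perturbations of a parameter or initial state producing completely different evolutions) is generic to chaotic systems. The circuit itself requires a SPICE-class simulator such as MULTISIM, but the phenomenon can be sketched with any minimal chaotic system; here the logistic map stands in, purely as an illustration and not as a model of the lambda-diode oscillator:

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a minimal chaotic system
    (r = 4 is in the fully chaotic regime)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Two trajectories whose initial conditions differ by only 1e-9:
seps = [abs(logistic_orbit(0.4, steps=n) - logistic_orbit(0.4 + 1e-9, steps=n))
        for n in range(1, 61)]
# Early on the trajectories track each other closely; a few tens of
# iterations later the separation has grown to order one.
```

The exponential growth of the separation is exactly the sensitive dependence that the simulated oscillator exhibits under small variations of an input parameter.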

Keywords: chaos, lambda diode, strange attractor, nonlinear system

Procedia PDF Downloads 85
612 Analyzing Environmental Emotive Triggers in Terrorist Propaganda

Authors: Travis Morris

Abstract:

The purpose of this study is to measure the intersection of environmental security entities in terrorist propaganda. To the best of the author's knowledge, this is the first study of its kind to examine this intersection within terrorist propaganda. Rosoka natural language processing software and frame analysis are used to advance our understanding of how environmental frames function as emotive triggers. Violent jihadi demagogues use frames to suggest violent and non-violent solutions to their grievances. Emotive triggers are framed in a way that leverages individual and collective attitudes in psychological warfare. A comparative research design is used because of the differences and similarities that exist between two variants of violent jihadi propaganda that target western audiences. Analysis is based on salience and network text analysis, which generates violent jihadi semantic networks. Findings indicate that environmental frames are used as emotive triggers across both data sets, but also as tactical and informational data points. A significant finding is that certain core environmental emotive triggers, such as 'water,' 'soil,' and 'trees,' are significantly salient at the aggregate level across both data sets. All environmental entities can be classified into two categories, symbolic and literal. Importantly, this research illustrates how demagogues use environmental emotive triggers in cyberspace, from a subcultural perspective, to mobilize target audiences to their ideology and praxis. Understanding the anatomy of propaganda construction is necessary in order to generate effective counter-narratives in information operations. This research advances an additional method to inform practitioners and policy makers of how environmental security and propaganda intersect.
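Aggregate-level salience of trigger terms can be sketched as relative frequency over a corpus. This is only a crude stand-in for Rosoka's entity extraction and the study's frame analysis, and the two-sentence corpus is invented:

```python
from collections import Counter
import re

ENV_TRIGGERS = {"water", "soil", "trees"}   # core triggers named in the study

def trigger_salience(documents):
    """Relative frequency of each environmental trigger term across a
    corpus -- a simple proxy for aggregate-level entity salience."""
    counts = Counter()
    total = 0
    for doc in documents:
        tokens = re.findall(r"[a-z]+", doc.lower())
        total += len(tokens)
        counts.update(t for t in tokens if t in ENV_TRIGGERS)
    return {term: counts[term] / total for term in ENV_TRIGGERS}

corpus = ["They poisoned the water and the soil.",
          "Without water the trees die."]
salience = trigger_salience(corpus)
```

Comparing such salience profiles between the two propaganda variants is the kind of aggregate comparison the comparative design calls for.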

Keywords: propaganda analysis, emotive triggers, environmental security, frames

Procedia PDF Downloads 138
611 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear to practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and of the time, and therefore cost, of applying these methods in the drug development decision-making cycle. We observe that the optimal approach varies depending on the dataset and that engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or by deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must prove themselves on these benchmarks to become the practical method of choice in drug property prediction.

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 80
610 Impacts of Hydrologic and Topographic Changes on Water Regime Evolution of Poyang Lake, China

Authors: Feng Huang, Carlos G. Ochoa, Haitao Zhao

Abstract:

Poyang Lake, the largest freshwater lake in China, is located in the middle-lower reaches of the Yangtze River basin. It has great value in socioeconomic development and is internationally recognized as an important lacustrine and wetland ecosystem with abundant biodiversity. Impacted by ongoing climate change and anthropogenic activities, especially the regulation of the Three Gorges Reservoir since 2003, Poyang Lake has experienced significant water regime evolution, resulting in challenges for the management of water resources and the environment. Quantifying the contributions of hydrologic and topographic changes to water regime alteration is necessary for policymakers to design effective adaptation strategies. Long-term hydrologic data were collected, and back-propagation neural networks were constructed to simulate the lake water level. The impacts of hydrologic and topographic changes were differentiated through a scenario analysis that considered pre-impact and post-impact hydrologic and topographic scenarios. The lake water regime was characterized by hydrologic indicators that describe monthly water level fluctuations, hydrologic features during flood and drought seasons, and the frequency and rate of hydrologic variations. The results revealed different contributions of hydrologic and topographic changes to different features of the lake water regime. Noticeable changes were that the water level declined dramatically during the period of reservoir impoundment and that drought was enhanced during the dry season. The hydrologic and topographic changes exerted a synergistic effect or an antagonistic effect on different lake water regime features. The findings provide a scientific reference for lacustrine and wetland ecological protection associated with water regime alterations.
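The scenario analysis amounts to differencing the simulated water levels across combinations of pre- and post-impact hydrology and topography. A minimal sketch of that attribution step (the simulated levels below are invented; in the study they would come from the back-propagation neural network):

```python
def attribute_change(sim):
    """Split a simulated water-level change into hydrologic and topographic
    contributions by scenario differencing; sim maps
    (hydrology, topography) -> mean water level."""
    total = sim[("post", "post")] - sim[("pre", "pre")]
    hydrologic = sim[("post", "pre")] - sim[("pre", "pre")]
    topographic = sim[("pre", "post")] - sim[("pre", "pre")]
    interaction = total - hydrologic - topographic   # synergy/antagonism
    return total, hydrologic, topographic, interaction

# Illustrative mean dry-season water levels in meters (values invented):
levels = {("pre", "pre"): 14.0, ("post", "pre"): 12.8,
          ("pre", "post"): 13.6, ("post", "post"): 12.5}
total, hydro, topo, inter = attribute_change(levels)
```

A nonzero `interaction` term is one way to see the synergistic or antagonistic behavior the abstract describes: the two drivers together do not simply add up.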

Keywords: back-propagation neural network, scenario analysis, water regime, Poyang Lake

Procedia PDF Downloads 139
609 Ultra-Reliable Low Latency V2X Communication for Express Way Using Multiuser Scheduling Algorithm

Authors: Vaishali D. Khairnar

Abstract:

The main aim is to provide low-latency and highly reliable communication facilities for vehicles in the automobile industry; vehicle-to-everything (V2X) communication basically intends to increase expressway road safety and its effectiveness. The Ultra-Reliable Low-Latency Communications (URLLC) algorithm and cellular networks are applied in combination with Mobile Broadband (MBB), particularly in expressway safety-based driving applications. Expressway vehicle drivers will communicate in V2X systems using sixth-generation (6G) communication systems, which have very high-speed mobility features. As a result, we need to determine how to ensure reliable and consistent wireless communication links and improve their quality to increase channel gain, which is becoming a challenge that needs to be addressed. To overcome this challenge, we propose a unique multi-user scheduling algorithm for ultra-massive multiple-input multiple-output (MIMO) systems using 6G. In wideband wireless network access, under both high-traffic and medium-traffic conditions, offering quality-of-service (QoS) to distinct service groups with synchronized contemporaneous traffic on a highway like the Mumbai-Pune expressway becomes a critical problem. Opportunistic MAC (OMAC) proposes communication across a wireless link that can change in space and time and might overcome the above-mentioned challenge. Therefore, a multi-user scheduling algorithm is proposed for MIMO systems using a cross-layered MAC protocol to achieve URLLC and high reliability in V2X communication.
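The opportunistic idea behind OMAC, exploiting a link that varies in space and time, can be sketched in its simplest form: in each slot, grant the channel to the user with the best instantaneous gain (max-SNR scheduling). This is an illustrative baseline, not the paper's cross-layered algorithm, and the gain values are invented:

```python
def opportunistic_schedule(gains_per_slot):
    """Per-slot max-SNR scheduling: in each slot, grant the channel to the
    user with the best instantaneous channel gain."""
    return [max(range(len(slot)), key=slot.__getitem__)
            for slot in gains_per_slot]

# Illustrative channel-gain snapshots for 3 vehicles over 4 slots
# (fading makes a different vehicle "best" in different slots):
gains = [[0.9, 0.3, 0.5],
         [0.2, 1.1, 0.4],
         [0.6, 0.5, 1.3],
         [0.8, 0.7, 0.1]]
schedule = opportunistic_schedule(gains)   # [0, 1, 2, 0]
```

Pure max-SNR maximizes throughput but not fairness or latency bounds; a URLLC-oriented scheduler would additionally weight users by queueing delay and reliability targets.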

Keywords: ultra-reliable low latency communications, vehicle-to-everything communication, multiple-input multiple-output systems, multi-user scheduling algorithm

Procedia PDF Downloads 87
608 The Survival of Bifidobacterium longum in Frozen Yoghurt Ice Cream and Its Properties Affected by Prebiotics (Galacto-Oligosaccharides and Fructo-Oligosaccharides) and Fat Content

Authors: S. Thaiudom, W. Toommuangpak

Abstract:

Yoghurt ice cream (YIC) containing prebiotics and probiotics is increasingly recognized among health-conscious consumers. It not only benefits consumers' health but is also readily accepted for its taste and freshness. However, the survival of probiotics, especially Bifidobacterium longum, which is found in the human gastrointestinal tract and is beneficial to the gut, still needs to be studied under conditions as severe as the whipping and freezing of the ice cream process. Low-fat and full-fat yoghurt ice cream containing 2 and 10% (w/w) fat (LYIC and FYIC), respectively, was produced by mixing 20% yoghurt containing B. longum into milk ice cream mix. Fructo-oligosaccharides (FOS) or galacto-oligosaccharides (GOS) at 0, 1, and 2% (w/w) were separately used as prebiotics in order to improve the survival of B. longum. The survival of this bacterium as a function of ice cream storage time, as well as the ice cream properties, was investigated. The results showed that prebiotics, especially FOS, could improve the viable count of B. longum: the higher the prebiotic concentration, the greater the survival of B. longum. These prebiotics could prolong the survival of B. longum for up to 60 days, with the surviving number remaining at the recommended level (10⁶ CFU per gram). Fat content and prebiotics did not significantly affect the total acidity or the overrun of the samples, but an increase in fat content significantly increased the fat particle size, which might be because partial coalescence occurs in FYIC rather than in LYIC. However, the addition of GOS or FOS could reduce the fat particle size, especially in FYIC. GOS appeared to reduce the hardness of YIC more than FOS. The high fat content (10%) lowered the melting rate of YIC significantly more than 2% fat, since the three-dimensional networks of partially coalesced fat theoretically form more readily in FYIC than in LYIC. However, FOS appeared to retard the melting rate of the ice cream better than GOS. In conclusion, GOS and FOS in YIC with different fat contents can enhance the survival of B. longum and affect the physical and chemical properties of the yoghurt ice cream.

Keywords: Bifidobacterium longum, prebiotic, survival, yoghurt ice cream

Procedia PDF Downloads 160
607 Application of Federated Learning in the Health Care Sector for Malware Detection and Mitigation Using Software-Defined Networking Approach

Authors: A. Dinelka Panagoda, Bathiya Bandara, Chamod Wijetunga, Chathura Malinda, Lakmal Rupasinghe, Chethana Liyanapathirana

Abstract:

This research combines Federated Learning and Software-Defined Networking (SDN) to introduce an efficient malware detection technique and a mitigation mechanism, producing a resilient and automated healthcare network system with extended privacy preservation. With new malware attacks on hospital Integrated Clinical Environments (ICEs) appearing daily, the healthcare industry cannot predict the continuity of its operations. Moreover, the risks that accompany the stream of new medical device inventions and their interconnection are not yet fully understood by most healthcare operators and patients. The proposed solution involves four clients, in the form of hospital networks with different geographical participation, to build up the federated learning experimental architecture and reach the most reasonable accuracy rate with privacy preservation. Logistic regression with cross-entropy loss performs the detection, while SDN is used in the second half of the research to extend the initial development phases of the system with policy-based malware mitigation. The overall evaluation shows a system that achieves accuracy with added privacy, so that traditional centralized systems, which offer almost everything except privacy, are no longer needed.
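The federated set-up can be sketched with logistic regression trained by a cross-entropy gradient on four simulated clients whose raw data never leave them; the synthetic two-feature data below stands in for the malware traffic features:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_step(w, X, y, lr=0.5, epochs=20):
    # Local training on one client: cross-entropy gradient for logistic
    # regression; only the resulting weights are shared, never the data.
    w = w.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

# Four simulated hospital networks with synthetic two-feature data;
# labels (malicious / benign) follow an assumed linear rule.
true_w = np.array([2.0, -1.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(100, 2))
    clients.append((X, (X @ true_w > 0).astype(float)))

w_global = np.zeros(2)
for _ in range(10):   # federated rounds: average the locally updated weights
    w_global = np.mean([local_step(w_global, X, y) for X, y in clients], axis=0)

# Accuracy of the aggregated model over the pooled client data.
X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])
accuracy = float(np.mean((sigmoid(X_all @ w_global) > 0.5) == y_all))
```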

Keywords: software-defined network, federated learning, privacy, integrated clinical environment, decentralized learning, malware detection, malware mitigation

Procedia PDF Downloads 186
606 Enhancing Project Performance Forecasting using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

Accurate forecasting of project performance metrics is crucial for successfully managing and delivering urban road reconstruction projects. Traditional methods often rely on static baseline plans and fail to consider the dynamic nature of project progress and external factors. This research proposes a machine learning-based approach to forecast project performance metrics, such as cost variance and earned value, for each Work Breakdown Structure (WBS) category in an urban road reconstruction project. The proposed model utilizes time series forecasting techniques, including Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) networks, to predict future performance based on historical data and project progress. The model also incorporates external factors, such as weather patterns and resource availability, as features to enhance the accuracy of forecasts. By applying the predictive power of machine learning, the performance forecasting model enables proactive identification of potential deviations from the baseline plan, which allows project managers to take timely corrective actions. The research aims to validate the effectiveness of the proposed approach using a case study of an urban road reconstruction project, comparing the model's forecasts with actual project performance data. The findings of this research contribute to the advancement of project management practices in the construction industry, offering a data-driven solution for improving project performance monitoring and control.
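As a minimal sketch of the forecasting component, a plain autoregressive model fitted by least squares (standing in here for the ARIMA and LSTM models) can predict a synthetic cost-variance series one step ahead; the AR order and the series itself are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cost-variance series for one WBS category: an AR(1) process
# stands in for historical project performance data.
n = 120
cv = np.zeros(n)
for t in range(1, n):
    cv[t] = 0.8 * cv[t - 1] + rng.normal(0.0, 0.1)

p = 2                                              # AR order (an assumption)
X = np.column_stack([cv[p - k - 1:n - k - 1] for k in range(p)])  # lagged values
y = cv[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # least-squares AR fit

mse_ar = float(np.mean((X @ coef - y) ** 2))
mse_naive = float(np.mean((cv[p - 1:n - 1] - y) ** 2))  # "last value" baseline
```

In the same spirit, external factors such as weather patterns or resource availability would enter as additional regression columns.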

Keywords: project performance forecasting, machine learning, time series forecasting, cost variance, earned value management

Procedia PDF Downloads 46
605 Factors Affecting At-Grade Railway Level Crossing Accidents in Bangladesh

Authors: Armana Huq

Abstract:

Railway networks play a significant role in the economy of any country. As with other transportation modes, railway accidents cost many lives through fatalities and injuries. Railway accidents are less frequent than roadway accidents, yet they are more devastating and damaging. Perhaps because of this lower frequency, issues related to railway accidents do not receive significant attention as a major threat. However, the Federal Railroad Administration reported nearly twelve thousand train accidents in the United States alone in 2014, resulting in more than eight hundred fatalities and thousands of injuries, of which nearly one third of the fatalities resulted from railway crossing accidents. An analysis of six years of railway accident data (2005-2010) revealed that 344 collisions occurred in Bangladesh, leaving 200 people dead and 443 people injured. This paper includes a comprehensive overview of the railway safety situation in Bangladesh from 1998 to 2015. On average, eight fatalities are reported each year at at-grade level crossings due to railway accidents in Bangladesh. In this paper, the number of railway accidents that occurred in Bangladesh is presented, and a fatality rate of 58.62% is estimated for at-grade railway level crossing accidents. For this study, railway accident records in Bangladesh for the period 1998 to 2015 were obtained from the police-reported accident database using MAAP (Microcomputer Accident Analysis Package). The major contributing factors to railway accidents were investigated using a multinomial logit model, and hotspot analysis was conducted using ArcGIS. Finally, some suggestions are provided to mitigate these accidents.
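The severity model can be sketched as softmax (multinomial logit) regression; the crossing features, class scores, and labels below are invented for illustration, not the MAAP records:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600
# Invented features: scaled train speed and an unguarded-crossing flag.
X = np.column_stack([rng.uniform(size=n), rng.integers(0, 2, size=n)])
# Toy severity labels (0 = property damage, 1 = injury, 2 = fatal) from
# assumed linear class scores; a real study would fit reported outcomes.
scores = np.column_stack([np.zeros(n),
                          8 * X[:, 0] - 2,
                          20 * X[:, 0] + 4 * X[:, 1] - 12])
y = scores.argmax(axis=1)

K = 3
Xb = np.column_stack([np.ones(n), X])        # add intercept term
W = np.zeros((Xb.shape[1], K))
for _ in range(500):                         # gradient ascent on log-likelihood
    Z = Xb @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)        # softmax class probabilities
    W += 0.5 * Xb.T @ (np.eye(K)[y] - P) / n

accuracy = float(np.mean((Xb @ W).argmax(axis=1) == y))
```

The fitted coefficients in `W` play the role of the model's contributing-factor effects on each severity class.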

Keywords: safety, human factors, multinomial logit model, railway

Procedia PDF Downloads 147
604 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis

Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic

Abstract:

What people say on social media has become a rich source of information for understanding social behavior. Specifically, the growing use of Twitter for political communication creates opportunities to know the opinion of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country, which has led to an increasing body of research on this topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. In contrast, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users' political tendencies. To this end, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative or neutral) and classifying them according to the political party they support. From these individual political tendencies, the global political prediction for each party is calculated. Two different strategies for the sentiment analysis are proposed: one based on Positive and Negative word Matching (PNM) and a second based on a Neural Network Strategy (NNS). The complete TAS strategy has been run in a Big Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy at analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy for obtaining the global trend in a political context made up of multiple parties with an error lower than 23%.
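The PNM strategy can be sketched in a few lines: count lexicon matches per tweet and aggregate the share of positive tweets per party. The lexicons, party names, and tweets below are toy examples, not the study's Spanish corpora:

```python
# Toy sentiment lexicons (the study would use Spanish-language lexicons).
POSITIVE = {"great", "support", "win", "good"}
NEGATIVE = {"bad", "corrupt", "lose", "fail"}

def tweet_sentiment(text):
    # Sentiment = sign of (positive matches - negative matches).
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def party_trend(tweets_by_party):
    # Share of positive tweets per party as a naive global-trend estimate.
    return {party: sum(tweet_sentiment(t) == "positive" for t in tweets) / len(tweets)
            for party, tweets in tweets_by_party.items()}

trend = party_trend({
    "PartyA": ["great win for PartyA", "PartyA will win", "bad day for PartyA"],
    "PartyB": ["PartyB is corrupt", "PartyB will fail"],
})
```

The NNS variant replaces `tweet_sentiment` with a trained classifier; the aggregation step stays the same.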

Keywords: political tendency, prediction, sentiment analysis, Twitter

Procedia PDF Downloads 237
603 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The prediction of significant wave height is of great interest in the field of coastal activities because of the non-linear behavior of wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height measured by the oceanographic wave buoys anchored at Mooloolaba, from the Queensland Government Data. Modeling was performed with a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), using ReLU as the activation function of the MLP. The GA is in charge of optimizing the MLP hyperparameters (learning rate, hidden layers, neurons, and activation functions) and performing wrapper feature selection for the window width. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations to optimize the 5-step-ahead prediction, obtaining 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was also compared with an ARIMA forecasting model and performed better on all performance criteria, validating the potential of this algorithm.
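The GA loop can be sketched as follows; training a real MLP per candidate is replaced by a stand-in fitness function so the example stays fast, while the population size (30) and generation count (8) follow the paper:

```python
import random

random.seed(0)

def fitness(ind):
    # Stand-in for validation RMSE of a trained MLP: an assumed smooth bowl
    # around a "good" configuration (a real run would train and validate).
    lr, neurons, window = ind
    return (100 * (lr - 0.01)) ** 2 + ((neurons - 32) / 64) ** 2 + ((window - 12) / 24) ** 2

def random_ind():
    # Genome: [learning rate, hidden neurons, window width] (illustrative).
    return [random.uniform(1e-4, 0.1), random.randint(4, 128), random.randint(2, 48)]

def mutate(ind):
    child = ind[:]
    i = random.randrange(3)              # resample one gene at random
    child[i] = random_ind()[i]
    return child

pop = [random_ind() for _ in range(30)]  # population size from the paper
initial_best = min(fitness(ind) for ind in pop)
for _ in range(8):                       # generations from the paper
    pop.sort(key=fitness)
    parents = pop[:10]                   # truncation selection (elitist)
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 3)     # one-point crossover
        children.append(mutate(a[:cut] + b[cut:]))
    pop = parents + children

best = min(fitness(ind) for ind in pop)
```

Because the best parents survive each generation, the best fitness never worsens across generations.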

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 106
602 Self-Supervised Attributed Graph Clustering with Dual Contrastive Loss Constraints

Authors: Lijuan Zhou, Mengqi Wu, Changyong Niu

Abstract:

Attributed graph clustering can utilize graph topology and node attributes to uncover hidden community structures and patterns in complex networks, aiding in the understanding and analysis of complex systems. Using contrastive learning for attributed graph clustering can effectively exploit meaningful implicit relationships between data. However, existing attributed graph clustering methods based on contrastive learning suffer from the following drawbacks: 1) complex data augmentation increases computational cost, and inappropriate data augmentation may lead to semantic drift; 2) the selection of positive and negative samples neglects the intrinsic cluster structure learned from graph topology and node attributes. Therefore, this paper proposes a method called self-supervised Attributed Graph Clustering with Dual Contrastive Loss constraints (AGC-DCL). First, Siamese Multilayer Perceptron (MLP) encoders are employed to generate two views separately, avoiding complex data augmentation. Second, a neighborhood contrastive loss is introduced to constrain the node representations using the local topological structure while effectively embedding attribute information through attribute reconstruction. Additionally, a clustering-oriented contrastive loss is applied to exploit the clustering information in the global semantics for discriminative node representations, treating the cluster centers from the two views as negative samples to leverage effective clustering information from different views. Comparative clustering results against existing attributed graph clustering algorithms on six datasets demonstrate the superiority of the proposed method.
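The cross-view contrastive principle behind such losses can be sketched with a plain InfoNCE term (this illustrates the idea, not the AGC-DCL loss itself; embeddings and sizes are toy values):

```python
import numpy as np

rng = np.random.default_rng(0)

def infonce_loss(z1, z2, tau=0.5):
    # Cross-view InfoNCE: the positive for node i in view 1 is node i in
    # view 2; all other nodes in view 2 act as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                        # cosine similarities
    sim -= sim.max(axis=1, keepdims=True)        # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

# Toy embeddings standing in for the two MLP-encoder views of the graph.
n, d = 16, 8
z = rng.normal(size=(n, d))
aligned = infonce_loss(z, z + 0.01 * rng.normal(size=(n, d)))  # views agree
mismatched = infonce_loss(z, z[rng.permutation(n)])            # views disagree
```

The loss is low when the two views agree node-by-node and high when they do not, which is what drives the representations toward consistent clusters.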

Keywords: attributed graph clustering, contrastive learning, clustering-oriented, self-supervised learning

Procedia PDF Downloads 51
601 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel

Authors: F. M. Pisano, M. Ciminello

Abstract:

Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies in terms of maintenance cost reduction and operational life improvements, as well as for safety. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium energy impacts, that can progressively compromise structural integrity. The occurrence of any local change in material properties that can degrade the structure's performance is monitored using so-called Structural Health Monitoring (SHM) systems, which compare the structure's states before and after damage occurs. SHM seeks any "anomalous" response collected by means of sensor networks, which is then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for information extraction about the current health conditions of the structure under investigation. Visual Analytics can support the processing of monitored measurements, offering data navigation and exploration tools that leverage the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system through the integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful Visual Analytics tool is provided to structural analysts for exploring the structural health conditions examined by a Principal Component Analysis-based algorithm.
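The PCA-based detection behind the dashboard can be sketched as follows: fit a principal subspace on healthy baseline readings, then flag measurements with a large reconstruction error. The sensor count, baseline model, and damage signature are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Baseline readings from 6 strain sensors: correlated channels driven by
# 2 latent load factors plus small measurement noise (healthy state).
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 6))
healthy = latent @ mixing + 0.05 * rng.normal(size=(200, 6))

mean = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mean, full_matrices=False)
components = Vt[:2]                        # principal subspace of the baseline

def residual(x):
    # Reconstruction error: distance of a reading from the healthy subspace.
    centered = x - mean
    return float(np.linalg.norm(centered - centered @ components.T @ components))

threshold = max(residual(x) for x in healthy)   # crude control limit
damaged = healthy[0].copy()
damaged[2] += 3.0                          # local anomaly on sensor 3 (simulated BVID)
```

A dashboard would then plot residuals per sensor and over time, with the threshold as the visual alarm line.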

Keywords: interactive dashboards, optical fibers, structural health monitoring, visual analytics

Procedia PDF Downloads 123
600 The Security Trade-Offs in Resource Constrained Nodes for IoT Application

Authors: Sultan Alharby, Nick Harris, Alex Weddell, Jeff Reeve

Abstract:

The concept of the Internet of Things (IoT) has received much attention over the last five years. It is predicted that the IoT will influence every aspect of our lifestyles in the near future. Wireless Sensor Networks (WSNs) are one of the key enablers of the operation of IoTs, allowing data to be collected from the surrounding environment. However, due to limited resources, the nature of deployment and unattended operation, a WSN is vulnerable to various types of attack. Security is paramount for reliable and safe communication between IoT embedded devices, but it comes at a cost to resources. Nodes are usually equipped with small batteries, which makes energy conservation crucial to IoT devices. Nevertheless, the security cost in terms of energy consumption has not been studied sufficiently. Previous research has used the security specification of IEEE 802.15.4 for IoT applications, but the energy cost of each security level and the impact on quality-of-service (QoS) parameters remain unknown. This research focuses on the cost of security at the IoT media access control (MAC) layer. It begins by studying the energy consumption of the IEEE 802.15.4 security levels, followed by an evaluation of the impact of security on data latency and throughput; it then presents the impact of transmission power on security overhead and finally shows the effects of security on memory footprint. The results show that the security overhead in terms of energy consumption with a payload of 24 bytes ranges from 31.5% over non-secure packets at the minimum security level to 60.4% at the top security level of the 802.15.4 specification. They also show that the security cost has less impact at longer packet lengths and more at smaller packet sizes. In addition, the results show a significant impact on data latency and throughput: overall, the maximum authentication length decreases throughput by almost 53%, and encryption and authentication together by almost 62%.

Keywords: energy consumption, IEEE 802.15.4, IoT security, security cost evaluation

Procedia PDF Downloads 168
599 Retail of Organic Food in Poland

Authors: Joanna Smoluk-Sikorska, Władysława Łuczka

Abstract:

Organic farming is an important element of sustainable agriculture. It has been developing very dynamically in Poland, especially since Poland's accession to the EU. Nevertheless, a properly functioning organic market is a necessary condition for the development of organic agriculture. Despite significant improvement, this market in Poland is still in the initial stage of growth. An important element of the market is distribution, especially retail, which offers a specified product range to consumers. Therefore, there is a need to investigate the retail outlets offering organic food in order to improve the functioning of this part of the market. The inquiry research, conducted between 2011 and 2012 in three types of outlets offering organic food in the 8 largest Polish cities, shows that the majority of outlets offer cereals, processed fruit and vegetables, and spices, while the fewest offer meat and sausages. The distributors mostly indicate the unsatisfactory product range of suppliers as the reason for this situation. The main providers of the outlets are wholesalers, particularly in the case of processed products, while fresh products come mainly from organic farms. A very important distribution obstacle is the dispersion of producers, which generates high transportation costs and, consequently, high prices for organics. In the investigated shops, the most frequently used price calculation method is cost-based. The majority of the groceries and specialist shops apply margins between 21 and 40%. The margin in specialist outlets is the highest, reflecting their qualified service and advice. In turn, most retail networks declare a margin between 0 and 20%, consistent with the low-price strategy applied in these shops. Gaps in the product range of organics and, in particular, high prices keep demand rather low. Therefore, there is a need to support certain market actions, e.g. on-farm processing or promotion.

Keywords: organic food, retail, product range, supply sources

Procedia PDF Downloads 297
598 Cyber Security and Risk Assessment of the e-Banking Services

Authors: Aisha F. Bushager

Abstract:

Today we are more exposed than ever to cyber threats and attacks at the personal, community, organizational, national, and international levels. More aspects of our lives operate on computer networks simply because we are living in the fifth domain, called Cyberspace. One of the most sensitive areas vulnerable to cyber threats and attacks is Electronic Banking (e-Banking), where the banking sector provides online banking services to its clients. To obtain clients' trust and encourage them to use e-Banking, and to maintain the services provided by banks and ensure safety, cyber security and risk control should be given high priority in the e-Banking area. The aim of the study is to carry out a risk assessment of e-Banking services and determine the cyber threats, cyber attacks, and vulnerabilities facing the e-Banking area, specifically in the Kingdom of Bahrain. To collect relevant data, structured interviews were conducted with e-Banking experts in different banks. The collected data were then used as input to the risk management framework provided by the National Institute of Standards and Technology (NIST), which was the model used in the study to assess the risks associated with e-Banking services. The findings of the study showed that the most common cyber threats are human errors, technical software or hardware failures, and hackers, while the most common attacks facing the e-Banking sector are phishing, malware, and denial-of-service. The risks associated with e-Banking services were around the moderate level; however, more controls and countermeasures must be applied to maintain this level of risk. The results of the study will help banks discover their vulnerabilities and maintain their online services; in addition, they will enhance cyber security and contribute to the management and control of the risks facing the e-Banking sector.
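A NIST-style qualitative scoring step can be sketched as likelihood times impact mapped to a level; the threat ratings below are invented for illustration, not the interview data:

```python
# Risk level bands for a 5x5 likelihood-impact matrix (illustrative bands).
LEVELS = [(1, 6, "low"), (7, 14, "moderate"), (15, 25, "high")]

def risk_level(likelihood, impact):
    # Each factor is rated 1 (rare / minor) .. 5 (certain / severe).
    score = likelihood * impact
    for lo, hi, label in LEVELS:
        if lo <= score <= hi:
            return label

# Hypothetical ratings for the attack types named in the study.
threats = {"phishing": (4, 3), "malware": (3, 4), "denial-of-service": (2, 4)}
assessment = {name: risk_level(l, i) for name, (l, i) in threats.items()}
```

With these illustrative ratings, every threat lands in the moderate band, mirroring the study's overall finding.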

Keywords: cyber security, e-banking, risk assessment, threats identification

Procedia PDF Downloads 349
597 Preserving Urban Cultural Heritage with Deep Learning: Color Planning for Japanese Merchant Towns

Authors: Dongqi Li, Yunjia Huang, Tomo Inoue, Kohei Inoue

Abstract:

Urban cultural heritage is facing the impact and destruction of modernization and urbanization. Many historical areas are losing their historical information and regional cultural characteristics, so it is necessary to carry out systematic color planning for historical areas as part of their conservation. Japan focused on urban color planning early and has developed a systematic approach to it. Hence, this paper selects five merchant towns from the category of important traditional building preservation areas in Japan as its subject, exploring the color structure and emotion of this type of historic area. First, an image semantic segmentation method identifies the buildings, roads, and landscape environments, and their color data are extracted for color composition and emotion analysis to summarize common features. Second, keywords are extracted from collected Internet evaluations using natural language processing. Correlation analysis between the color structure and the keywords provides a valuable reference for conservation decisions in these historic areas. This paper also combines the color structure and Internet evaluation results with generative adversarial networks to generate predicted images of color structure improvements and color improvement schemes. The methods and conclusions of this paper can provide new ideas for the digital management of environmental colors in historic districts and a valuable reference for the inheritance of local traditional culture.
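The color-composition step can be sketched with a small k-means over segmented pixels; the pixel data, palette, and cluster count below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "building" pixels drawn around two assumed facade colors (RGB).
palette = np.array([[170.0, 120.0, 80.0], [60.0, 60.0, 70.0]])
pixels = np.vstack([c + rng.normal(0.0, 5.0, size=(300, 3)) for c in palette])

k = 2
centers = np.array([pixels[0], pixels[-1]])   # one seed per color mode (toy init)
for _ in range(20):                           # Lloyd's k-means iterations
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)             # assign pixels to nearest center
    centers = np.array([pixels[labels == j].mean(axis=0) for j in range(k)])

# Worst-case distance between a true palette color and its nearest center.
match_error = max(min(float(np.linalg.norm(c - p)) for c in centers) for p in palette)
```

The recovered `centers` play the role of the dominant facade colors fed into the composition and emotion analysis.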

Keywords: historic districts, color planning, semantic segmentation, natural language processing

Procedia PDF Downloads 87
596 Construction and Optimization of Green Infrastructure Network in Mountainous Counties Based on Morphological Spatial Pattern Analysis and Minimum Cumulative Resistance Models: A Case Study of Shapingba District, Chongqing

Authors: Yuning Guan

Abstract:

Under the background of rapid urbanization, mountainous counties need to break through mountain barriers for urban expansion due to their undulating topography, resulting in ecological problems such as landscape fragmentation and reduced biodiversity. Green infrastructure networks are constructed to alleviate the contradiction between urban expansion and ecological protection, promoting the healthy and sustainable development of urban ecosystems. This study applies the Morphological Spatial Pattern Analysis (MSPA) model, the Minimum Cumulative Resistance (MCR) model and the Linkage Mapper tools to identify eco-sources and eco-corridors in the Shapingba District of Chongqing, and combines landscape connectivity assessment with circuit theory to grade corridor importance and extract ecological pinch points along the corridors. The results show that: (1) 20 ecological sources are identified, with a total area of 126.47 km², accounting for 31.88% of the study area and showing a pattern of ‘one core, three corridors, multi-point distribution’. (2) 37 ecological corridors are formed in the area, with a total length of 62.52 km, in a ‘more in the west, less in the east’ pattern. (3) 42 ecological pinch points are extracted, accounting for 25.85% of the length of the corridors, mainly distributed in the eastern new area. Accordingly, this study proposes optimization strategies: sub-area protection of ecological sources, graded construction of ecological corridors, and precise restoration of ecological pinch points.
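The MCR idea can be sketched as a least-cost accumulation over a resistance surface (here via Dijkstra's algorithm on a toy grid, not the Shapingba land-cover data):

```python
import heapq

# Toy resistance surface: low values (e.g., forest) are easy to cross,
# high values (e.g., built-up land) are costly.
GRID = [
    [1, 1, 8, 8],
    [8, 1, 1, 8],
    [8, 8, 1, 1],
    [8, 8, 8, 1],
]

def min_cumulative_resistance(grid, source, target):
    # Dijkstra over 4-connected cells; cost of a path = sum of cell resistances.
    rows, cols = len(grid), len(grid[0])
    dist = {source: grid[source[0]][source[1]]}
    heap = [(dist[source], source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == target:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Cumulative resistance from an eco-source cell to a target cell; the
# low-resistance cells along the way trace the corridor.
cost = min_cumulative_resistance(GRID, (0, 0), (3, 3))
```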

Keywords: green infrastructure network, morphological spatial pattern, minimal cumulative resistance, mountainous counties, circuit theory, shapingba district

Procedia PDF Downloads 42
595 Security Issues on Smart Grid and Blockchain-Based Secure Smart Energy Management Systems

Authors: Surah Aldakhl, Dafer Alali, Mohamed Zohdy

Abstract:

The next generation of electricity grid infrastructure, known as the "smart grid," integrates smart information and communication technology (ICT) into existing grids in order to alleviate the drawbacks of existing one-way grid systems. The Smart Grid is anticipated to significantly increase the efficiency and dependability of future power systems, especially given the demand for renewable energy sources. The security of the Smart Grid's cyber infrastructure is a growing concern, though, as a result of the interconnection of significant power plants through communication networks. Cyber-attacks can destroy energy data, leak grid members' personal information, and result in serious incidents such as massive outages and the destruction of power network infrastructure. We therefore propose a Blockchain-based secure smart energy management system as a remedy for this problem. The inclusion of optical fiber sensors and blockchain technology in smart grids may transform the power transmission and distribution system. While optical fiber sensors allow real-time monitoring and management of electrical energy flow, Blockchain offers a secure platform to safeguard the smart grid against cyberattacks and unauthorized access. Additionally, this integration makes it possible to see how energy is produced, distributed, and used in real time, increasing transparency. This strategy has advantages in terms of improved security, efficiency, dependability, and flexibility in energy management. An in-depth analysis of the advantages and drawbacks of combining blockchain technology with optical fiber is provided in this paper.
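The tamper-evidence property that blockchain brings to smart-grid energy records can be sketched with a minimal hash chain (this illustrates the principle only, not a full consensus protocol):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, reading_kwh):
    # Each new block commits to the previous block's hash.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "reading_kwh": reading_kwh})

def chain_valid(chain):
    # Recompute every link; any altered block breaks a later link.
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
for kwh in [12.4, 11.9, 13.1]:   # hypothetical meter readings
    append_block(chain, kwh)

ok_before = chain_valid(chain)
chain[1]["reading_kwh"] = 99.9   # tamper with a stored reading
ok_after = chain_valid(chain)
```

Altering a stored reading invalidates every subsequent link, which is what makes tampering with recorded energy data detectable.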

Keywords: smart grids, blockchain, fiber optic sensor, security

Procedia PDF Downloads 118