Search results for: periphery stakeholder network
2978 Prediction of Road Accidents in Qatar by 2022
Authors: M. Abou-Amouna, A. Radwan, L. Al-kuwari, A. Hammuda, K. Al-Khalifa
Abstract:
There is growing concern over the increasing incidence of road accidents and the consequent loss of human life in Qatar. In light of the major event planned in Qatar, the 2022 World Cup, the country should take future deaths caused by road accidents into consideration, and past trends should be examined to give a reasonable picture of what may happen in the future. Qatar's roads should be arranged and paved so that they can accommodate the high population expected at that time, since there will be a huge number of visitors from around the world. Qatar should also consider the road accident risks arising in that period and plan to maintain high-level safety strategies. Based on the increase in the number of road accidents in Qatar from 1995 until 2012, the elements affecting and causing road accidents are studied. This paper aims to identify and critique the factors that have a strong effect on causing road accidents in the State of Qatar and to predict the total number of road accidents in Qatar in 2022. Alternative methods are discussed, and the most applicable ones according to previous research are selected for further study. The methods that suit the existing case in Qatar were the multiple linear regression model (MLR) and the artificial neural network (ANN). These methods are analyzed and their findings compared. We conclude that by using MLR the number of accidents in 2022 will reach 355,226, and by using ANN 216,264. We conclude that MLR gave better results than ANN because the artificial neural network does not fit data with a large range of variation.
Keywords: road safety, prediction, accident, model, Qatar
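The comparison below is a minimal, hedged sketch of the two selected methods: a multiple linear regression and a small neural network fitted to yearly accident counts, then extrapolated to a target year. The predictor variables and all figures are invented for illustration and are not the paper's data.

```python
# Illustrative sketch (not the paper's data): MLR vs. a small ANN for
# accident-count prediction, extrapolated to 2022.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

years = np.arange(1995, 2013)
# Hypothetical predictors: year, registered vehicles (x1000), population (x1000)
X = np.column_stack([years, 300 + 40 * (years - 1995), 500 + 90 * (years - 1995)])
y = 20000 + 9000 * (years - 1995) + np.random.default_rng(0).normal(0, 5000, len(years))

scaler = StandardScaler().fit(X)
mlr = LinearRegression().fit(scaler.transform(X), y)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                   random_state=0).fit(scaler.transform(X), y)

X_2022 = scaler.transform([[2022, 300 + 40 * 27, 500 + 90 * 27]])
print("MLR forecast for 2022:", mlr.predict(X_2022)[0])
print("ANN forecast for 2022:", ann.predict(X_2022)[0])
```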
Procedia PDF Downloads 258
2977 Artificial Intelligence Approach to Water Treatment Processes: Case Study of Daspoort Treatment Plant, South Africa
Authors: Olumuyiwa Ojo, Masengo Ilunga
Abstract:
The artificial neural network (ANN) has broken the bounds of conventional programming, which is essentially a garbage-in, garbage-out process, through its ability to mimic the human brain. Its ability to adopt, adapt, adjust, evaluate, learn and recognize the relationships, behavior, and patterns of a series of data sets administered to it is modeled on human reasoning and learning mechanisms. Thus, the study aimed at modeling the wastewater treatment process in order to accurately diagnose water control problems for effective treatment. A staged ANN model development and evaluation methodology was employed: a source data analysis stage involving statistical analysis of the data used in modeling, followed by a model development stage in which candidate ANN architectures were developed and then evaluated using a historical data set. The model was developed using historical data obtained from the Daspoort Wastewater Treatment Plant, South Africa. The resulting design dimensions and model for the wastewater treatment plant provided good results. Parameters considered were temperature, pH value, colour, turbidity, amount of solids and acidity, as well as total hardness, Ca hardness, Mg hardness, and chloride. This enables the ANN to handle and represent more complex problems than conventional programming is capable of addressing.
Keywords: ANN, artificial neural network, wastewater treatment, model, development
Procedia PDF Downloads 149
2976 Using Crowd-Sourced Data to Assess Safety in Developing Countries: The Case Study of Eastern Cairo, Egypt
Authors: Mahmoud Ahmed Farrag, Ali Zain Elabdeen Heikal, Mohamed Shawky Ahmed, Ahmed Osama Amer
Abstract:
Crowd-sourced data refers to data that is collected and shared by a large number of individuals or organizations, often through the use of digital technologies such as mobile devices and social media. The shortage of crash data collection in developing countries makes it difficult to fully understand and address road safety issues in these regions. In developing countries, crowd-sourced data can be a valuable tool for improving road safety, particularly in urban areas where the majority of road crashes occur. This study is, to the best of our knowledge, the first to develop safety performance functions using crowd-sourced data by adopting a negative binomial structure model and a Full Bayes model to investigate traffic safety for urban road networks and provide insights into the impact of roadway characteristics. Furthermore, as part of the safety management process, network screening was carried out by applying two different methods to rank the most hazardous road segments: the PCR method (adopted in the Highway Capacity Manual, HCM) and a graphical method using GIS tools, in order to compare and validate the results. Lastly, recommendations are suggested for policymakers to ensure safer roads.
Keywords: crowdsourced data, road crashes, safety performance functions, Full Bayes models, network screening
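The sketch below illustrates, with synthetic data, the kind of negative binomial safety performance function described: crash counts modeled against segment length and traffic volume, with the fitted frequencies then used to rank segments. The covariates and coefficients are assumptions, not the study's values.

```python
# Minimal sketch of a negative binomial SPF on illustrative data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
length_km = rng.uniform(0.2, 3.0, n)          # segment length
aadt = rng.uniform(5_000, 60_000, n)          # annual average daily traffic
mu = np.exp(-6.0 + 1.0 * np.log(length_km) + 0.8 * np.log(aadt))
crashes = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # synthetic counts

X = sm.add_constant(np.column_stack([np.log(length_km), np.log(aadt)]))
spf = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(spf.summary())

# Predicted crash frequency can then be used to rank segments (network screening).
expected = spf.predict(X)
worst = np.argsort(expected)[::-1][:10]
print("Ten highest-risk segments (by predicted frequency):", worst)
```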
Procedia PDF Downloads 52
2975 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method
Authors: Lee Yan Nian
Abstract:
Safety investigation differs from administrative investigation in that the former is conducted by an independent agency, and the purpose of such an investigation is to prevent future accidents, not to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority. A characteristic of this kind of investigation is that enforcement actions, such as administrative penalties, are usually imposed on the persons or units involved in the occurrence. On October 21, 2018, following a Taiwan Railway accident that caused 18 fatalities and injured another 267, it was quickly decided to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway and highway occurrences. The objective of this study is to assess the effectiveness of safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the classification of railway occurrences investigated by the TTSB, accident types of Taiwan railway occurrences can be categorized into derailment, fire, signal passed at danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to the Human Factors Analysis and Classification System (HFACS) terminologies, except for "Technical Events", which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the causal factors identified in the CFAS. In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from the number of times they appear in the investigation reports. The conditional probability table of each parent node is determined from domain experts' experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward prediction and backward diagnostic capabilities. Finally, the established Bayesian network for derailment is assessed using investigation reports of the same accident, which was investigated by the TTSB and by the local supervisory authority respectively. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors. Safety investigation can identify not only the unsafe acts of individuals but also the in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigation and administrative investigation. Therefore, effective intervention strategies in the associated areas can be better addressed for safety improvement and future accident prevention through safety investigation.
Keywords: administrative investigation, Bayesian network, causal factor analysis system, safety investigation
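A toy illustration of the forward-prediction and backward-diagnosis idea follows, using a two-node chain (organizational influence leading to an unsafe act leading to derailment). All probabilities are invented for illustration and are not taken from the TTSB reports.

```python
# Forward prediction and backward diagnosis on a tiny Bayesian network.
import itertools

p_org = {1: 0.30, 0: 0.70}                      # prior: organizational influence present
p_act_given_org = {1: {1: 0.60, 0: 0.40},       # P(unsafe act | org influence)
                   0: {1: 0.20, 0: 0.80}}
p_der_given_act = {1: {1: 0.50, 0: 0.50},       # P(derailment | unsafe act)
                   0: {1: 0.05, 0: 0.95}}

def joint(org, act, der):
    return p_org[org] * p_act_given_org[org][act] * p_der_given_act[act][der]

# Forward prediction: marginal probability of a derailment
p_derailment = sum(joint(o, a, 1) for o, a in itertools.product([0, 1], repeat=2))

# Backward diagnosis: P(organizational influence | derailment observed)
p_org_given_der = sum(joint(1, a, 1) for a in [0, 1]) / p_derailment

print(f"P(derailment) = {p_derailment:.3f}")
print(f"P(org influence | derailment) = {p_org_given_der:.3f}")
```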
Procedia PDF Downloads 123
2974 A Complex Network Approach to Structural Inequality of Educational Deprivation
Authors: Harvey Sanchez-Restrepo, Jorge Louca
Abstract:
Equity and education are a major focus of government policies around the world due to their relevance for addressing the Sustainable Development Goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador for 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalence, with 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity classes show clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386 while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES given by R = –0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that obtained the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile. The model also found the factors which explain deprivation through the highest PageRank and the greatest degree of connectivity for the first decile; they are: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, book access, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles can be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.
Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics
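The snippet below sketches the calibration step described: the two-parameter logistic (2PL) IRT model gives the probability of a correct response, and ability estimates can be re-scaled into a learning index. The item parameters, ability value, and the 0-1000 scaling are illustrative assumptions, not the study's values.

```python
# 2PL IRT response probability and a simple learning-index re-scaling (illustrative).
import numpy as np

def p_correct(theta, a, b):
    """2PL: P(correct | ability theta, discrimination a, difficulty b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item bank and a student's ability estimate
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.7, 1.2])
theta_hat = 0.3

print("Expected score over the item bank:", p_correct(theta_hat, a, b).sum())

# Re-scale abilities in [-3, 3] to a 0-1000 learning index (an assumed convention)
def learning_index(theta, lo=-3.0, hi=3.0):
    return 1000.0 * (np.clip(theta, lo, hi) - lo) / (hi - lo)

print("Learning index:", learning_index(theta_hat))
```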
Procedia PDF Downloads 124
2973 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is considered as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, i.e., subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these lines, then it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
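A hedged sketch of the corrector pipeline follows: centre the measurement set, whiten it keeping components by the Kaiser rule, then fit a linear hyperplane that flags likely errors. The data is synthetic, and a single logistic-regression hyperplane stands in for the paper's cluster-wise correctors.

```python
# Corrector sketch on synthetic feature vectors from a legacy classifier:
# M = correct predictions, Y = known errors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
M = rng.normal(0.0, 1.0, size=(500, 20))          # features of correct predictions
Y = rng.normal(0.8, 1.0, size=(40, 20))           # features of known errors

S = np.vstack([M, Y])
S_centered = S - S.mean(axis=0)

# PCA via eigen-decomposition of the covariance; the Kaiser rule keeps
# eigenvalues above the mean eigenvalue.
cov = np.cov(S_centered, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
keep = eigval > eigval.mean()
W = eigvec[:, keep] / np.sqrt(eigval[keep])        # whitening projection

Z = S_centered @ W
labels = np.r_[np.zeros(len(M)), np.ones(len(Y))]  # 1 = error

hyperplane = LogisticRegression(max_iter=1000).fit(Z, labels)

def is_error(x_new):
    z = (x_new - S.mean(axis=0)) @ W
    return hyperplane.predict(z.reshape(1, -1))[0] == 1

print("Flagged as error:", is_error(rng.normal(0.8, 1.0, 20)))
```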
Procedia PDF Downloads 100
2972 Reservoir Inflow Prediction for Pump Station Using Upstream Sewer Depth Data
Authors: Osung Im, Neha Yadav, Eui Hoon Lee, Joong Hoon Kim
Abstract:
The artificial neural network (ANN) approach is commonly used in many fields for forecasting. In water resources engineering, forecasting the water level or inflow of a reservoir is useful for various purposes. Due to the advantages of ANNs, many papers have been written on inflow prediction in river networks, but in this study, the ANN is applied to urban sewer networks. The growth of severe rainstorms in Korea has increased flood damage severely, and the precipitation distribution is becoming more erratic. Therefore, effective pump operation at pump stations is an essential task for flood damage reduction in urban areas. If the real-time inflow of a pump station reservoir can be predicted, it is possible to operate the pumps effectively to reduce flood damage. This study used ANN models for pump station reservoir inflow prediction using upstream sewer depth data. For this study, rainfall events, sewer depth, and inflow into the Banpo pump station reservoir between 2013 and 2014 were considered. Feed-Forward Back Propagation (FFBP), Cascade-Forward Back Propagation (CFBP), Elman Back Propagation (EBP) and Nonlinear Autoregressive Exogenous (NARX) networks were used as the ANN models for prediction. A comparison of the results suggests that the ANN is a powerful tool for inflow prediction using sewer depth data.
Keywords: artificial neural network, forecasting, reservoir inflow, sewer depth
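The sketch below shows, on synthetic data, the general idea of predicting reservoir inflow from lagged upstream sewer-depth readings with a feed-forward network in the spirit of the FFBP variant. The lag count, network size, and the delayed-response relationship are arbitrary assumptions.

```python
# Illustrative inflow prediction from lagged sewer-depth features.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
sewer_depth = 1.0 + 0.5 * np.sin(t / 30.0) + rng.normal(0, 0.05, t.size)
inflow = np.roll(sewer_depth, 5) * 3.0 + rng.normal(0, 0.1, t.size)  # delayed response

LAGS = 6
X = np.column_stack([np.roll(sewer_depth, k) for k in range(1, LAGS + 1)])[LAGS:]
y = inflow[LAGS:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
print("Test R^2:", model.score(X[split:], y[split:]))
```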
Procedia PDF Downloads 317
2971 Pathway to Sustainable Shipping: Electric Ships
Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang
Abstract:
Maritime transport plays an important role in global economic development but also inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is the electric ship. Electric ships are at an early stage of development. Considering their special characteristics, i.e., the limited travel range, the service network needs to be re-designed carefully to guarantee the efficient operation of electric ships. This research designs a cost-efficient and environmentally friendly service network for electric ships, including the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing the total cost, comprising the charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario where all the transportation tasks are fulfilled by electric ships, and a conventional ship scenario where all the transportation tasks are fulfilled by fuel oil ships. Results reveal that the total cost of using electric ships is only 42.8% of that of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but will require 2.78% more time to fulfill all the transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. Implications from the results are as follows: 1) it is necessary to equip the ship with a large-capacity battery when the number of charging stations is low; 2) battery capacity will influence the number of ships deployed on each route; 3) increasing battery capacity will make the electric ship more cost-effective; 4) charging speed does not affect the charging amount and the location of charging stations, but will influence the schedule of ships on each route; 5) there exists an optimal volume capacity, at which all costs and the total delivery time are lowest; 6) the service time limit will influence the ship schedule and ship cost.
Keywords: cost reduction, electric ship, environmental protection, sustainable shipping
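A toy mixed-integer sketch of the modelling idea is given below: choose which ports get charging stations and how many ships to deploy per route, minimizing station plus ship costs. It is not the paper's formulation; the PuLP library, the range constraint, and all numbers are assumptions.

```python
# Tiny MILP sketch: charging-station location plus ship deployment.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

ports = ["A", "B", "C"]
routes = {"A-B": ["A", "B"], "B-C": ["B", "C"]}
demand_ships = {"A-B": 2, "B-C": 3}      # ships needed to cover each route
station_cost = {"A": 100, "B": 120, "C": 90}
ship_fixed_cost = 50

prob = LpProblem("electric_ship_network", LpMinimize)
build = {p: LpVariable(f"build_{p}", cat=LpBinary) for p in ports}
ships = {r: LpVariable(f"ships_{r}", lowBound=0, cat=LpInteger) for r in routes}

prob += lpSum(station_cost[p] * build[p] for p in ports) \
      + lpSum(ship_fixed_cost * ships[r] for r in routes)

for r, need in demand_ships.items():
    prob += ships[r] >= need                              # cover transport demand
    for p in routes[r]:
        prob += build[p] >= 1                             # limited range: charge at both ends

prob.solve()
print({p: build[p].value() for p in ports}, {r: ships[r].value() for r in routes})
```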
Procedia PDF Downloads 78
2970 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application
Authors: Jui-Chien Hsieh
Abstract:
Background: 12-lead electrocardiography (ECG) is one of the frequently used tools to detect atrial fibrillation (AF), which might degenerate into life-threatening stroke, in clinical practice. In this study, AF detection by the clinically used 12-lead ECG device has a positive predictive value (PPV) of only 0.73-0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Owing to progress in artificial intelligence (AI), we developed an ECG deep learning model that has the ability to recognize AF patterns and reduce false-positive errors. Methods: In this study, (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset; the ECG reports were interpreted by 2 senior cardiologists, who confirmed that the precision of AF detection by the ECG device is 0.73; (2) 88 12-lead ECG reports whose computer interpretation generated by the ECG device was AF were used as the test dataset; the cardiologists confirmed that 68 of the 88 reports were AF and the others were not, so the precision of AF detection by the ECG device is about 0.77; (3) a parallel 4-layer one-dimensional convolutional neural network (CNN) was developed to identify AF based on limb-lead ECGs and chest-lead ECGs. Results: The results indicated that this model has better performance on AF detection than the traditional computer interpretation of the ECG device in the 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model improves the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network
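A hedged sketch of a parallel 1-D CNN in the spirit described follows: one branch for limb leads, one for chest leads, merged before the AF/non-AF output. The Keras framework, the layer sizes, the 5000-sample signal length, and the 6/6 lead split are assumptions rather than the paper's exact architecture.

```python
# Two-branch 1-D CNN sketch for AF detection (architecture details assumed).
import tensorflow as tf
from tensorflow.keras import layers, Model

def branch(inputs):
    x = inputs
    for filters in (16, 32, 64, 64):            # 4 convolutional layers
        x = layers.Conv1D(filters, kernel_size=7, activation="relu", padding="same")(x)
        x = layers.MaxPooling1D(pool_size=4)(x)
    return layers.GlobalAveragePooling1D()(x)

limb_in = layers.Input(shape=(5000, 6), name="limb_leads")
chest_in = layers.Input(shape=(5000, 6), name="chest_leads")
merged = layers.concatenate([branch(limb_in), branch(chest_in)])
out = layers.Dense(1, activation="sigmoid")(layers.Dense(32, activation="relu")(merged))

model = Model(inputs=[limb_in, chest_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```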
Procedia PDF Downloads 114
2969 The Happiness Pulse: A Measure of Individual Wellbeing at a City Scale, Development and Validation
Authors: Rosemary Hiscock, Clive Sabel, David Manley, Sam Wren-Lewis
Abstract:
As part of the Happy City Index Project, Happy City has developed a survey instrument to measure experienced wellbeing: how people are feeling and functioning in their everyday lives. The survey instrument, called the Happiness Pulse, was developed in partnership with the New Economics Foundation (NEF) with the dual aim of collecting citywide wellbeing data and engaging individuals and communities in the measurement and promotion of their own wellbeing. The survey domains and items were selected through a review of the academic literature and a stakeholder engagement process, including local policymakers, community organisations and individuals. The Happiness Pulse was included in the Bristol pilot of the Happy City Index (n = 722). The experienced wellbeing items were subjected to factor analysis. A reduced set of items to be included in a revised scale for future data collection was again entered into a factor analysis. These revised factors were tested for reliability and validity. Among the items to be included in the revised scale, three factors emerged: Be, Do and Connect. The Be factor had good reliability and good convergent and criterion validity. The Do factor had good discriminant validity. The Connect factor had adequate reliability and good discriminant and criterion validity. Some age, gender and socioeconomic differentiation was found. The properties of a new scale to measure experienced wellbeing, intended for use by municipal authorities, are described. Happiness Pulse data can be combined with local data on wellbeing conditions to determine what matters for people's wellbeing across a city and why.
Keywords: city wellbeing, community wellbeing, engaging individuals and communities, measuring wellbeing and happiness
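A minimal sketch of the reliability check mentioned (Cronbach's alpha) is shown below on simulated Likert-style responses for a single factor; the item set and data are illustrative only.

```python
# Cronbach's alpha for a set of scale items (illustrative data).
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
responses = np.clip(np.round(3 + latent + rng.normal(0, 0.8, (300, 4))), 1, 5)
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```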
Procedia PDF Downloads 261
2968 A Knowledge-Based Development of Risk Management Approaches for Construction Projects
Authors: Masoud Ghahvechi Pour
Abstract:
Risk management is a systematic and regular process of identifying, analyzing and responding to risks throughout the project's life cycle in order to achieve the optimal level of elimination, reduction or control of risk. The purpose of project risk management is to increase the probability and effect of positive events and reduce the probability and effect of unpleasant events on the project. Risk management is one of the most fundamental parts of project management, such that unmanaged or untransferred risks can be one of the primary factors of failure in a project. Effective risk management does not mean simply avoiding risk, which is apparently the cheapest option. The main problem with this option is its economic sensitivity, because what is potentially profitable is by definition risky, and what does not pose a risk is economically uninteresting and does not bring tangible benefits. Therefore, in relation to the implemented project, effective risk management means finding a 'middle ground': on the one hand, protection against risk by means of accurate identification and classification of risk, which leads to a comprehensive analysis; on the other hand, management using all mathematical and analytical tools, based on assessing the maximum benefit of these decisions. A detailed analysis, taking into account all aspects of the company, including stakeholder analysis, will allow us to add to effective risk management what will become tangible benefits for our project in the future. Identifying project risk is based on determining which types of risk may affect the project, and also refers to specific parameters and estimating the probability of their occurrence in the project. These conditions can be divided into three groups: certainty, uncertainty, and risk, which in turn support three types of investment attitude: risk preference, risk neutrality, and risk aversion, and their measurement. The result of risk identification and project analysis is a list of events that indicates the cause and probability of an event, and a final assessment of its impact on the environment.
Keywords: risk, management, knowledge, risk management
Procedia PDF Downloads 66
2967 A Novel Harmonic Compensation Algorithm for High Speed Drives
Authors: Lakdar Sadi-Haddad
Abstract:
The past few years have seen a resurgence of interest in the study of very high speed electrical drives. An inventory of the number of scientific papers and patents dealing with the subject confirms its relevance. In fact, the democratization of magnetic bearing technology is at the origin of recent developments in high speed applications. The main advantage of these machines is a much higher power density than the state of the art. Nevertheless, particular attention should be paid to the design of the inverter as well as to control and command. The surface-mounted permanent magnet synchronous machine is the most appropriate technology to address high speed issues. However, it has the drawback of using a carbon sleeve to contain the magnets, which could otherwise tear apart because of the centrifugal forces generated at the rotor periphery. Carbon fiber is well known for its mechanical properties but has poor heat conduction. This results in very poor evacuation of the eddy current losses induced in the magnets by time and space stator harmonics. The three-phase inverter is the main harmonic source causing eddy currents in the magnets. In high speed applications such harmonics are harmful because, on the one hand, the characteristic impedance is very low and, on the other hand, the ratio between the switching frequency and that of the fundamental is much lower than in the state of the art. To minimize the impact of these harmonics, a first lever is to use a modulation strategy producing low harmonic distortion, while the second is to introduce a sine filter between the inverter and the machine to smooth the voltage and current waveforms applied to the machine. Nevertheless, in very high speed machines the interaction of the processes mentioned above may introduce particular harmonics that can irreversibly damage the system: harmonics at the resonant frequency, harmonics at the shaft mode frequency, subharmonics, etc. Some studies address these issues but treat these phenomena with separate solutions (specific modulation strategies, active damping methods, etc.). The purpose of this paper is to present a completely new active harmonic compensation algorithm based on an improvement of standard vector control as a global solution to all these issues. The presentation is based on a complete theoretical analysis of the processes leading to the generation of such undesired harmonics. A state of the art of available solutions is then provided before developing the content of the new active harmonic compensation algorithm. The study is completed by a validation using simulations and a practical case on a high speed machine.
Keywords: active harmonic compensation, eddy current losses, high speed machine
Procedia PDF Downloads 395
2966 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant
Authors: John K. Avor, Choong-Koo Chang
Abstract:
The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters; thus transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical Class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of artificial neural networks and fuzzy systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability
Procedia PDF Downloads 171
2965 Resilience of Infrastructure Networks: Maintenance of Bridges in Mountainous Environments
Authors: Lorenza Abbracciavento, Valerio De Biagi
Abstract:
Infrastructures are key elements for ensuring the operational functionality of the transport system. The collapse of a single bridge or, equivalently, a tunnel can lead an entire motorway to be considered completely inaccessible. As a consequence, the paralysis of the communications network causes several important drawbacks for the community. Recent events have demonstrated that ensuring the functional continuity of strategic infrastructures during and after a catastrophic event makes a significant difference in terms of loss of life and economic losses. Moreover, it has been observed that RC structures located in mountain environments show a worse state of conservation compared to structures of the same typology and age located in temperate climates. Because of its morphology, in fact, the mountain environment is particularly exposed to severe collapse and deterioration phenomena, generally natural hazards, e.g., rock falls, and meteorological hazards, e.g., freeze-thaw cycles or heavy snow. For these reasons, deep investigation of the characteristics of these processes becomes of fundamental importance in providing smart and sustainable solutions and making the infrastructure system more resilient. In this paper, the design of a monitoring system for mountainous environments is presented and analyzed in its parts. The method not only takes into account the peculiar climatic conditions, but is also integrated with and interacts with the surrounding environment.
Keywords: structural health monitoring, resilience of bridges, mountain infrastructures, infrastructural network, maintenance
Procedia PDF Downloads 77
2964 Optimizing University Administration in a Globalized World: Leveraging AI and ICT for Enhanced Governance and Sustainability in Higher Education
Authors: Ikechukwu Ogeze Ukeje, Chinyere Ori Elom, Chukwudum Collins Umoke
Abstract:
This study explores the challenges in integrating Artificial Intelligence (AI) and Information and Communication Technology (ICT) practices to enhance governance and sustainable solution modeling in higher education, focusing on Alex Ekwueme Federal University Ndufu-Alike (AE-FUNAI), Nigeria. In the context of a developing country like Nigeria, leveraging AI and ICT tools presents a unique opportunity to improve teaching, learning, administrative processes, and governance. The research aims to evaluate how AI and ICT technologies can contribute to sustainable educational practices, enhance decision-making processes, and improve engagement among key stakeholders: students, lecturers, and administrative staff. Students are involved to provide insights into their interactions with AI and ICT tools, particularly in learning and participation in governance. Lecturers' perspectives will offer a view of how these technologies influence teaching, research, and curriculum development. Administrative staff will provide a crucial understanding of how AI and ICT tools can streamline operations, support data-driven governance, and enhance institutional efficiency. This study will use a mixed-method approach to collect both qualitative and quantitative data. The findings of this study are geared towards shaping the future of education in Nigeria and beyond by developing an Inclusive AI-governance Integration Framework (I-AIGiF) for enhanced performance in the system. By examining the roles of these stakeholder groups, this research could guide the development of policies for more effective AI and ICT integration, leading to sustainable educational innovation and governance.
Keywords: university administration, AI, higher education governance, education sustainability, ICT challenges
Procedia PDF Downloads 21
2963 Translation Quality Assessment in Fansubbed English-Chinese Swearwords: A Corpus-Based Study of the Big Bang Theory
Authors: Qihang Jiang
Abstract:
Fansubbing, a blend of 'fan' and 'subtitling', is one of the main branches of Audiovisual Translation (AVT) and has kindled increasing interest among researchers in the AVT field in recent decades. In particular, the quality of so-called non-professional translation seems questionable due to the non-transparent qualifications of subtitlers in a huge community network. This paper attempts to figure out how YYeTs, a.k.a. 'ZiMuZu', the largest fansubbing group in China, translates swearwords from English to Chinese for its fans of the popular American sitcom The Big Bang Theory, taking cultural, social and political elements into account in the context of China. By building a bilingual corpus containing both the source and target texts, this paper found that most of the original swearwords were translated in a toned-down manner, probably due to Chinese audiences' cultural and social network features as well as the strict censorship of the Chinese government. Additionally, House's (2015) newly revised model of Translation Quality Assessment (TQA) was applied and examined. Results revealed that most of the subtitled swearwords achieved their pragmatic functions and exerted a communicative effect on audiences. In conclusion, this paper enriches the empirical research concerning House's new TQA model, gives a full picture of the subtitling of swearwords in the AVT field and provides a practical guide for practitioners in their subtitling careers.
Keywords: corpus-based approach, fansubbing, pragmatic functions, swearwords, translation quality assessment
Procedia PDF Downloads 143
2962 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor
Authors: Panupong Makvichian
Abstract:
The Global Navigation Satellite System (GNSS) is nowadays a common technology that improves navigation functions in our lives. Additionally, GNSS is now also being employed as an accurate atmospheric sensor. Meteorology is a practical application of GNSS that goes unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks. In addition, careful attention to mitigating various error sources is required. All the above data are combined in a sophisticated mathematical algorithm. At this point, the research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow water content information to be extracted at any location within the network area. All of the above is possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and for acquiring further knowledge about the atmospheric water content.
Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor
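A hedged sketch of the ZTD-to-PWV conversion chain described follows: subtract a Saastamoinen-type hydrostatic delay from the ZTD, then scale the remaining wet delay by the dimensionless conversion factor. The constants follow commonly used values from the GNSS meteorology literature (e.g., Bevis et al.), and the input values are illustrative.

```python
# ZTD -> ZWD -> PWV conversion sketch (constants and inputs are assumed typical values).
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay in metres."""
    f = 1 - 0.00266 * math.cos(2 * math.radians(lat_deg)) - 0.00000028 * height_m
    return 0.0022768 * pressure_hpa / f

def pwv_from_ztd(ztd_m, pressure_hpa, temp_k, lat_deg, height_m):
    zwd = ztd_m - zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m)
    tm = 70.2 + 0.72 * temp_k                 # mean temperature of water vapor (Bevis)
    k2p, k3 = 22.1, 3.739e5                   # refractivity constants [K/hPa], [K^2/hPa]
    rho_w, r_v = 1000.0, 461.5                # water density, water vapor gas constant
    pi_factor = 1e6 / (rho_w * r_v * (k3 / tm + k2p) * 1e-2)   # 1e-2: per hPa -> per Pa
    return pi_factor * zwd                    # metres of precipitable water

pwv_m = pwv_from_ztd(ztd_m=2.45, pressure_hpa=1005.0, temp_k=298.0,
                     lat_deg=14.0, height_m=50.0)
print(f"PWV is approximately {pwv_m * 1000:.1f} mm")
```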
Procedia PDF Downloads 198
2961 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches
Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.
Abstract:
A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test here, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in the generation of responses or the acquisition of excitation pulses may lead to control-loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available on the market may be the best solutions for such simulations, but they pose limitations such as the availability of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed for making the time-critical application run with the highest priority in an uninterrupted manner, reducing network latency for distributed architectures, and handling real-time data acquisition, data storage and retrieval, user interactions, etc.
Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency
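One possible realization of the highest-priority, uninterrupted-loop strategy on a stock Linux kernel is sketched below: pin the loop to a dedicated CPU and run it under the SCHED_FIFO real-time policy. This is an assumption about how such a strategy could look, not the paper's implementation; it requires root privileges, Linux, and a machine where the chosen CPU is actually free.

```python
# Minimal Linux sketch (requires root): real-time priority plus CPU affinity
# for a periodic control loop with deadline monitoring.
import os
import time

os.sched_setaffinity(0, {3})                                 # pin to CPU 3 (assumed free)
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))  # high real-time priority

PERIOD_S = 0.001                                             # 1 kHz control loop
next_deadline = time.monotonic()
for _ in range(1000):
    # ... read sensors, compute control output, drive actuators ...
    next_deadline += PERIOD_S
    delay = next_deadline - time.monotonic()
    if delay > 0:
        time.sleep(delay)
    else:
        print("deadline missed by", -delay, "s")             # a miss shows up as delay <= 0
```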
Procedia PDF Downloads 147
2960 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance in many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; they aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient with respect to the image and updates the image once based on the gradient to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack; this adversarial attack is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples, we can explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
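The snippet below sketches the FGSM attack as described: one signed-gradient step of size epsilon on the input image. PyTorch is assumed as the framework, and the toy model and random inputs are placeholders rather than the paper's MNIST setup.

```python
# FGSM attack sketch: x_adv = clip(x + eps * sign(grad_x(loss))).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.1):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()      # step across the decision boundary
        x_adv = x_adv.clamp(0.0, 1.0)                # keep a valid image
    return x_adv.detach()

# Example with a toy model and random "MNIST-like" inputs
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x, y, eps=0.25)
print("max perturbation:", (x_adv - x).abs().max().item())
```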
Procedia PDF Downloads 195
2959 Grey Wolf Optimization Technique for Predictive Analysis of Products in E-Commerce: An Adaptive Approach
Authors: Shital Suresh Borse, Vijayalaxmi Kadroli
Abstract:
E-commerce industries nowadays implement the latest AI and ML techniques to improve their performance and prediction accuracy. This helps them gain a huge profit from the online market. Ant Colony Optimization, the Genetic Algorithm, Particle Swarm Optimization, neural networks and GWO help many e-commerce industries upgrade their predictive performance. These algorithms provide optimum results in various applications, such as stock price prediction, prediction of drug-target interactions, and prediction of user ratings of similar products on e-commerce sites. In this study, customer reviews play an important role in prediction analysis. People show much interest in buying services and products suggested by other customers, which ultimately increases net profit. In this work, a convolutional neural network (CNN) is proposed, which is further useful for optimizing the prediction accuracy of an e-commerce website. This method shows that the CNN is used to optimize the hyperparameters of the GWO algorithm using an appropriate coding scheme. The accuracy of the model results is verified by comparing them to PSO results, whose hyperparameters have been optimized by the CNN, on Amazon's customer review dataset. The experimental outcome shows that the proposed system using the GWO algorithm achieves superior performance in terms of accuracy, precision, recall, etc. in prediction analysis compared to the existing systems.
Keywords: prediction analysis, e-commerce, machine learning, grey wolf optimization, particle swarm optimization, CNN
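A compact sketch of the grey wolf optimizer's core update follows (the alpha, beta and delta wolves guide the pack while the control coefficient decreases linearly). It is applied here to a toy objective; in the study's setting the objective would instead be a model's validation error over its hyperparameters.

```python
# Grey wolf optimizer core loop on a toy objective (illustrative only).
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iter                         # linearly decreasing coefficient
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3.0, lo, hi)
    best = min(wolves, key=objective)
    return best, objective(best)

best, value = gwo(lambda w: np.sum((w - 0.7) ** 2), dim=3, bounds=(0.0, 1.0))
print("best position:", best, "objective:", value)
```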
Procedia PDF Downloads 113
2958 Protective Role of Autophagy Challenging the Stresses of Type 2 Diabetes and Dyslipidemia
Authors: Tanima Chatterjee, Maitree Bhattacharyya
Abstract:
The global challenge of type 2 diabetes mellitus is a major health concern in this millennium, and researchers are continuously exploring new targets to develop novel therapeutic strategies. Type 2 diabetes mellitus (T2DM) is often coupled with dyslipidemia, increasing the risk of cardiovascular (CVD) complications. Enhanced oxidative and nitrosative stresses appear to be the major risk factors underlying insulin resistance, dyslipidemia, β-cell dysfunction, and T2DM pathogenesis. Autophagy emerges as a promising defense mechanism against stress-mediated cell damage, regulating tissue homeostasis, cellular quality control, and energy production, and promoting cell survival. In this study, we have attempted to explore the pivotal role of autophagy in T2DM subjects with or without dyslipidemia in peripheral blood mononuclear cells and insulin-resistant HepG2 cells, utilizing a flow cytometric platform, confocal microscopy, and molecular biology techniques such as western blotting, immunofluorescence, and real-time polymerase chain reaction. In the case of T2DM with dyslipidemia, a higher population of autophagy-positive cells was detected compared to patients with T2DM only, which might have resulted from higher stress. Autophagy was observed to be triggered by both oxidative and nitrosative stress, revealing a novel finding of our research. LC3 puncta were observed in peripheral blood mononuclear cells and at the periphery of HepG2 cells under diabetic and diabetic-dyslipidemic conditions. Increased expression of ATG5, LC3B, and Beclin supports the autophagic pathway in both PBMCs and insulin-resistant HepG2 cells. Upon blocking autophagy with 3-methyladenine (3MA), the apoptotic cell population increased significantly, as observed by caspase-3 cleavage and reduced expression of Bcl2. Autophagy has also been shown to control the oxidative stress-mediated up-regulation of inflammatory markers like IL-6 and TNF-α. To conclude, this study shows that autophagy plays a protective role in the case of diabetes mellitus with dyslipidemia. In the present scenario, this study is expected to have a significant impact on the development of a new therapeutic strategy for diabetic-dyslipidemic subjects based on enhancing autophagic activity.
Keywords: autophagy, apoptosis, dyslipidemia, reactive oxygen species, reactive nitrogen species, Type 2 diabetes
Procedia PDF Downloads 129
2957 Upgrades for Hydric Supply in Water System Distribution: Use of the Bayesian Network and Technical Expedients
Authors: Elena Carcano, James Ball
Abstract:
This work details the strategies adopted by Italian water utilities for the distribution of water in emergency conditions, which range from earthquakes and droughts to floods and fires. Several water bureaus located across the national territory have been interviewed, and the collected information has been used in a database of potential interventions to be taken. The work discusses the actions adopted by water utilities. These are generally prioritized in order to minimize the social, temporal, and economic burden that the damaged and nearby areas have to bear. Actions are defined by relying on the Bayesian network approach, which constitutes the core of any decision support system. The Bayesian networks give answers on interventions for real and most likely risk cases. The added value of this research consists in supplying the national bureau in charge of managing catastrophic situations, namely the Protezione Civile, with a univocal plot outline, so as to be able to handle actions uniformly despite different local laws or contradictory customs which would otherwise squander recovery conditions, proper technical service, and economic aid. The paper is organized as follows: section 1 states the introduction; section 2 provides a brief discussion of Bayesian networks (BNNs); section 3 introduces the adopted methodology; and in the last sections, results are presented and conclusions are drawn.
Keywords: hierarchical process, strategic plan, water emergency conditions, water supply
Procedia PDF Downloads 160
2956 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A sensor network (SN) can be considered as an operation with two phases: (1) observation/measuring, which means the accumulation of the gathered data at each sensor node; and (2) transferring the collected data to a processing center (e.g., fusion servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components perfectly defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components that are used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that we can improve the design activity for complex systems based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via a models-and-simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 177
2955 Pedestrian Areas, Development Stimulus in Urban Old Fabrics; Analyzing Stroget, Pedestrian Street in Copenhagen
Authors: Kiomars Habibi, Mostafa Behzadfar, Airin Jaberi
Abstract:
Designing appropriate places for the comfort of pedestrians is one of the most important aspects of modern urbanization and a stimulus for the renovation and rehabilitation of old urban fabrics. Cities designed for pedestrians, with a complete network of car-free streets, can be considered among the best habitations in the world. The number of such cities, with networks of streets and squares in which beauty, enjoyment and comfort for pedestrians are the main concerns, is increasing around the world; examples include Stockholm, Copenhagen, Munich, Frankfurt, Venice, and Rome. In this paper, we explain the influential factors behind the efficiency of these cities by examining one of the most important pedestrian ways in the world: Strøget, a car-free zone in Copenhagen, Denmark. This popular tourist attraction in the center of town is the longest pedestrian shopping area in Europe. Analyses indicate that worldwide experience concerning the renovation and rehabilitation of old fabrics shows many advantages in exploiting the idea of the pedestrian way for the regeneration of old fabrics. Transforming streets into appropriate places for the comfort of pedestrians, expanding public spaces such as city squares, and decreasing building mass alongside them, together with the resulting comfort and peace, are the main reasons for the success of the Strøget pedestrian street in the old urban fabric of Copenhagen. Hypothesis: the Strøget pedestrian street has been the development stimulus in Copenhagen and, as a result, has driven the development of the old urban fabric.
Keywords: development, stimulus, pedestrian street, urban landscape, Stroget
Procedia PDF Downloads 107
2954 Investigations of the Crude Oil Distillation Preheat Section in Unit 100 of Abadan Refinery and Its Recommendation
Authors: Mahdi GoharRokhi, Mohammad H. Ruhipour, Mohammad R. ZamaniZadeh, Mohsen Maleki, Yusef Shamsayi, Mahdi FarhaniNejad, Farzad FarrokhZadeh
Abstract:
Possessing massive resources of natural gas and petroleum, Iran has a special place among oil producing countries, according to international energy institutions. In order to use these resources, the development and operational optimization of refineries and industrial units are mandatory. The heat exchanger is one of the most important and strategic pieces of equipment, whose key role in the production process is clear to everyone. For instance, if the temperature of a process fluid is not set as needed by the heat exchangers, the specifications of the desired product can change profoundly. Crude oil enters a network of heat exchangers in the atmospheric distillation section before entering the distillation tower; in this case, well-functioning heat exchangers can significantly affect the operation of the distillation tower. In this paper, different scenarios for the preheating of oil are studied using oil and gas simulation software, and the results are discussed. Having reviewed various scenarios, adding a heat exchanger to the preheat network is proposed as the most effective factor in improving all governing parameters of the tower, i.e., temperature, pressure, and reflux rate. This exchanger is embedded in the crude oil path: crude oil enters the exchanger after E-101 and exchanges heat with the kerosene pump-around discharge from E-136. As shown by the results, it efficiently assists in improving process operation and reducing side expenses.
Keywords: atmospheric distillation unit, heat exchanger, preheat, simulation
Procedia PDF Downloads 660
2953 Digital Twin for Retail Store Security
Authors: Rishi Agarwal
Abstract:
Digital twins are emerging as a strong technology used to imitate and monitor physical objects digitally in real time across sectors. They deal not only with the digital space but also actuate responses in the physical space based on digital-space processing such as storage, modeling, learning, simulation, and prediction. This paper explores the application of digital twins to enhancing physical security in retail stores. The retail sector still relies on outdated physical security practices like manual monitoring and metal detectors, which are insufficient for modern needs. There is a lack of real-time data and system integration, leading to ineffective emergency response and preventative measures. As retail automation increases, new digital frameworks must manage safety without human intervention. To address this, the paper proposes implementing an intelligent digital twin framework. This framework collects diverse data streams from in-store sensors, surveillance, external sources, and customer devices; advanced analytics and simulations then enable real-time monitoring, incident prediction, automated emergency procedures, and stakeholder coordination. Overall, the digital twin improves physical security through automation, adaptability, and comprehensive data sharing. The paper also analyzes the pros and cons of implementing this technology through an Emerging Technology Analysis Canvas, which examines different aspects of the technology through both narrow and wide lenses to help decision makers decide whether to implement it. On a broader scale, this showcases the value of digital twins in transforming legacy systems across sectors and how data sharing can create a safer world for both retail store customers and owners.
Keywords: digital twin, retail store safety, digital twin in retail, digital twin for physical safety
Procedia PDF Downloads 72
2952 Reducing Hazardous Materials Releases from Railroad Freights through Dynamic Trip Plan Policy
Authors: Omar A. Abuobidalla, Mingyuan Chen, Satyaveer S. Chauhan
Abstract:
Railroad transportation of hazardous materials freight is important to the North American economy and supports the national supply chain. This paper introduces several extensions of the dynamic hazardous materials trip plan problem. The problem captures most of the operational features of a real-world railroad transportation system that dynamically initiates a set of blocks and assigns each shipment to a single block path or to multiple block paths. The dynamic hazardous materials trip plan policies have a distinguishing feature: they integrate the blocking plan and the block activation decisions. We also present a non-linear mixed integer programming formulation for each variant and present managerial insights based on a hypothetical railroad network. The computational results reveal that the dynamic car scheduling policies are not only able to take advantage of the capacity of the network but are also capable of diminishing population and environmental risks by rerouting the active blocks along the least risky train services without sacrificing the cost advantage of the railroad. The empirical results of this research illustrate that the issue of integrating the blocking plan and the train makeup of hazardous materials freight must receive closer attention. Keywords: dynamic car scheduling, planning and scheduling hazardous materials freights, airborne hazardous materials, Gaussian plume model, integrated blocking and routing plans, box model
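The paper's formulation is a non-linear mixed integer program with blocking and block-activation decisions; as a deliberately loose illustration of the underlying cost-versus-risk trade-off when choosing among candidate block paths, the sketch below scores enumerated paths with a weighted objective. Path names, costs, and risk indices are invented, and the full model in the paper is far richer.

```python
# Toy illustration of the cost-vs-risk trade-off when assigning a hazmat shipment
# to one of several candidate block paths. All numbers are hypothetical.

candidate_paths = {
    # path name: (transport cost, population/environmental risk index)
    "A-B-C": (100.0, 8.0),
    "A-D-C": (120.0, 3.0),
    "A-E-F-C": (150.0, 1.5),
}

def best_path(paths, risk_weight):
    """Pick the path minimizing cost + risk_weight * risk."""
    return min(paths.items(), key=lambda kv: kv[1][0] + risk_weight * kv[1][1])

for w in (0.0, 10.0, 40.0):
    name, (cost, risk) = best_path(candidate_paths, w)
    print(f"risk weight {w:>5}: choose {name} (cost={cost}, risk={risk})")
```

As the risk weight grows, the chosen path shifts from the cheapest route to progressively less risky ones, mirroring the rerouting behavior the abstract attributes to the dynamic policies.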
Procedia PDF Downloads 205
2951 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils
Authors: Muqdad Al-Juboori, Bithin Datta
Abstract:
Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the seepage process, many approximation theories have been developed; however, applying these theories results in noticeable errors. The analytical solution, which involves a difficult conformal mapping procedure, can be applied only to simple and symmetrical problems. Therefore, the objectives of this paper are to: 1) develop a surrogate model, based on data numerically simulated with the SEEPW software, to approximate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model, based on the developed surrogate model, describing the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design for concrete gravity dams. Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis
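As a hedged sketch of the linked simulation-optimization idea (not the authors' actual model), the code below substitutes a simple analytical surrogate for the trained ANN and uses a bare-bones genetic algorithm to search for a dam base width and cutoff depth that minimize cost while penalizing excessive seepage. The surrogate formula, bounds, unit costs, and penalty weights are assumptions for illustration.

```python
# Bare-bones GA over a stand-in surrogate model of under-dam seepage.
# The surrogate, bounds, costs, and penalties are illustrative assumptions only.
import random

random.seed(0)
BOUNDS = [(20.0, 80.0), (2.0, 15.0)]   # (base width m, cutoff wall depth m)
SEEPAGE_LIMIT = 0.004                   # assumed allowable seepage (m3/s per m run)

def surrogate_seepage(width, cutoff):
    """Stand-in for the trained ANN: seepage falls as width and cutoff depth grow."""
    return 0.02 / (1.0 + 0.05 * width + 0.3 * cutoff)

def fitness(ind):
    width, cutoff = ind
    cost = 1.0 * width + 4.0 * cutoff                               # assumed unit costs
    penalty = max(0.0, surrogate_seepage(width, cutoff) - SEEPAGE_LIMIT) * 1e5
    return cost + penalty                                           # lower is better

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind):
    return [min(hi, max(lo, g + random.gauss(0.0, 0.1 * (hi - lo))))
            for g, (lo, hi) in zip(ind, BOUNDS)]

pop = [random_individual() for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                                              # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        child = [random.choice(genes) for genes in zip(a, b)]       # uniform crossover
        children.append(mutate(child))
    pop = parents + children

best = min(pop, key=fitness)
print(f"best design: width={best[0]:.1f} m, cutoff={best[1]:.1f} m, objective={fitness(best):.1f}")
```

In the paper the surrogate is an ANN trained on numerical seepage simulations, so evaluating a candidate design is cheap enough for the GA to explore many designs; the toy surrogate above simply stands in for that trained model.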
Procedia PDF Downloads 224
2950 An Evaluative Microbiological Risk Assessment of Drinking Water Supply in the Carpathian Region: Identification of Occurrent Hazardous Bacteria with Quantitative Microbial Risk Assessment Method
Authors: Anikó Kaluzsa
Abstract:
The author aims to introduce and analyze the microbiological safety hazards that indicate the presence of secondary contamination in the water supply system. Since drinking water is a primary foodstuff and a basic condition of life, special attention should be paid to its quality. Among the microbiological features found in water, there are indicators whose presence is clear evidence of contamination; when they are detected, no further diagnostics are needed, because they adequately prove the contamination of the given water supply section. Laboratory analysis can help, both technologically and temporally, to identify contamination, but it matters how long removal takes and whether disinfection is carried out in time. Identifying the factors that frequently occur in the same places, or whose chance of occurrence is above average, facilitates this work. With the help of several features, the pathogen microbiological risk assessment determines the microbiological hazards most likely to occur in the Carpathian basin. Among all the microbiological indicators recommended by the World Health Organization as targets for routine inspection, the appearance of Escherichia coli in the water network is of paramount importance, as its presence indicates the potential presence of enteric pathogens or other contaminants in the water network. In addition, the author presents the steps of microbiological risk assessment, analyzing the pathogenic micro-organisms registered as the most critical. Keywords: drinking water, E. coli, microbiological indicators, risk assessment, water safety plan
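To illustrate the quantitative side of the QMRA method named in the title, the sketch below applies the standard exponential dose-response model, P_inf = 1 - exp(-r * dose), and converts a daily infection probability into an annual risk. The pathogen infectivity, concentration, and consumption values are hypothetical placeholders, not results from the study.

```python
# Exponential dose-response sketch for a QMRA screening calculation.
# Parameter values are illustrative assumptions, not data from the paper.
import math

def daily_infection_probability(conc_per_litre, litres_per_day, r):
    """P_inf = 1 - exp(-r * dose), with dose = concentration * daily consumption."""
    dose = conc_per_litre * litres_per_day
    return 1.0 - math.exp(-r * dose)

def annual_risk(p_daily, exposure_days=365):
    """Probability of at least one infection over the exposure period."""
    return 1.0 - (1.0 - p_daily) ** exposure_days

p_day = daily_infection_probability(conc_per_litre=0.01,  # organisms/L (assumed)
                                    litres_per_day=1.5,   # untreated water consumed
                                    r=0.05)               # assumed pathogen infectivity
print(f"daily infection probability:  {p_day:.2e}")
print(f"annual infection probability: {annual_risk(p_day):.2e}")
```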
Procedia PDF Downloads 333
2949 Duality of Leagility and Governance: A New Normal Demand Network Management Paradigm under Pandemic
Authors: Jacky Hau
Abstract:
The prevalence of emerging technologies disrupts various industries as well as consumer behavior. Data collection is now at our fingertips through connected Internet-of-Things (IoT) devices. Big data analytics (BDA) becomes possible and allows real-time demand network management (DNM) through a leagile supply chain. To further enhance resilience and predictability, governance is examined as a means of promoting supply chain transparency and trust in an efficient manner. Leagility combines lean thinking and agile techniques in supply chain management. It aims at reducing costs and waste, as well as maintaining responsiveness to volatile consumer demand, by adjusting the decoupling point where the product flow changes from push to pull. Leagility can only succeed when a collaborative planning, forecasting, and replenishment (CPFR) process, or the like, is in place throughout the supply chain business entities. Supply chain governance and procurement, however, are crucial and challenging for the execution of CPFR, as every entity has to walk the talk generously for the sake of the overall benefits of supply chain performance, not to mention the complexity of exercising the policies both within and across the various supply chain business entities on account of organizational behavior and mutual trust. Empirical survey results showed that the effective demand forecasting timespan has been shortening drastically, from a planning horizon of months to one of weeks; thus agility should come first, preferably followed by a lean approach in a timely manner. Keywords: governance, leagility, procure-to-pay, source-to-contract
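As a rough, hypothetical illustration of the decoupling-point idea (forecast-driven push upstream, order-driven pull downstream), the sketch below places the decoupling point at the last supply chain stage whose forecast error is still within a tolerance. Stage names, error figures, and the tolerance are invented for illustration and are not from the survey reported in the paper.

```python
# Toy decoupling-point selection: push (forecast-driven, lean) while forecast error
# stays within tolerance, pull (order-driven, agile) beyond that point.
# Stages and error values are hypothetical.

stages = ["raw material", "fabrication", "assembly", "packaging", "distribution"]
forecast_error = {          # assumed forecast error (MAPE) at each stage's horizon
    "raw material": 0.08,
    "fabrication": 0.12,
    "assembly": 0.22,
    "packaging": 0.35,
    "distribution": 0.50,
}
ERROR_TOLERANCE = 0.20      # beyond this, demand is too volatile to plan on forecast

def decoupling_point(stages, errors, tolerance):
    """Return the last stage that can still be run lean (push) on forecast."""
    point = None
    for stage in stages:
        if errors[stage] <= tolerance:
            point = stage
        else:
            break
    return point

dp = decoupling_point(stages, forecast_error, ERROR_TOLERANCE)
dp_index = stages.index(dp) if dp is not None else -1
for i, stage in enumerate(stages):
    mode = "push (lean, forecast-driven)" if i <= dp_index else "pull (agile, order-driven)"
    print(f"{stage:<13} error={forecast_error[stage]:.2f}  ->  {mode}")
print(f"decoupling point: after '{dp}'")
```

Shortening effective forecast horizons, as reported in the abstract, corresponds in this toy model to rising error values, which push the decoupling point upstream and enlarge the agile, order-driven portion of the chain.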
Procedia PDF Downloads 111