Search results for: network distributed diagnosis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8310

5220 The Postcognitivist Era in Cognitive Psychology

Authors: C. Jameke

Abstract:

During the cognitivist era in cognitive psychology, a theory of internal rules and symbolic representations was posited as an account of human cognition. This type of cognitive architecture had its heyday during the 1970s and 80s, but it has now been largely abandoned in favour of subsymbolic architectures (e.g. connectionism), non-representational frameworks (e.g. dynamical systems theory), and statistical approaches such as Bayesian theory. In this presentation I describe this changing landscape of research and comment on the increasing influence of neuroscience on cognitive psychology. I then briefly review a few recent developments in connectionism and neurocomputation relevant to cognitive psychology, and critically discuss the assumption made by some researchers in these frameworks that higher-level aspects of human cognition are simply emergent properties of massive, distributed neural networks.

Keywords: connectionism, emergentism, postcognitivist, representations, subsymbolic architecture

Procedia PDF Downloads 578
5219 A Novel Approach to Design and Implement Context Aware Mobile Phone

Authors: G. S. Thyagaraju, U. P. Kulkarni

Abstract:

Context-aware computing refers to a general class of computing systems that can sense their physical environment and adapt their behaviour accordingly. Context-aware computing makes systems aware of situations of interest, enhances services to users, automates systems, and personalizes applications. Context-aware services have been introduced into mobile devices such as PDAs and mobile phones. In this paper we present a novel approach used to realize a context-aware mobile phone. The context-aware mobile phone (CAMP) proposed in this paper senses the user's situation automatically and provides the services required by that context. The proposed system is developed using artificial intelligence techniques such as Bayesian networks, fuzzy logic, and a rough-set-theory-based decision table: a Bayesian network classifies incoming calls (high priority, low priority, and unknown), fuzzy linguistic variables and membership degrees define the context situations, and decision-table-based rules perform service recommendation. To exemplify and demonstrate the effectiveness of the proposed methods, the context-aware mobile phone is tested in a college campus scenario including different locations such as the library, classroom, meeting room, administrative building, and college canteen.
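
To make the fuzzy-context plus decision-table pipeline concrete, here is a minimal sketch in Python; the noise thresholds, context labels, and profile rules are invented for illustration and are not the paper's actual membership functions or decision table:

```python
# Illustrative sketch: fuzzy context classification feeding a decision table.
# Membership functions, labels, and rules are hypothetical examples.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function defined by points (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ambient_noise_context(noise_db):
    """Fuzzify an ambient noise reading into context memberships."""
    return {
        "quiet":  triangular(noise_db, 0, 30, 50),    # e.g. library
        "normal": triangular(noise_db, 40, 60, 80),   # e.g. classroom
        "loud":   triangular(noise_db, 70, 90, 120),  # e.g. canteen
    }

# Decision table: (context, call priority) -> recommended phone profile.
DECISION_TABLE = {
    ("quiet", "high"): "vibrate",
    ("quiet", "low"): "silent",
    ("normal", "high"): "ring",
    ("normal", "low"): "vibrate",
    ("loud", "high"): "ring_loud",
    ("loud", "low"): "ring",
}

def recommend(noise_db, call_priority):
    memberships = ambient_noise_context(noise_db)
    context = max(memberships, key=memberships.get)  # defuzzify by max membership
    return DECISION_TABLE[(context, call_priority)]

print(recommend(35, "high"))  # quiet context, high-priority call -> 'vibrate'
```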

Keywords: context aware mobile, fuzzy logic, decision table, Bayesian probability

Procedia PDF Downloads 365
5218 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g. occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in detecting inferior products in Phase II. For more efficient application of control charts, robust estimators are required that resist the contamination which may exist in Phase I. In the current study, we present a simple approach to constructing robust Xbar control charts, using the average distance to the median, the Qn-estimator of scale, and the M-estimator of scale with logistic psi-function to estimate the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions to estimate the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. We find that the robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances than conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
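
As an illustration of the kind of robust chart construction described above, the following sketch builds Xbar limits from a Hodges-Lehmann location estimate and an MAD-based scale estimate; it is one plausible estimator combination under normal-theory scaling, not the full set of estimators (Qn, Harrell-Davis, M-estimators) evaluated in the paper:

```python
import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: median of pairwise Walsh averages."""
    x = np.asarray(x)
    i, j = np.triu_indices(len(x))
    return np.median((x[i] + x[j]) / 2.0)

def robust_xbar_limits(phase1, n_sigma=3.0):
    """Estimate Xbar chart limits from Phase I subgroups (rows = subgroups).

    Location: Hodges-Lehmann per subgroup, pooled by the median.
    Dispersion: median absolute deviation (MAD), scaled by 1.4826 to be
    consistent with the standard deviation under normality.
    """
    subgroup_centers = np.array([hodges_lehmann(g) for g in phase1])
    center = np.median(subgroup_centers)
    mad = np.median(np.abs(phase1 - np.median(phase1))) * 1.4826
    n = phase1.shape[1]
    half_width = n_sigma * mad / np.sqrt(n)
    return center - half_width, center, center + half_width

rng = np.random.default_rng(0)
phase1 = rng.normal(10.0, 1.0, size=(25, 5))
phase1[3, 2] = 25.0  # a single Phase I outlier barely moves the robust limits
print(robust_xbar_limits(phase1))
```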

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 190
5217 Bacteriological Characterization of Drinking Water Distribution Network Biofilms by Gene Sequencing Using Different Pipe Materials

Authors: M. Zafar, S. Rasheed, Imran Hashmi

Abstract:

Little is known about bacterial contamination in drinking water biofilms, which provide a potential source for bacteria to grow and increase rapidly. To understand the microbial density in DWDs, a three-month study was carried out. The aim of this study was to examine biofilm on three different pipe materials: PVC, PPR, and GI. A set of all these pipe materials was installed in DWDs at nine different locations and assessed on a monthly basis. Drinking water quality was evaluated through different parameters and characterization of the biofilm. The parameters measured, according to standard methods, were temperature, pH, turbidity, TDS, electrical conductivity, BOD, COD, total phosphates, total nitrates, total organic carbon (TOC), free chlorine and total chlorine, coliforms, and spread plate counts (SPC). Predominant species were Bacillus thuringiensis, Pseudomonas fluorescens, Staphylococcus haemolyticus, and Bacillus safensis. A significant increase in bacterial population was observed in PVC pipes, while the least was observed in cement pipes. The quantity of DWDs bacteria depended directly on biofilm bacteria, and its increase was correlated with the growth and detachment of bacteria from biofilms. Pipe material also affected the microbial community in the drinking water distribution network biofilm, while similarity in bacterial species between systems was attributed to the same disinfectant dose, time period, and plumbing pipes.

Keywords: biofilm, DWDs, pipe material, bacterial population

Procedia PDF Downloads 347
5216 Gluten Intolerance, Celiac Disease, and Neuropsychiatric Disorders: A Translational Perspective

Authors: Jessica A. Hellings, Piyushkumar Jani

Abstract:

Background: Systemic autoimmune disorders are increasingly implicated in neuropsychiatric illness, especially in the setting of treatment resistance in individuals of all ages. Gluten allergy in its fullest extent results in celiac disease, affecting multiple organs including the central nervous system (CNS). Clinicians often lack awareness of the association between neuropsychiatric illness and gluten allergy, partly because many such research studies are published in immunology and gastroenterology journals. Methods: Following a PubMed literature search and online searches on celiac disease websites, 40 articles are critically reviewed in detail. This work reviews celiac disease, gluten intolerance, and current evidence of their relationship to neuropsychiatric and systemic illnesses. The review also covers current work-up and diagnosis, as well as dietary interventions, gluten restriction outcomes, and future research directions. Results: Gluten allergy in susceptible individuals damages the small intestine, producing a leaky gut and a malabsorption state, and allowing antibodies into the bloodstream, which attack major organs. Lack of amino acid precursors for neurotransmitter synthesis, together with antibody-associated brain changes and hypoperfusion, may result in neuropsychiatric illness. This is well documented; however, studies in neuropsychiatry are often small. In the large CATIE trial, subjects with schizophrenia had significantly increased antibodies to tissue transglutaminase (TTG) and antigliadin antibodies, both significantly greater than in control subjects. On later follow-up, TTG-6 antibodies were identified in these subjects' brains but not in their intestines. Significant evidence, mostly from small studies, also exists for gluten allergy and celiac-related depression, anxiety disorders, attention-deficit/hyperactivity disorder, autism spectrum disorders, ataxia, and epilepsy. Dietary restriction of gluten resulted in remission in several published cases, including treatment-resistant schizophrenia. Conclusions: Ongoing and larger studies are needed of the diagnosis and treatment efficacy of the gluten-free diet in neuropsychiatric illness. Clinicians should ask about patient history of anemia, hypothyroidism, and irritable bowel syndrome and family history of benefit from the gluten-free diet, especially, but not only, in cases of treatment resistance. Obtaining gluten antibodies by a simple blood test, with referral for gastrointestinal work-up in positive cases, should be considered.

Keywords: celiac, gluten, neuropsychiatric, translational

Procedia PDF Downloads 161
5215 Prediction of Music Track Popularity: A Machine Learning Approach

Authors: Syed Atif Hassan, Luv Mehta, Syed Asif Hassan

Abstract:

Hit song science is a field of investigation wherein machine learning techniques are applied to music tracks in order to extract features from audio signals that can capture information explaining the popularity of the respective tracks. Record companies invest huge amounts of money into recruiting fresh talent and churning out new music each year. Gaining insight into why a song becomes popular would bring tremendous benefits to the music industry. This paper aims to extract basic musical and more advanced acoustic features from songs, while also taking into account external factors that play a role in making a particular song popular. We use a dataset derived from popular Spotify playlists divided by genre. We use ten genres (blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, rock), chosen on the basis of clear-to-ambiguous delineation in their typical sound. We feed these features into three different classifiers, namely an SVM with RBF kernel, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model at the end. Predicting song popularity is particularly important for the music industry, as it would allow record companies to produce better content for the masses, resulting in a more competitive market.
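
As a minimal sketch of one of the three classifiers, the following trains an RBF-kernel SVM with scikit-learn on randomly generated stand-in features rather than the Spotify-derived dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Stand-in for extracted audio features (tempo, energy, MFCC statistics, ...).
X = rng.normal(size=(1000, 20))
# Stand-in popularity label: 1 = hit, 0 = non-hit.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```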

Keywords: classifier, machine learning, music tracks, popularity, prediction

Procedia PDF Downloads 663
5214 Finding a Redefinition of the Relationship between Rural and Urban Knowledge

Authors: Bianca Maria Rulli, Lenny Valentino Schiaretti

Abstract:

Considerable recent urbanization has increasingly sharpened environmental and social problems all over the world. In recent years, many responses to the alarming trends in modern cities have emerged: a drastic reduction in the rate of growth is becoming essential for future generations, and small-scale economies are considered more adaptive and sustainable. According to the concept of degrowth, cities should consider moving beyond the centralization of urban living by redefining the relationship between rural and urban knowledge; growing food in cities fundamentally contributes to the increase of social and ecological resilience. Through an innovative approach, this research combines the benefits of urban agriculture (increase of biological diversity, shorter and thus more efficient supply chains, food security) and temporary land use. These stimulate collaborative practices to satisfy the changing needs of communities and stakeholders. The concept proposes a coherent strategy to create a sustainable development of urban spaces, introducing a productive green network to link specific areas in the city. By shifting the current relationship between architecture and landscape, the former process of ground consumption is deeply revised. Temporary modules can be used as concrete tools to create temporal areas of innovation, transforming vacant or marginal spaces into potential laboratories for the development of the city. The only permanent ground traces, such as foundations, are minimized in order to allow future land re-use. The aim is to describe a new mindset regarding the quality of space in the metropolis which allows, in a completely flexible way, bringing green space and urban farming back into the cities. The wide possibilities of the research are analyzed in two different case studies. The first is a regeneration/connection project designated for social housing; the second concerns the use of temporary modules to answer the potential needs of social structures. The intention of the productive green network is to link the different vacant spaces to each other as well as to the entire urban fabric. This also generates a potential improvement in the current situation of underprivileged and disadvantaged persons.

Keywords: degrowth, green network, land use, temporary building, urban farming

Procedia PDF Downloads 503
5213 Air Quality Forecast Based on Principal Component Analysis-Genetic Algorithm and Back Propagation Model

Authors: Bin Mu, Site Li, Shijin Yuan

Abstract:

Against a backdrop of environmental deterioration, people are increasingly concerned about the quality of the environment, especially air quality. As a result, it is of great value to give an accurate and timely forecast of the AQI (air quality index). In order to simplify the influencing factors of air quality in a city and forecast the city's AQI for the next day, this study used MATLAB and constructed a mathematical model, PCA-GABP, as a solution. Specifically, the study first applied principal component analysis (PCA) to the factors influencing tomorrow's AQI, covering today's weather, industrial waste gas, and IAQI data. Then, we used a back propagation neural network model (BP), optimized by a genetic algorithm (GA), to forecast tomorrow's AQI. To verify the validity and accuracy of the PCA-GABP model's forecast capability, the study uses two statistical indices to evaluate the AQI forecast results: normalized mean square error and fractional bias. Eventually, the study reduces the mean square error by optimizing the individual gene structure in the genetic algorithm and adjusting the parameters of the back propagation model. To conclude, the performance of the model in forecasting AQI is comparatively convincing, and the model is expected to have a positive effect on AQI forecasting in the future.
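
A minimal sketch of the PCA-then-BP pipeline, assuming scikit-learn and random stand-in data; the paper's genetic-algorithm optimization of the network is not reproduced here, with default gradient training used instead:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Stand-in predictors: today's weather, waste-gas, and IAQI readings.
X = rng.normal(size=(500, 12))
# Stand-in target: tomorrow's AQI.
y = X[:, :3].sum(axis=1) * 10 + 80 + rng.normal(scale=5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# PCA compresses the correlated inputs; the BP network regresses on the scores.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
pred = model.predict(X_test)
nmse = np.mean((pred - y_test) ** 2) / np.var(y_test)  # normalized MSE
print(f"NMSE: {nmse:.3f}")
```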

Keywords: AQI forecast, principal component analysis, genetic algorithm, back propagation neural network model

Procedia PDF Downloads 229
5212 Scope of Virtualization

Authors: Pavneet Kaur, Palak Sharma

Abstract:

Virtualization is a term that basically describes the creation of a virtual version of something, such as an operating system or a network. Virtualization is a technology that has been in use since the 1970s, but with new developments and inventions it is now useful in education, software development, and other areas. This paper gives a basic introduction to virtualization along with its various categories. It also describes the use of virtualization in software engineering and its various benefits and shortcomings.

Keywords: virtualization, hardware, software, OS

Procedia PDF Downloads 369
5211 The Application of Data Mining Technology in Building Energy Consumption Data Analysis

Authors: Liang Zhao, Jili Zhang, Chongquan Zhong

Abstract:

Energy consumption data, in particular those involving public buildings, are affected by many factors: the building structure, climate/environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods of data analysis are insufficient. This paper examines data mining technology to determine its application in the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and research directions for future studies are given.

Keywords: data mining, data analysis, prediction, optimization, building operational performance

Procedia PDF Downloads 852
5210 Improvements in Double Q-Learning for Anomalous Radiation Source Searching

Authors: Bo-Bin Xiao, Chia-Yi Liu

Abstract:

In the task of searching for anomalous radiation sources, personnel holding radiation detectors may be exposed to unnecessary radiation risk, so automated search using machines becomes a necessary project. This research uses several deep reinforcement learning algorithms, namely double Q-learning, dueling networks, and NoisyNet, to search for radiation sources. The simulation environment, a 10×10 grid containing one shielding wall, supports the development of the AI model through training over 1 million episodes. In each training episode, the radiation source position, radiation source intensity, agent position, shielding wall position, and shielding wall length are all set randomly. The three algorithms are applied to train AI models in four environments where the shielding wall is a full-shielding wall, a lead wall, a concrete wall, or a lead or concrete wall appearing at random. The 12 best-performing AI models are selected by observing the reward value during the training period and are evaluated by comparison with a gradient search algorithm. The results show that the performance of the AI models, regardless of algorithm, is far better than the gradient search algorithm. In addition, as the simulation environment becomes more complex, the AI model combining Double DQN with dueling and NoisyNet performs better.
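
The core double Q-learning update can be illustrated in tabular form as below; the grid, reward function, and hyperparameters are stand-ins, and the paper's deep variants (Double DQN with dueling and NoisyNet) replace the tables with neural networks:

```python
import numpy as np

# Minimal tabular double Q-learning on a toy grid world. The environment
# step function and reward are hypothetical examples, not the paper's setup.
N_STATES, N_ACTIONS = 100, 4          # 10x10 grid, 4 moves
alpha, gamma, epsilon = 0.1, 0.99, 0.1
qa = np.zeros((N_STATES, N_ACTIONS))
qb = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def step(state, action):
    """Hypothetical environment step; returns (next_state, reward, done)."""
    next_state = (state + [1, -1, 10, -10][action]) % N_STATES
    reward = 1.0 if next_state == 55 else -0.01   # source placed at cell 55
    return next_state, reward, next_state == 55

state = 0
for _ in range(10000):
    q_sum = qa[state] + qb[state]
    action = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(q_sum.argmax())
    next_state, reward, done = step(state, action)
    # Double Q-learning: one table selects the action, the other evaluates it,
    # removing the maximization bias of plain Q-learning.
    if rng.random() < 0.5:
        best = int(qa[next_state].argmax())
        qa[state, action] += alpha * (reward + gamma * qb[next_state, best] * (not done) - qa[state, action])
    else:
        best = int(qb[next_state].argmax())
        qb[state, action] += alpha * (reward + gamma * qa[next_state, best] * (not done) - qb[state, action])
    state = 0 if done else next_state
print(qa[0] + qb[0])
```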

Keywords: double Q learning, dueling network, NoisyNet, source searching

Procedia PDF Downloads 113
5209 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting

Abstract:

Since vision systems are intensely required for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects, integrated with a 7A6 series manipulator for automatic object-gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in a convolutional neural network (CNN) structure for object classification and center-point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robotic controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and a 3D position error of less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
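
As one hypothetical illustration of the coordinate-derivation step, the sketch below back-projects a detected bounding-box center and a depth reading into a camera-frame grasp point using a pinhole model; the intrinsics are invented stand-ins, not the SR300's calibration or the authors' actual pipeline:

```python
import numpy as np

# Hypothetical post-processing: convert a detected bounding-box center and a
# depth reading into a 3D grasp point in the camera frame (pinhole model).
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0   # focal lengths / principal point (px)

def pixel_to_camera(u, v, depth_m):
    """Back-project pixel (u, v) with depth into camera-frame XYZ (meters)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# e.g. YOLO box center at (400, 260) px, depth camera reads 0.52 m.
grasp_point = pixel_to_camera(400, 260, 0.52)
print(grasp_point)   # XYZ to send (after a hand-eye transform) to the robot
```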

Keywords: deep learning, image processing, convolution neural network, YOLOv2, 7A6 series manipulator

Procedia PDF Downloads 250
5208 Prediction of Road Accidents in Qatar by 2022

Authors: M. Abou-Amouna, A. Radwan, L. Al-kuwari, A. Hammuda, K. Al-Khalifa

Abstract:

There is growing concern over the increasing incidence of road accidents and the consequent loss of human life in Qatar. In light of the planned future event in Qatar, the 2022 World Cup, Qatar should take into consideration the future deaths caused by road accidents, and past trends should be considered to give a reasonable picture of what may happen in the future. Qatar's roads should be arranged and paved in a way that accommodates the high population at that time, since there will be a huge number of visitors from around the world. Qatar should also consider the road accident risks raised in that period and plan to maintain high-level safety strategies. Given the increase in the number of road accidents in Qatar from 1995 until 2012, the elements affecting and causing road accidents are analyzed. This paper aims to identify and critique the factors that have a strong effect on causing road accidents in the State of Qatar, and to predict the total number of road accidents in Qatar in 2022. Alternative methods are discussed, and the ones most applicable to the existing case in Qatar according to previous research are selected for further study: the multiple linear regression model (MLR) and the artificial neural network (ANN). These methods are analyzed and their findings compared. Using MLR, the number of accidents in 2022 is predicted to reach 355,226; using ANN, 216,264. We conclude that MLR gave better results than ANN because the artificial neural network does not fit data with wide-ranging variation well.
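
To illustrate the regression side of the approach, the following sketch fits a linear trend to made-up yearly accident counts and extrapolates to 2022; the paper's MLR uses multiple predictors, which are not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative only: the yearly counts below are simulated, not Qatar's data.
years = np.arange(1995, 2013).reshape(-1, 1)
accidents = 40000 + 9000 * (years.ravel() - 1995) + \
    np.random.default_rng(0).normal(scale=5000, size=len(years))

model = LinearRegression().fit(years, accidents)
print(f"predicted accidents in 2022: {model.predict([[2022]])[0]:,.0f}")
```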

Keywords: road safety, prediction, accident, model, Qatar

Procedia PDF Downloads 258
5207 Artificial Intelligence Approach to Water Treatment Processes: Case Study of Daspoort Treatment Plant, South Africa

Authors: Olumuyiwa Ojo, Masengo Ilunga

Abstract:

Artificial neural networks (ANNs) have broken the bounds of conventional programming, which is essentially a function of garbage in, garbage out, through their ability to mimic the human brain. Their ability to adopt, adapt, adjust, evaluate, learn, and recognize the relationships, behavior, and patterns of a series of data sets administered to them is tailored after human reasoning and learning mechanisms. Thus, the study aimed at modeling the wastewater treatment process in order to accurately diagnose water control problems for effective treatment. For this study, a staged ANN model development and evaluation methodology was employed: a source-data analysis stage involving statistical analysis of the data used in modeling, followed by a model development stage in which candidate ANN architectures were developed and then evaluated using a historical data set. The model was developed using historical data obtained from the Daspoort wastewater treatment plant, South Africa. The resulting design dimensions and model for the wastewater treatment plant provided good results. Parameters considered were temperature, pH value, colour, turbidity, amount of solids, and acidity, as well as total hardness, Ca hardness, Mg hardness, and chloride. This enables the ANN to handle and represent more complex problems that conventional programming is incapable of performing.

Keywords: ANN, artificial neural network, wastewater treatment, model, development

Procedia PDF Downloads 149
5206 Using Crowd-Sourced Data to Assess Safety in Developing Countries: The Case Study of Eastern Cairo, Egypt

Authors: Mahmoud Ahmed Farrag, Ali Zain Elabdeen Heikal, Mohamed Shawky Ahmed, Ahmed Osama Amer

Abstract:

Crowd-sourced data refers to data that is collected and shared by a large number of individuals or organizations, often through the use of digital technologies such as mobile devices and social media. The shortage of crash data collection in developing countries makes it difficult to fully understand and address road safety issues in these regions. In developing countries, crowd-sourced data can be a valuable tool for improving road safety, particularly in urban areas where the majority of road crashes occur. This study is, to the best of our knowledge, the first to develop safety performance functions using crowd-sourced data, adopting a negative binomial structure model and a Full Bayes model to investigate traffic safety for urban road networks and provide insights into the impact of roadway characteristics. Furthermore, as part of the safety management process, network screening was carried out by applying two different methods to rank the most hazardous road segments: the PCR method (adopted in the Highway Capacity Manual, HCM) and a graphical method using GIS tools, to compare and validate. Lastly, recommendations are suggested for policymakers to ensure safer roads.
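
A minimal sketch of fitting a negative binomial safety performance function with statsmodels, using simulated segment data in place of the crowd-sourced crash records; the Full Bayes counterpart is not reproduced:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Stand-in segment data: length (km) and AADT; crash counts are simulated.
n = 200
length = rng.uniform(0.2, 3.0, n)
aadt = rng.uniform(2000, 60000, n)
mu = np.exp(-6.0 + 1.0 * np.log(aadt) + 0.8 * np.log(length))
crashes = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

# Safety performance function: E[crashes] = exp(b0 + b1*ln(AADT) + b2*ln(L)).
X = sm.add_constant(np.column_stack([np.log(aadt), np.log(length)]))
spf = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(spf.params)   # fitted b0, b1, b2
```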

Keywords: crowdsourced data, road crashes, safety performance functions, Full Bayes models, network screening

Procedia PDF Downloads 52
5205 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by Unesco. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador on 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio, and Indigenous students exhibited the highest prevalence, with 0.312, 0.278, and 0.226, respectively. Regarding the socioeconomic status (SES) of students, modularity class shows clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386, while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES, given by R = -0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that got the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile. The model also found the factors which explain deprivation through the highest PageRank and the greatest degree of connectivity for the first decile; they are: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, book access, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles might be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.
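
For reference, the two-parameter logistic (2PL) model mentioned above gives the probability that a student of latent ability θ answers an item correctly; a minimal sketch with invented item parameters:

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    """2PL item response: P(correct) = 1 / (1 + exp(-a * (theta - b))).

    theta: latent ability, a: item discrimination, b: item difficulty.
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item bank as (discrimination a, difficulty b) pairs.
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 1.0)]
theta = 0.3   # one student's calibrated ability
print([round(p_correct_2pl(theta, a, b), 3) for a, b in items])
```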

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 124
5204 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method

Authors: Lee Yan Nian

Abstract:

A safety investigation differs from an administrative investigation in that the former is conducted by an independent agency, and its purpose is to prevent future accidents rather than to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority. A characteristic of this kind of investigation is that enforcement actions, such as administrative penalties, are usually imposed on the persons or units involved in the occurrence. On October 21, 2018, following a Taiwan Railway accident which caused 18 fatalities and injured another 267, it was quickly decided to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway, and highway occurrences. The objective of this study is to assess the effectiveness of safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the classification of railway occurrences investigated by the TTSB, accident types of Taiwan railway occurrences can be categorized into derailment, fire, signal passed at danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to the Human Factors Analysis and Classification System (HFACS) terminologies, except for "Technical Events", which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the causal factors identified in the CFAS. In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from their frequency of occurrence in the investigation reports. The conditional probability table of each node is determined from domain experts' experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward prediction and backward diagnostic capabilities. Finally, the established Bayesian network for derailment is assessed using investigation reports of the same accident, which was investigated by the TTSB and by the local supervisory authority respectively. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors. Safety investigation can identify not only the unsafe acts of individuals but also the in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigation and administrative investigation. Therefore, effective intervention strategies in the associated areas can be better addressed for safety improvement and future accident prevention through safety investigation.
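
A toy two-node version of such a network illustrates the forward-prediction and backward-diagnosis assessments described above; the structure and probabilities are invented, not taken from the TTSB reports:

```python
# Minimal Bayesian network sketch: an organizational factor influences an
# unsafe act, which influences derailment. All probabilities are invented.
p_org = 0.2                                   # prior: organizational factor present
p_act_given_org = {True: 0.6, False: 0.1}     # P(unsafe act | org factor)
p_derail_given_act = {True: 0.3, False: 0.02} # P(derailment | unsafe act)

# Forward prediction: marginal probability of derailment.
p_derail = 0.0
for org, p_o in ((True, p_org), (False, 1 - p_org)):
    for act in (True, False):
        p_a = p_act_given_org[org] if act else 1 - p_act_given_org[org]
        p_derail += p_o * p_a * p_derail_given_act[act]
print(f"P(derailment) = {p_derail:.4f}")

# Backward diagnosis via Bayes' rule: P(org factor | derailment observed).
p_derail_and_org = sum(
    p_org * (p_act_given_org[True] if act else 1 - p_act_given_org[True])
    * p_derail_given_act[act]
    for act in (True, False)
)
print(f"P(org factor | derailment) = {p_derail_and_org / p_derail:.4f}")
```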

Keywords: administrative investigation, bayesian network, causal factor analysis system, safety investigation

Procedia PDF Downloads 123
5203 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using convolutional neural networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is viewed as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, encoding the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ called S, which are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e. subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened, before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
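
The centering, Kaiser-rule regularization, and whitening steps of the corrector can be sketched as follows; the data is random stand-in material, and the final pairwise clustering and hyperplane construction are omitted:

```python
import numpy as np

def build_corrector_basis(S, Y):
    """Sketch of the corrector preprocessing described above.

    S: all measurement vectors (n_samples x n_features),
    Y: the misclassified subset. Centering, Kaiser-rule component
    selection (eigenvalues above the mean), and whitening follow the
    abstract; the cluster/hyperplane step is left out for brevity.
    """
    S_c = S - S.mean(axis=0)          # center the full measurement set
    Y_c = Y - Y.mean(axis=0)          # center the error set

    # Eigendecomposition of the covariance of the centered data.
    cov = np.cov(S_c, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Kaiser rule: keep components with eigenvalue above the mean eigenvalue.
    keep = eigvals > eigvals.mean()
    V, lam = eigvecs[:, keep], eigvals[keep]

    # Whitening: project onto kept components and rescale to unit variance.
    whiten = lambda X: (X @ V) / np.sqrt(lam)
    return whiten(S_c), whiten(Y_c)

rng = np.random.default_rng(0)
S = rng.normal(size=(500, 32))        # stand-in internal-state measurements
Y = S[:40] + 0.5                      # stand-in error measurements
S_w, Y_w = build_corrector_basis(S, Y)
print(S_w.shape, Y_w.shape)
```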

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 100
5202 Volunteered Geographic Information Coupled with Wildfire Fire Progression Maps: A Spatial and Temporal Tool for Incident Storytelling

Authors: Cassandra Hansen, Paul Doherty, Chris Ferner, German Whitley, Holly Torpey

Abstract:

Wildfire is a natural and inevitable occurrence, yet changing climatic conditions have increased the severity, frequency, and risk to human populations in the wildland/urban interface (WUI) of the Western United States. Rapid dissemination of accurate wildfire information is critical to both the Incident Management Team (IMT) and the affected community. With the advent of increasingly sophisticated information systems, GIS can now be used as a web platform for sharing geographic information in new and innovative ways, such as virtual story map applications. Crowdsourced information can be extraordinarily useful when coupled with authoritative information. Information abounds in the form of social media, emergency alerts, radio, and news outlets, yet many of these resources lack a spatial component when first distributed. In this study, we describe how twenty-eight volunteer GIS professionals across nine Geographic Area Coordination Centers (GACCs) sourced, curated, and distributed Volunteered Geographic Information (VGI) from authoritative social media accounts focused on disseminating information about wildfires and public safety. The combination of fire progression maps with VGI incident information helps answer three critical questions about an incident: where the fire started, how and why the fire behaved in an extreme manner, and how we can learn from the incident's story to respond and prepare for future fires in the area. By adding a spatial component to that shared information, this team has been able to visualize shared information about wildfire starts in an interactive map that answers these questions in a more intuitive way. Additionally, long-term social and technical impacts on communities are examined in relation to situational awareness of the disaster through map layers and agency links, the number of views in a particular region of a disaster, community involvement, and sharing of this critical resource. Combined with a GIS platform and disaster VGI applications, this workflow and information become invaluable to communities within the WUI and bring spatial awareness for disaster preparedness, response, mitigation, and recovery. This study highlights progression maps as the ultimate storytelling mechanism through incident case studies and demonstrates how VGI and sophisticated applied cartographic methodology make them an indispensable resource for authoritative information sharing.

Keywords: storytelling, wildfire progression maps, volunteered geographic information, spatial and temporal

Procedia PDF Downloads 176
5201 Reservoir Inflow Prediction for Pump Station Using Upstream Sewer Depth Data

Authors: Osung Im, Neha Yadav, Eui Hoon Lee, Joong Hoon Kim

Abstract:

The artificial neural network (ANN) approach is commonly used for forecasting in many fields. In water resources engineering, forecasts of water level or reservoir inflow are useful for various purposes. Owing to the advantages of ANNs, many papers address inflow prediction in river networks, but in this study, ANNs are applied to urban sewer networks. The growth of severe rainstorms in Korea has increased flood damage severely, and the precipitation distribution is becoming more erratic. Therefore, effective pump operation at pump stations is an essential task for damage reduction in urban areas. If the real-time inflow of a pump station reservoir can be predicted, pumps can be operated effectively to reduce flood damage. This study used ANN models for pump station reservoir inflow prediction using upstream sewer depth data. Rainfall events, sewer depth, and inflow into the Banpo pump station reservoir between 2013 and 2014 were considered. Feed-Forward Back Propagation (FFBP), Cascade-Forward Back Propagation (CFBP), Elman Back Propagation (EBP), and Nonlinear Autoregressive Exogenous (NARX) models were used as the ANN models for prediction. A comparison of the results suggests that the ANN is a powerful tool for inflow prediction using sewer depth data.
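
A minimal NARX-style sketch of the supervised framing, using lagged sewer depths and past inflow as inputs to a small scikit-learn network on simulated data; the Banpo series is not reproduced, and the study's FFBP/CFBP/EBP/NARX networks are approximated by a single generic feed-forward model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in series: upstream sewer depth (m) driving reservoir inflow (m^3/s)
# with a 3-step travel-time lag plus noise.
n = 2000
depth = np.clip(0.5 + 0.3 * np.sin(np.arange(n) / 40) + rng.normal(0, 0.05, n), 0, None)
inflow = np.empty(n)
inflow[3:] = 3.0 * depth[:-3] + rng.normal(0, 0.1, n - 3)
inflow[:3] = inflow[3]

# NARX-style framing: past depths and past inflows predict the next inflow.
LAGS = 6
X = np.column_stack(
    [depth[i:n - LAGS + i] for i in range(LAGS)]
    + [inflow[i:n - LAGS + i] for i in range(LAGS)]
)
y = inflow[LAGS:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print(f"test R^2: {model.score(X[split:], y[split:]):.3f}")
```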

Keywords: artificial neural network, forecasting, reservoir inflow, sewer depth

Procedia PDF Downloads 317
5200 Differences in Patient Satisfaction Observed between Female Japanese Breast Cancer Patients Who Receive Breast-Conserving Surgery or Total Mastectomy

Authors: Keiko Yamauchi, Motoyuki Nakao, Yoko Ishihara

Abstract:

The increase in the number of women with breast cancer in Japan has required hospitals to provide a higher quality of medicine so that patients are satisfied with the treatment they receive. However, patients' satisfaction following breast cancer treatment has not been sufficiently studied. Hence, we investigated the factors influencing patient satisfaction following breast cancer treatment among Japanese women. These women underwent either breast-conserving surgery (BCS) (n = 380) or total mastectomy (TM) (n = 247). In March 2016, we conducted a cross-sectional internet survey of Japanese women with breast cancer in Japan. We assessed the following factors: socioeconomic status, cancer-related information, the role played in medical decision-making, the degree of satisfaction with the treatments received, and regret arising from the medical decision-making process. We performed logistic regression analyses with the following dependent variables: extreme satisfaction with the treatments received, and regret regarding the medical decision-making process. For both types of surgery, the odds ratio (OR) of being extremely satisfied with the cancer treatment was significantly higher among patients who did not have any regrets compared to patients who did. Also, the OR tended to be higher among patients who played the role they wanted in the medical decision-making process, compared with patients who did not. In the BCS group, the OR of being extremely satisfied with the treatment was higher if, at diagnosis, the patient's youngest child was older than 19 years, compared with patients with no children. The OR was also higher if the patient considered the stage and characteristics of their cancer significant. The OR of being extremely satisfied with the treatments was lower among patients who were not employed on a full-time basis, and among patients who considered second medical opinions and medical expenses to be significant. These associations were not observed in the TM group. The OR of having regrets regarding the medical decision-making process was higher among patients who could not play the role they preferred in the decision-making process, and was also higher in patients who were employed on either a part-time or contractual basis. For both types of surgery, the OR was higher among patients who considered a second medical opinion to be significant. Regardless of surgical type, regret regarding the medical decision-making process decreases treatment satisfaction. Patients who received breast-conserving surgery were more likely to have regrets concerning the medical decision-making process if they could not play the role they preferred in the process. In addition, factors associated with satisfaction with treatment in the BCS group but not the TM group included the second medical opinion, medical expenses, employment status, and age of the youngest child at diagnosis.

Keywords: medical decision making, breast-conserving surgery, total mastectomy, Japanese

Procedia PDF Downloads 147
5199 Pathway to Sustainable Shipping: Electric Ships

Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang

Abstract:

Maritime transport plays an important role in global economic development but inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is electric ships. The electric ship is at an early stage. Considering the special characteristics of electric ships, i.e. their limited travel range, the service network needs to be re-designed carefully to guarantee efficient operation. This research designs a cost-efficient and environmentally friendly service network for electric ships, including the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing total cost, comprising charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario, where all transportation tasks are fulfilled by electric ships, and a conventional ship scenario, where all transportation tasks are fulfilled by fuel oil ships. Results show that the total cost of using electric ships is only 42.8% of that of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but consumes 2.78% more time to fulfill all transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. Implications from the results are as follows: 1) it is necessary to equip ships with large-capacity batteries when the number of charging stations is low; 2) battery capacity influences the number of ships deployed on each route; 3) increasing battery capacity makes electric ships more cost-effective; 4) charging speed does not affect the charging amount or the location of charging stations, but influences the schedule of ships on each route; 5) there exists an optimal volume capacity at which all costs and total delivery time are lowest; 6) the service time limit influences ship schedules and ship cost.
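
A toy sketch of the mixed-integer formulation, assuming the PuLP library; it keeps only station-location and ship-deployment decisions with invented numbers, while the paper's model also covers charging plans, routing, and scheduling:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Toy service-network design: choose which ports get a charging station and
# how many ships to deploy per route. All numbers are invented stand-ins.
ports = ["A", "B", "C"]
routes = {("A", "B"): 120.0, ("B", "C"): 200.0}   # route -> daily demand
station_cost = {"A": 500.0, "B": 650.0, "C": 550.0}
ship_cost, ship_capacity = 300.0, 80.0

build = {p: LpVariable(f"build_{p}", cat=LpBinary) for p in ports}
ships = {r: LpVariable(f"ships_{r[0]}{r[1]}", lowBound=0, cat="Integer")
         for r in routes}

prob = LpProblem("electric_ship_network", LpMinimize)
# Objective: station construction cost plus fixed ship cost.
prob += lpSum(station_cost[p] * build[p] for p in ports) \
      + lpSum(ship_cost * s for s in ships.values())

for (p1, p2), demand in routes.items():
    prob += ship_capacity * ships[(p1, p2)] >= demand   # meet transport demand
    # Range limit: a route can only be served if both ends have a charger.
    prob += ships[(p1, p2)] <= 100 * build[p1]
    prob += ships[(p1, p2)] <= 100 * build[p2]

prob.solve()
print({v.name: v.varValue for v in prob.variables()}, value(prob.objective))
```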

Keywords: cost reduction, electric ship, environmental protection, sustainable shipping

Procedia PDF Downloads 78
5198 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application

Authors: Jui-Chien Hsieh

Abstract:

Background: 12-lead electrocardiography (ECG) is one of the most frequently used tools in clinical practice to detect atrial fibrillation (AF), which can degenerate into life-threatening stroke. Based on this study, AF detection by the clinically used 12-lead ECG device has a positive predictive value (PPV) of only 0.73~0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Given the progress in artificial intelligence (AI), we develop an ECG deep model that has the ability to recognize AF patterns and reduce false-positive errors. Methods: In this study, (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset; the ECG reports were interpreted by 2 senior cardiologists, who confirmed that the precision of AF detection by the ECG device was 0.73; (2) 88 12-lead ECG reports whose computer interpretation by the ECG device was AF were used as the test dataset; cardiologists confirmed that 68 of the 88 reports were AF and the others were not, giving the ECG device a precision of about 0.77; (3) a parallel 4-layer one-dimensional convolutional neural network (CNN) was developed to identify AF based on limb-lead and chest-lead ECGs. Results: The results indicated that this model has better performance in AF detection than traditional computer interpretation by the ECG device on the 88 test samples, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model improves the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
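
A minimal sketch of a parallel two-branch 1D CNN of this kind, assuming TensorFlow/Keras; the lead split, signal length, filter sizes, and depth are illustrative guesses, not the paper's architecture:

```python
import tensorflow as tf

# Two parallel branches: one for limb leads, one for chest leads.
# Signal length (5000 samples) and layer sizes are assumptions.
def branch(n_leads, name):
    inp = tf.keras.Input(shape=(5000, n_leads), name=name)
    x = inp
    for filters in (16, 32, 64, 128):          # 4 conv layers per branch
        x = tf.keras.layers.Conv1D(filters, 7, activation="relu")(x)
        x = tf.keras.layers.MaxPooling1D(4)(x)
    return inp, tf.keras.layers.GlobalAveragePooling1D()(x)

limb_in, limb_out = branch(6, "limb_leads")
chest_in, chest_out = branch(6, "chest_leads")
merged = tf.keras.layers.concatenate([limb_out, chest_out])
out = tf.keras.layers.Dense(1, activation="sigmoid")(merged)  # P(AF)

model = tf.keras.Model([limb_in, chest_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```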

Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network

Procedia PDF Downloads 114
5197 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters; transfer schemes therefore apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must therefore be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there have been instances where transfer schemes malfunctioned due to inaccurate interpretation of key parameters and consequently failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of artificial neural networks and fuzzy systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of artificial neural networks, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of auxiliary power distribution systems. The performance of the scheme is evaluated on the APR1400 nuclear power plant auxiliary system.

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 171
5196 Resilience of Infrastructure Networks: Maintenance of Bridges in Mountainous Environments

Authors: Lorenza Abbracciavento, Valerio De Biagi

Abstract:

Infrastructures are key elements in ensuring the operational functionality of the transport system. The collapse of a single bridge or, equivalently, a tunnel can lead an entire motorway to be considered completely inaccessible. As a consequence, the paralysis of the communications network brings several important drawbacks for the community. Recent events have demonstrated that ensuring the functional continuity of strategic infrastructures during and after a catastrophic event makes a significant difference in terms of life and economic losses. Moreover, it has been observed that RC structures located in mountain environments show a worse state of conservation compared to structures of the same typology and age located in temperate climates. Because of its morphology, the mountain environment is particularly exposed to severe collapse and deterioration phenomena, generally natural hazards, e.g. rock falls, and meteorological hazards, e.g. freeze-thaw cycles or heavy snow. For these reasons, deep investigation of the characteristics of these processes becomes of fundamental importance in providing smart and sustainable solutions and making the infrastructure system more resilient. In this paper, the design of a monitoring system for mountainous environments is presented and analyzed in its parts. The method not only takes into account the peculiar climatic conditions, but is also integrated with, and interacts with, the surrounding environment.

Keywords: structural health monitoring, resilience of bridges, mountain infrastructures, infrastructural network, maintenance

Procedia PDF Downloads 77
5195 Translation Quality Assessment in Fansubbed English-Chinese Swearwords: A Corpus-Based Study of the Big Bang Theory

Authors: Qihang Jiang

Abstract:

Fansubbing, a blend of 'fan' and 'subtitling', is one of the main branches of Audiovisual Translation (AVT) and has kindled growing research interest in the AVT field in recent decades. In particular, the quality of such non-professional translation seems questionable due to the non-transparent qualifications of subtitlers in a huge community network. This paper attempts to figure out how YYeTs, aka 'ZiMuZu', the largest fansubbing group in China, translates swearwords from English to Chinese for its fans of the prevalent American sitcom The Big Bang Theory, taking cultural, social, and political elements into account in the context of China. By building a bilingual corpus containing both the source and target texts, this paper found that most of the original swearwords were translated in a toned-down manner, probably due to Chinese audiences' cultural and social network features as well as the strict censorship of the Chinese government. Additionally, House's (2015) newly revised model of Translation Quality Assessment (TQA) was applied and examined. Results revealed that most of the subtitled swearwords achieved their pragmatic functions and exerted a communicative effect on audiences. In conclusion, this paper enriches the empirical research concerning House's new TQA model, gives a full picture of the subtitling of swearwords in the AVT field, and provides a practical guide for practitioners in their subtitling careers.

Keywords: corpus-based approach, fansubbing, pragmatic functions, swearwords, translation quality assessment

Procedia PDF Downloads 142
5194 Estimation of Train Operation Using an Exponential Smoothing Method

Authors: Taiyo Matsumura, Kuninori Takahashi, Takashi Ono

Abstract:

The purpose of this research is to improve the convenience of waiting for trains at level crossings and stations, and to prevent accidents resulting from forcible entry into level crossings, by providing level crossing users and passengers with information that tells them when the next train will pass through or arrive. In this paper, we propose methods for estimating operation by means of an average value method, a variable response smoothing method, and an exponential smoothing method, on the basis of open data, which has low accuracy but for which operation schedules are distributed in real time. We then examine the accuracy of the estimates. The results show that the application of an exponential smoothing method is valid.
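
For reference, simple exponential smoothing updates a running estimate as s_t = αx_t + (1 - α)s_{t-1}; a minimal sketch with invented delay data:

```python
def exponential_smoothing(observations, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.

    Each new observation (e.g. a real-time delay report from open data)
    nudges the running estimate; alpha controls responsiveness.
    """
    s = observations[0]
    for x in observations[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

# Hypothetical delays (minutes) of recent trains from an open data feed;
# the smoothed value estimates the next train's delay.
recent_delays = [2.0, 3.5, 1.0, 4.0, 3.0]
print(f"estimated next delay: {exponential_smoothing(recent_delays):.2f} min")
```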

Keywords: exponential smoothing method, open data, operation estimation, train schedule

Procedia PDF Downloads 388
5193 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor

Authors: Panupong Makvichian

Abstract:

The Global Navigation Satellite System (GNSS) is nowadays a common technology that improves navigation functions in our lives. Additionally, GNSS is now also being employed as an accurate atmospheric sensor. Meteorology is a practical application of GNSS that runs unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks. In addition, careful attention to mitigating various error sources is required. All the above data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow extracting water content information at any location within the network area. All of the above is possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and for acquiring further knowledge about the atmospheric water content.
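
A sketch of the standard ZTD-to-PWV conversion chain (Saastamoinen hydrostatic delay, Bevis-style mean temperature and conversion factor); the constants are commonly cited values to be checked against the processing standards in use, and the inputs are invented:

```python
import numpy as np

def pwv_from_ztd(ztd_m, pressure_hpa, temp_k, lat_deg, height_m):
    """Sketch of the standard ZTD -> PWV conversion used in GNSS meteorology.

    The Saastamoinen hydrostatic delay is subtracted from the PPP-estimated
    zenith total delay; the remaining wet delay is scaled to precipitable
    water vapor via a dimensionless conversion factor.
    """
    phi = np.radians(lat_deg)
    # Zenith hydrostatic delay (m) from surface pressure, latitude, height.
    zhd = 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2 * phi) - 0.00028 * height_m / 1000.0
    )
    zwd = ztd_m - zhd                            # zenith wet delay (m)
    tm = 70.2 + 0.72 * temp_k                    # weighted mean temperature (K)
    pi = 1e6 / (1000.0 * 461.5 * (3739.0 / tm + 0.221))  # conversion factor
    return pi * zwd * 1000.0                     # PWV in mm

# e.g. ZTD = 2.45 m at sea level, 1013 hPa, 15 C, latitude 45 N:
print(f"PWV = {pwv_from_ztd(2.45, 1013.0, 288.15, 45.0, 0.0):.1f} mm")
```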

Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor

Procedia PDF Downloads 198
5192 Facilitating Familial Support of Saudi Arabians Living with HIV/AIDS

Authors: Noor Attar

Abstract:

This paper provides an overview of the current situation of HIV/AIDS patients in the Kingdom of Saudi Arabia (KSA) and a literature review of the concepts of stigma communication and the communication of social support. These concepts provide the basis for the proposed methods, which will include conducting a textual analysis of materials that are currently distributed to family members of people living with HIV/AIDS (PLWHIV/A) in KSA and creating an educational brochure. The brochure will aim to help families of PLWHIV/A in KSA (1) understand how stigma shapes the experience of PLWHIV/A, (2) realize the role of positive communication as a form of helpful social support, and (3) develop the ability to provide positive social support for their loved ones.

Keywords: HIV/AIDS, Saudi Arabia, social support, stigma communication

Procedia PDF Downloads 285
5191 Public Spending and Economic Growth: An Empirical Analysis of Developed Countries

Authors: Bernur Acikgoz

Abstract:

The purpose of this paper is to investigate the effects of public spending on economic growth and to examine the sources of economic growth in developed countries since the 1990s. The paper analyses whether public spending affects economic growth, based on a Cobb-Douglas production function, using two econometric models, Autoregressive Distributed Lag (ARDL) and Dynamic Fixed Effects (DFE), for 21 developed countries (high-income OECD countries) over the period 1990-2013. The results of the two models parallel each other, and both support the conclusion that public spending plays an important role in economic growth. This result is consistent with theory and previous empirical studies.
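
To make the production-function setup concrete, a log-linearized Cobb-Douglas regression with a public-spending term can be sketched as follows; the simulated data and pooled OLS estimator are stand-ins for the paper's ARDL and DFE panel estimators:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Simulated data: output Y, capital K, labor L, public spending G (in logs).
n = 300
lnK = rng.normal(10, 1, n)
lnL = rng.normal(8, 1, n)
lnG = rng.normal(7, 1, n)
# Cobb-Douglas in logs: ln Y = ln A + a*ln K + b*ln L + c*ln G + error.
lnY = 1.0 + 0.4 * lnK + 0.5 * lnL + 0.1 * lnG + rng.normal(0, 0.1, n)

X = sm.add_constant(np.column_stack([lnK, lnL, lnG]))
ols = sm.OLS(lnY, X).fit()
print(ols.params)   # recovers ln A and the elasticities a, b, c
```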

Keywords: public spending, economic growth, panel data, ARDL models

Procedia PDF Downloads 370