Search results for: swarm intelligence.
17 Implementing an Intuitive Reasoner with a Large Weather Database
Authors: Yung-Chien Sun, O. Grant Clark
Abstract:
In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation comprised two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source. Twelve weather variables from those data were chosen as the "target variables" whose values were predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule induction module. As a result, the induced rules were based on incomplete information with variable levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees for the discrete target variables and multi-polynomial regression for the continuous ones. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The intuitive reasoner implemented two types of reasoning, fast and broad, where, by analogy to human thought, the former corresponds to fast decision making and the latter to deeper contemplation. For reference, a weather data analysis approach which had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables. For the discrete target variables, the intuitive reasoner predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrated potential advantages in dealing with sparse data sets as compared with conventional methods.
Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.
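As a rough illustration of the rule-induction step above, the sketch below (Python, assuming a pandas DataFrame of weather records) induces a decision tree from a ~10% subset and attaches a stand-in "Strength of Belief" to each leaf. The paper's actual metric is not specified in this abstract, so leaf-node class purity is used as a hypothetical proxy.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def induce_rules(df: pd.DataFrame, features, target, frac=0.10, seed=0):
    """Induce tree rules from a ~10% subset, mimicking the 'complex' setting."""
    subset = df.sample(frac=frac, random_state=seed)   # incomplete information
    tree = DecisionTreeClassifier(max_depth=4, random_state=seed)
    tree.fit(subset[features], subset[target])
    # Proxy Strength of Belief per leaf: purity of the training samples it holds.
    leaf_ids = tree.apply(subset[features])
    belief = {}
    for leaf in np.unique(leaf_ids):
        y_leaf = subset[target][leaf_ids == leaf]
        belief[int(leaf)] = y_leaf.value_counts(normalize=True).iloc[0]
    return tree, belief
```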
16 GridNtru: High Performance PKCS
Authors: Narasimham Challa, Jayaram Pradhan
Abstract:
Cryptographic algorithms play a crucial role in the information society by providing protection from unauthorized access to sensitive data. Information technology will become increasingly pervasive; hence we can expect the emergence of ubiquitous or pervasive computing and ambient intelligence. These new environments and applications will present new security challenges, and there is no doubt that cryptographic algorithms and protocols will form part of the solution. The efficiency of a public key cryptosystem is mainly measured in computational overhead, key size and bandwidth. In particular, the RSA algorithm is used in many applications for providing security. Although the security of RSA is beyond doubt, the evolution in computing power has caused a growth in the necessary key length. The fact that most chips on smart cards cannot process keys exceeding 1024 bits shows that there is a need for an alternative. NTRU is such an alternative: a collection of mathematical algorithms based on manipulating lists of very small integers and polynomials, which allows NTRU to achieve high speeds with minimal computing power. NTRU (Nth degree Truncated Polynomial Ring Unit) is the first secure public key cryptosystem not based on the factorization or discrete logarithm problem, meaning that even with substantial computational resources and time an adversary should not be able to break the key. Multi-party communication and the requirement of optimal resource utilization have created a demand for applications that need security enforcement and can be enhanced with high-end computing. This prompted us to develop high-performance NTRU schemes using approaches such as high-end computing hardware. Peer-to-peer (P2P) and enterprise grids are proven approaches for developing high-end computing systems, and by utilizing them one can improve the performance of NTRU through parallel execution. In this paper we propose and develop an application for NTRU using the enterprise grid middleware Alchemi. An analysis and comparison of its performance for various text files is presented.
Keywords: Alchemi, GridNtru, Ntru, PKCS.
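The Alchemi middleware used in the paper is .NET-based grid software; as a language-neutral sketch of the same parallelization idea, the following Python splits a plaintext into blocks and encrypts them on independent workers. The encrypt_block function is a hypothetical placeholder, not a real NTRU implementation.

```python
from concurrent.futures import ProcessPoolExecutor

BLOCK_SIZE = 64 * 1024  # bytes per work unit handed to a worker/grid node

def encrypt_block(block: bytes) -> bytes:
    # Placeholder: a real implementation would apply NTRU polynomial
    # convolution encryption to this block.
    return block[::-1]

def parallel_encrypt(path: str, workers: int = 4) -> list:
    """Read a file, split it into blocks, and encrypt blocks in parallel."""
    with open(path, "rb") as f:
        data = f.read()
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encrypt_block, blocks))
```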
15 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses
Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh
Abstract:
Real time non-invasive Brain Computer Interfaces have a significant progressive role in restoring or maintaining a quality life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities like facial expressions, speech, text, gestures, and human physiological responses have also been discussed. Focus has been placed on exploring the ability of EEG (Electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol to design an EEG-based real time Brain Computer Interface system for analysis and classification of human emotions elicited by external audio/visual stimuli has been proposed. The front end hardware includes a cost effective and portable Emotiv EEG Neuroheadset unit, a personal computer and a set of external stimulators. Primary signal analysis and processing of the real time acquired EEG shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a MATLAB-based self-defined algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools like an Artificial Neural Network. The final system would result in an inexpensive, portable and more intuitive Brain Computer Interface for real time scenarios to control prosthetic devices by translating different brain states into operative control signals.
Keywords: Brain Computer Interface (BCI), Electroencephalogram (EEG), EEGLab, BCILab, Emotiv, Emotions, Interval features, Spectral features, Artificial Neural Network, Control applications.
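A compressed sketch of the proposed pipeline, in Python rather than the MATLAB/EEGLab toolchain the authors describe: band-power (spectral) features are extracted from raw EEG epochs and fed to a small neural network classifier. The band limits, sampling rate and network shape are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, assumed

def band_powers(epoch: np.ndarray, fs: int = 128) -> np.ndarray:
    """epoch: (n_channels, n_samples) -> one mean PSD value per channel per band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.extend(psd[:, mask].mean(axis=1))
    return np.array(feats)

def train_classifier(epochs, labels):
    """epochs: list of EEG arrays; labels: emotions elicited by the stimuli."""
    feats = np.vstack([band_powers(e) for e in epochs])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    return clf.fit(feats, labels)
```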
14 A Study on the Differential Diagnostic Model for Newborn Hearing Loss Screening
Authors: Chun-Lang Chang
Abstract:
According to the statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one per thousand of infants have severe hearing impairment. Hearing ability during infancy has a significant impact on the development of children's oral expression, language maturity, cognitive performance, educational ability and social behaviors in the future. Although most children born with hearing impairment have sensorineural hearing loss, almost every child more or less still retains some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in time, in addition to hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who fail to be diagnosed and are thus unable to begin hearing and speech rehabilitation in a timely manner may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of both mental and physical functions, intelligence, and social adaptability. Not only will this problem result in a lifelong, irreparable regret for the hearing-impaired child, but it will also create a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss, so that early interventions can be provided in time and waste of medical resources eliminated. This study uses information from the neonatal database of the case hospital as the subjects, adopting two different analysis methods: using support vector machines (SVM) for model predictions, and using logistic regression to conduct factor screening prior to model predictions in SVM, to examine the results. The results indicate that prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study will be of real help in clinical diagnosis for physicians and genuinely beneficial to early interventions for newborn hearing impairment.
Keywords: Data mining, Hearing impairment, Logistic regression analysis, Support vector machines.
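A minimal sketch of the two-stage approach described above, assuming numeric NumPy feature matrices and a binary screening outcome: logistic regression screens factors by coefficient p-value, and an SVM is then trained on the retained factors only. The significance threshold and kernel are illustrative, not the study's settings.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.svm import SVC

def screen_then_classify(X, y, feature_names, alpha=0.05):
    # Stage 1: logistic regression factor screening by p-value.
    logit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    keep = [i for i in range(X.shape[1]) if logit.pvalues[i + 1] < alpha]
    print("retained factors:", [feature_names[i] for i in keep])
    # Stage 2: SVM trained on the screened factors only.
    svm = SVC(kernel="rbf").fit(X[:, keep], y)
    return svm, keep
```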
13 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for interpretation between Lesotho's Sesotho and English, to help bridge the communication problems encountered by these communities. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses the Canny pruning Haar and Haar cascade detection algorithms. Canny pruning implements Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is a process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
Keywords: Canny pruning, hand recognition, machine learning, skin tracking.
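The detection step can be sketched with OpenCV's Python bindings (EmguCV is a .NET wrapper over the same library). The cascade file 'hand.xml' is a placeholder, since OpenCV does not ship a hand cascade by default; the scale and neighbour parameters are illustrative.

```python
import cv2

cascade = cv2.CascadeClassifier("hand.xml")  # hypothetical trained hand cascade

def detect_hands(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # CASCADE_DO_CANNY_PRUNING skips regions with too few Canny edges, which
    # is the Canny-pruning optimisation the abstract refers to.
    return cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5,
        flags=cv2.CASCADE_DO_CANNY_PRUNING, minSize=(40, 40))
```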
12 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market
Authors: Cristian Păuna
Abstract:
In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the basis for limit conditions and automated investment signals and the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that build a mathematical filter for investment opportunities, and presents the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained with the presented methods for the leading German financial market index, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
Keywords: Algorithmic trading, automated investment system, DAX Deutscher Aktienindex.
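The Price Prediction Line formula itself is not reproduced in this abstract, so the sketch below substitutes a plain least-squares trend line over a rolling window of closing prices and one simple limit condition for a long entry. The window length and threshold are illustrative assumptions, not the author's method.

```python
import numpy as np

def trend_line(closes: np.ndarray, window: int = 50) -> float:
    """Fit y = a*t + b on the last `window` closes; return today's line value."""
    y = closes[-window:]
    t = np.arange(window)
    a, b = np.polyfit(t, y, 1)
    return a * (window - 1) + b

def entry_signal(closes: np.ndarray, limit: float = 0.01) -> bool:
    """Limit condition: signal when price dips `limit` below the trend line."""
    line = trend_line(closes)
    return closes[-1] < line * (1 - limit)
```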
11 Comparison of Multivariate Adaptive Regression Splines and Random Forest Regression in Predicting Forced Expiratory Volume in One Second
Authors: P. V. Pramila, V. Mahesh
Abstract:
Pulmonary Function Tests are important non-invasive diagnostic tests to assess respiratory impairments and provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, markedly related to the age of patients, resulting in incomplete data sets. This paper presents a nonlinear model built using Multivariate Adaptive Regression Splines (MARS) and a Random Forest regression model to predict the missing spirometric features. Random forest based feature selection is used to enhance both the generalization capability and the interpretability of the model. In the present study, flow-volume data are recorded for N = 198 subjects. The ranked order of feature importance indices calculated by the random forest model shows that the spirometric features FVC, FEF25, PEF, FEF25-75, FEF50 and the demographic parameter height are the important descriptors. A comparison of the performance of both models proves that the prediction ability of MARS with the top two ranked features, namely FVC and FEF25, is higher, yielding a model fit of R2 = 0.96 and R2 = 0.99 for normal and abnormal subjects. The Root Mean Square Error analysis of the RF model and the MARS model also shows that the latter is capable of predicting the missing values of FEV1 with notably lower error values of 0.0191 (normal subjects) and 0.0106 (abnormal subjects) with the aforementioned input features. It is concluded that combining feature selection with a prediction model provides a minimum subset of predominant features to train the model, as well as yielding better prediction performance. This analysis can assist clinicians as an intelligent decision support system in medical diagnosis and the improvement of clinical care.
Keywords: FEV1, Multivariate Adaptive Regression Splines, Pulmonary Function Test, Random Forest.
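A sketch of the ranking-then-refitting idea, assuming numeric arrays: a random forest ranks the descriptors, and a second model is refit on the top two. scikit-learn has no MARS implementation, so gradient boosting stands in for the MARS stage here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

def rank_and_refit(X, y, names):
    # Rank descriptors by random forest feature importance.
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    top2 = order[:2]  # FVC and FEF25 in the paper's ranking
    print("top-ranked features:", [names[i] for i in top2])
    # Refit a model (MARS stand-in) on the top two features only.
    model = GradientBoostingRegressor(random_state=0).fit(X[:, top2], y)
    return model, top2
```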
10 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour
Authors: Libor Zachoval, Daire O Broin, Oisin Cawley
Abstract:
E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), a large amount of additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points and knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course: for example, the course content may require modifications (certain sections of the learning material may turn out not to help many learners master the intended learning outcomes), or the course design may (such as the type and duration of feedback).
Keywords: Compliance Course, Corporate Training, Learner Behaviours, xAPI.
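One of the rules above, flagging learners who repeat a test at least four times before passing, can be sketched over a list of xAPI statements as follows. Only the fields the rule needs are assumed, "repeat" is interpreted here as a failed attempt, and the verb IRIs follow the common ADL vocabulary.

```python
PASSED = "http://adlnet.gov/expapi/verbs/passed"
FAILED = "http://adlnet.gov/expapi/verbs/failed"

def flag_repeat_guessers(statements, min_repeats=4):
    """statements: xAPI statement dicts, sorted by timestamp."""
    fails, flagged = {}, set()
    for st in statements:
        actor, verb = st["actor"]["name"], st["verb"]["id"]
        if verb == FAILED:
            fails[actor] = fails.get(actor, 0) + 1
        elif verb == PASSED and fails.get(actor, 0) >= min_repeats:
            flagged.add(actor)  # passed only after >= min_repeats failed tries
    return flagged
```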
9 Interactive Garments: Flexible Technologies for Textile Integration
Authors: Anupam Bhatia
Abstract:
Upon reviewing the literature and the pragmatic work done in the field of e-textiles, it is observed that applications of wearable technologies have found steady growth in the military, medical, industrial and sports fields, whereas fashion is at a loss to know how to treat this technology and bring it to market. The purpose of this paper is to understand the practical issues of integrating electronics in garments: cutting patterns for mass production, maintaining the basic properties of textiles, and the daily maintenance of garments, which hinder the wide adoption of interactive fabric technology within fashion and leisure wear. To understand these practical hindrances, an experimental and laboratory approach is taken. "Techno Meets Fashion" has been an interactive fashion project in which sensor technologies have been embedded with textiles, resulting in a set of ensembles: light emitting garments, sound sensing garments, proximity garments, shape memory garments, etc. Smart textiles, especially in the form of textile interfaces, are drastically underused in fashion and other lifestyle product design. Clothing and some other textile products must be washable, which subjects the interactive elements to water and chemical immersion, physical stress, and extreme temperatures. The current state of the art tends to be too fragile for this treatment. The process of mass producing traditional textiles becomes difficult with interactive textiles, as cutting patterns from larger rolls of cloth and sewing them together to make garments breaks and reforms electronic connections in an uncontrolled manner. Because of this, interactive fabric elements are integrated by hand into textiles produced by standard methods. The Arduino has surely made embedding electronics into textiles much easier than before; even then, electronics are not integral to daily wear garments. Soft and flexible interfaces of MEMS (micro sensors and micro actuators) can be an option to make this possible by blending electronics within e-textiles in a way that is seamless and still retains the functions of the circuits as well as the garment. Smart clothes, which offer simultaneously a challenging design and utility value, can only be mass produced if the demands of the body are taken care of, i.e. protection, anthropometry, ergonomics of human movement, and thermo-physiological regulation.
Keywords: Ambient Intelligence, Proximity Sensors, Shape Memory Materials, Sound sensing garments, Wearable Technology.
8 Italians' Social and Emotional Loneliness: The Results of Five Studies
Authors: Vanda Lucia Zammuner
Abstract:
Subjective loneliness describes people who feel a disagreeable or unacceptable lack of meaningful social relationships, at both the quantitative and qualitative level. The studies presented tested an Italian 18-item self-report loneliness measure that included items adapted from previously developed scales, namely a short version of the UCLA scale (Russell, Peplau and Cutrona, 1980) and the 11-item Loneliness Scale by De Jong-Gierveld and Kamphuis (JGLS; 1985). The studies aimed at testing the developed scale and at verifying whether loneliness is better conceptualized as a unidimensional construct (so-called 'general loneliness') or a bidimensional one, comprising the distinct facets of social and emotional loneliness. The loneliness questionnaire included two single-item criterion measures of sad mood and social contact, and asked participants to supply information on a number of socio-demographic variables. Factor analyses of responses obtained in two preliminary studies, with 59 and 143 Italian participants respectively, showed good factor loadings and subscale reliability and confirmed that perceived loneliness clearly has two components, a social and an emotional one, the latter measured by two subscales: a 7-item 'general' loneliness subscale derived from the UCLA, and a 6-item 'emotional' scale included in the JGLS. Results further showed that the type and amount of loneliness are related, negatively, to frequency of social contacts and, positively, to sad mood. In a third study, data were obtained from a nation-wide sample of 9,097 Italian subjects, aged 12 to about 70, who filled in the test online on the Italian web site of a large-audience magazine, Focus. The results again confirmed the reliability of the component subscales, namely social, emotional, and 'general' loneliness, and showed that they were highly correlated with each other, especially the latter two. Loneliness scores were significantly predicted by sex, age, education level, sad mood and social contact and, less so, by other variables, e.g., geographical area and profession. The scale's validity was confirmed by the results of a fourth study with elderly men and women (N = 105) living at home or in residential care units. The three subscales were significantly related, among others, to depression and to various measures of the extent of, and satisfaction with, social contacts with relatives and friends. Finally, a fifth study with 315 career-starters showed that social and emotional loneliness correlate with life satisfaction and with measures of emotional intelligence. Altogether, the results showed good validity and reliability, in the tested samples, of the entire scale and of its components.
Keywords: Emotional loneliness, social loneliness, scale development and testing, life span and cultural differences.
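The unidimensional-versus-bidimensional question can be sketched as a comparison of one- and two-factor models fitted to the 18 item responses. scikit-learn's FactorAnalysis is used here purely for illustration; the original analyses may have used different software, estimation and rotation choices.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def compare_factor_models(item_scores: np.ndarray):
    """item_scores: (n_respondents, 18) matrix of Likert item responses."""
    results = {}
    for k in (1, 2):
        fa = FactorAnalysis(n_components=k, random_state=0).fit(item_scores)
        results[k] = fa.score(item_scores)  # mean log-likelihood per sample
    better = max(results, key=results.get)  # higher likelihood = better fit
    return results, better
```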
7 A Hybrid Artificial Intelligence and Two Dimensional Depth Averaged Numerical Model for Solving Shallow Water and Exner Equations Simultaneously
Authors: S. Mehrab Amiri, Nasser Talebbeydokhti
Abstract:
Modeling sediment transport processes by means of numerical approaches often poses severe challenges. A number of techniques have been suggested to solve flow and sediment equations in decoupled, semi-coupled or fully coupled forms. Furthermore, in order to capture flow discontinuities, a number of techniques, like artificial viscosity and shock fitting, have been proposed for solving these equations, most of which require careful calibration. In this research, a numerical scheme for solving the shallow water and Exner equations in fully coupled form is presented. The First-Order Centred scheme is applied for producing the required numerical fluxes, and the reconstruction process is carried out using the Monotonic Upstream Scheme for Conservation Laws to achieve a high order scheme. In order to satisfy the C-property of the scheme in the presence of bed topography, the Surface Gradient Method is employed. Combining the presented scheme with a fourth order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channel problems, the Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network of the Feed Forward Back Propagation (FFBP) type estimates sediment flow discharge in the model rather than the usual empirical formulas. The hydrodynamic part of the model is tested to show its capability in the simulation of flow discontinuities, transcritical flows, wetting/drying conditions and non-prismatic channel flows. To this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified by simulating dam break on a dry movable bed and bed level variations in an alluvial junction. The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable bed situations. It can also be deduced that applying an Artificial Neural Network, instead of common empirical formulas, for estimating sediment flow discharge leads to more accurate results.
Keywords: Artificial neural network, morphodynamic model, sediment continuity equation, shallow water equations.
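For concreteness, a minimal sketch of the First-Order Centred (FORCE) flux named above, for the 1D shallow water equations with state U = (h, hu): FORCE averages the Lax-Friedrichs and Lax-Wendroff fluxes. The MUSCL reconstruction, Exner coupling and ANN sediment-discharge estimator from the paper are omitted.

```python
import numpy as np

G = 9.81  # gravitational acceleration

def swe_flux(U):
    """Physical flux F(U) for the 1D shallow water equations."""
    h, hu = U
    return np.array([hu, hu**2 / h + 0.5 * G * h**2])

def force_flux(UL, UR, dx, dt):
    """FORCE flux between left state UL and right state UR."""
    fl, fr = swe_flux(UL), swe_flux(UR)
    # Lax-Friedrichs flux
    lf = 0.5 * (fl + fr) - 0.5 * (dx / dt) * (UR - UL)
    # Lax-Wendroff flux via the intermediate state
    u_lw = 0.5 * (UL + UR) - 0.5 * (dt / dx) * (fr - fl)
    return 0.5 * (lf + swe_flux(u_lw))
```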
6 Compliance Modelling and Optimization of Kerf during WEDM of Al7075/SiCP Metal Matrix Composite
Authors: Thella Babu Rao, A. Gopala Krishna
Abstract:
This investigation presents the formulation of kerf (width of slit) and the optimal control parameter settings of wire electrical discharge machining (WEDM) which result in the minimum possible kerf while machining Al7075/SiCp MMCs. WEDM has proved its efficiency and effectiveness in cutting hard ceramic-reinforced MMCs within the permissible budget. Among the distinct performance measures of the WEDM process, kerf is an important performance characteristic which determines the dimensional accuracy of the machined component when producing high precision components. The lack of available machinability information for such advanced MMCs results in more experimentation in the manufacturing industries. Therefore, extensive experimental investigations are essential to provide a database of the effects of various control parameters on the kerf when machining such advanced MMCs in WEDM. The literature reveals the significance of some of the electrical parameters that are prominent for kerf when machining distinct conventional materials. However, this work highlights the significance of reinforced particulate size and volume fraction on kerf while machining MMCs, along with the machining parameters of pulse-on time, pulse-off time and wire tension. Usually, the dimensional tolerances of machined components are decided at the design stage, and a machinist pays attention to producing the required dimensional tolerances by setting appropriate machining control variables. However, it is highly difficult to determine the optimal machining settings for such advanced materials on the shop floor. Therefore, in view of the precision of the cut, kerf (cutting width) is considered as the measure of performance for the model. It was found from the literature that machining conditions with higher fractions of large-size SiCp result in less kerf, whereas high values of pulse-on time result in a high kerf. A response surface model is used to predict the relative significance of various control variables on kerf. Subsequently, a powerful artificial intelligence technique called the genetic algorithm (GA) is used to determine the best combination of control variable settings. In the next step, a confirmation test was conducted for the optimal parameter settings, and good agreement was found between the GA-predicted kerf and the measured kerf. Hence, this clearly reveals the effectiveness and accuracy of the developed model and program in analyzing the kerf and determining its optimal process parameters. The results obtained in this work state that the resulting optimized parameters are capable of machining Al7075/SiCp MMCs more efficiently and with better dimensional accuracy.
Keywords: Al7075SiCP MMC, kerf, WEDM, optimization.
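A sketch of the optimisation stage, assuming a table of kerf measurements: a quadratic response surface is fitted and then minimised by a small genetic algorithm (selection plus mutation only, for brevity). Bounds, population size and rates are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def fit_rsm(X, kerf):
    """Quadratic response surface. X columns: e.g. pulse-on, pulse-off,
    wire tension, SiCp size, volume fraction."""
    return make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, kerf)

def ga_minimize(model, bounds, pop=40, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = model.predict(P)
        parents = P[np.argsort(fit)[: pop // 2]]      # keep lowest predicted kerf
        kids = parents[rng.integers(0, len(parents), pop - len(parents))]
        kids = np.clip(kids + rng.normal(0, 0.05 * (hi - lo), kids.shape), lo, hi)
        P = np.vstack([parents, kids])                # mutated offspring survive
    return P[np.argmin(model.predict(P))]             # best parameter setting
```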
5 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of the European Rail Traffic Management System (ERTMS) safety critical incident on the Cambrian Railway in the UK, using report RAIB 17/2019 as the primary input, to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, underlying factors and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyse the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less than adequate conditions) by means of the Management Oversight and Risk Tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the "heuristics and biases" approach in the Management Oversight and Risk Tree technique to identify systematic errors: civil engineering and programme management railway professionals are aware of the role "optimism bias" plays in programme cost overruns, and of the bow tie (fault and event tree) model-based safety risk modelling technique, but the role of systematic errors due to heuristics and biases is not yet appreciated; incorporating it overcomes the problem of omitting human and organisational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms and transport planners, and front-line staff, so that lessons are learned at the decision making and implementation levels as well. Third, the author's past accident case studies are supplemented with pieces of research evidence drawn from practitioners' and academic researchers' publications. This is to discuss the role of systems thinking in improving the decision making and risk management processes and practices in the IEC 15288 systems engineering standard, and in industrial contexts such as GB railways and Artificial Intelligence (AI) as well.
Keywords: Accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach.
4 Cardiac Biosignal and Adaptation in Confined Nuclear Submarine Patrol
Authors: B. Lefranc, C. Aufauvre-Poupon, C. Martin-Krumm, M. Trousselard
Abstract:
Isolated and confined environments (ICE) present several challenges which may adversely affect human psychology and physiology. Submariners on Sub-Surface Ballistic Nuclear (SSBN) missions exposed to these environmental constraints must be able to perform complex tasks as part of their normal duties, as well as during crisis periods when emergency actions are required or imminent. The operational and environmental constraints they face challenge human adaptability. The impact of such a constrained environment has yet to be explored. Establishing a knowledge framework is a determining factor, particularly in view of the coming long space travels. Ensuring that crews are maintained in optimal operational condition is a real challenge, because the success of the mission depends on them. This study focused on evaluating the impact of stress on the mental health and sensory degradation of submariners during a mission on an SSBN, using cardiac biosignal (heart rate variability, HRV) clustering. This is a pragmatic exploratory study of a prospective cohort including 19 submariner volunteers. HRV was recorded at baseline to classify the submariners by clustering according to their stress level, based on parasympathetic (Pa) activity. The impacts of a high Pa (HPa) versus a low Pa (LPa) level at baseline were assessed on emotional state and sensory perception (interoception and exteroception) as a cardiac biosignal during the patrol and at a recovery time one month after. Whatever the time, no significant difference was found in mental health between the groups. There are significant differences in interoceptive, exteroceptive and physiological functioning during the patrol and at recovery time. To sum up, compared to the LPa group, the HPa group maintains a higher level of psychosensory functioning during the patrol and at recovery, but exhibits a decrease in Pa level. The HPa group has less adaptable HRV characteristics, with less unpredictability and flexibility of cardiac biosignals, while the LPa group increases them during the patrol and at recovery time. This dissociation between psychosensory and physiological adaptation suggests two treatment modalities for ICE environments. To the best of our knowledge, our results are the first to highlight the impact of physiological differences in the HRV profile on the adaptability of submariners. Further studies are needed to evaluate the negative emotional and cognitive effects of ICEs based on the cardiac profile. Artificial intelligence offers a promising future for maintaining a high level of operational conditions. These future perspectives will not only allow submariners to be better prepared, but will also help design feasible countermeasures for analog environments that bring us closer to a trip to Mars.
Keywords: Adaptation, exteroception, HRV, ICE, interoception, SSBN.
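The baseline clustering step can be sketched with a single time-domain parasympathetic proxy (RMSSD over RR intervals) and k-means; the study's actual HRV feature set is richer than this one index, so the split below is only illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def rmssd(rr_ms: np.ndarray) -> float:
    """Root mean square of successive RR-interval differences (ms),
    a common parasympathetic activity proxy."""
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

def split_by_parasympathetic(rr_per_subject):
    """rr_per_subject: one RR-interval array per volunteer at baseline."""
    feats = np.array([[rmssd(rr)] for rr in rr_per_subject])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    # Call the cluster with the higher mean RMSSD the 'HPa' group.
    hpa_cluster = np.argmax([feats[labels == k].mean() for k in (0, 1)])
    return labels == hpa_cluster  # boolean mask: True = high Pa at baseline
```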
3 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry
Authors: C. A. Barros, Ana P. Barroso
Abstract:
Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement systems in companies are not connected to the equipment and do not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed, concerning the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was done. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably via wireless. The MSA web solution was designed under UI/UX principles, and an API in the Python language was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real time management of the 'big data'. The main results of this R&D project are: the MSA tool, web and cloud-based; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical and food industries are already validating it.
Keywords: Automotive industry, Industry 4.0, internet of things, IATF 16949:2016, measurement system analysis.
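A simplified sketch of the repeatability/reproducibility split at the core of an MSA study, computed from a long-format table of (part, operator, value) trials. A production tool such as the one described would follow the full ANOVA method of the AIAG MSA-4 manual; this shows only the method-of-moments idea.

```python
import pandas as pd

def gage_rr(df: pd.DataFrame):
    """df columns: 'part', 'operator', 'value', with repeated trials per cell.
    Simplified variance split; the AV term in MSA-4 also subtracts an EV share."""
    cell = df.groupby(["part", "operator"])["value"]
    repeatability = cell.var(ddof=1).mean()      # within-cell variance (EV)
    op_means = df.groupby("operator")["value"].mean()
    reproducibility = op_means.var(ddof=1)       # between-operator variance (AV)
    return {"EV": repeatability, "AV": reproducibility,
            "GRR": repeatability + reproducibility}
```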
2 Radish Sprout Growth Dependency on LED Color in Plant Factory Experiment
Authors: Tatsuya Kasuga, Hidehisa Shimada, Kimio Oguchi
Abstract:
Recent rapid progress in ICT (Information and Communication Technology) has advanced the penetration of sensor networks (SNs) and their attractive applications. Agriculture is one of the fields well able to benefit from ICT. Plant factories, which use computers and AI (Artificial Intelligence) to control several parameters related to plant growth in closed areas, such as air temperature, humidity, water, culture medium concentration, and artificial lighting, are being researched in order to obtain stable and safe production of vegetables and medicinal plants all year anywhere, and to attain self-sufficiency in food. By providing isolation from the natural environment, a plant factory can achieve higher productivity and safer products. However, the biggest issue with plant factories is the return on investment. Profits are tenuous because of the large initial investments and running costs, i.e. electric power, incurred. At present, LED (Light Emitting Diode) lights are being adopted because they are more energy-efficient and encourage photosynthesis better than the fluorescent lamps used in the past. However, further cost reduction is essential. This paper introduces experiments that reveal which color of LED lighting best enhances the growth of cultured radish sprouts. Radish sprouts were cultivated in an experimental environment formed by a hydroponics kit with three cultivation shelves (28 samples per shelf), each with an artificial lighting rack. Seven LED arrays of different colors (white, blue, yellow green, green, yellow, orange, and red) were compared with a fluorescent lamp as the control. Lighting duration was set to 12 hours a day. Normal water with no fertilizer was circulated. Seven days after germination, the length, weight and leaf area of each sample were measured. Electrical power consumption for all lighting arrangements was also measured. Results and discussion: as to average sample length, no clear difference was observed in terms of color. As regards weight, the orange LED was less effective, and the difference was significant (p < 0.05). As to leaf area, the blue, yellow and orange LEDs were significantly less effective. However, all LEDs offered higher productivity per W consumed than the fluorescent lamp. Of the LEDs, the blue LED array attained the best results in terms of length, weight and leaf area per W consumed. Conclusion and future work: an experiment on radish sprout cultivation under seven different color LED arrays showed no clear difference in terms of sample size. However, if electrical power consumption is considered, LEDs offered about twice the growth rate of the fluorescent lamp. Among them, blue LEDs showed the best performance. Further cost reduction, e.g. low power lighting, remains a big issue for actual system deployment. An automatic plant monitoring system with sensors is another study target.
Keywords: Electric power consumption, LED color, LED lighting, plant factory.
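The significance tests implied by the reported p < 0.05 results can be sketched as a one-way ANOVA comparing, for example, sample weight across the lighting conditions. The data layout (a dict of per-condition measurement lists) is an assumption for illustration.

```python
from scipy.stats import f_oneway

def compare_lighting(groups: dict):
    """groups: {'white': [...], 'blue': [...], ..., 'fluorescent': [...]},
    each a list of the 28 per-shelf weight measurements."""
    stat, p = f_oneway(*groups.values())
    return stat, p  # p < 0.05 -> at least one lighting condition differs
```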
1 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities
Authors: A. Appe, B. Poluparthi, L. Kasivajjula, U. Mv, S. Bagadi, P. Modi, A. Singh, H. Gunupudi, S. Troiano, J. Paul, J. Stovall, J. Yamamoto
Abstract:
The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework which helps identify factors impacting the market share of a healthcare provider facility or hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven, machine learning, regression framework which aids strategists in formulating key decisions to improve the facility's market share, which in turn impacts the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and the data spanning 60 key facilities in Washington State and about 3 years of historical data are considered. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The current study proposes a two-pronged approach: competitor identification, and a regression approach to evaluate and predict market share. We leveraged a model agnostic technique, SHAP (SHapley Additive exPlanations), to quantify the relative importance of features impacting market share. Typical techniques in the literature for quantifying the degree of competitiveness among facilities use an empirical method to calculate a competitive factor to interpret the severity of competition. The proposed method identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying the patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are identified. Leveraging the identified competitors, a Random Forest regression model was developed and fine-tuned to predict market share. To identify the key drivers of market share at an overall level, the permutation feature importance of the attributes was calculated. For the relative quantification of features at a facility level, we incorporated SHAP, a model agnostic explainer, which helped to identify and rank the attributes that impact the market share at each facility. This approach proposes an amalgamation of two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias. With these, we helped to drive strategic business decisions.
Keywords: Competition, DAGs, hospital, healthcare, machine learning, market share, random forest, SHAP.
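A sketch of the explanation stage, assuming a numeric feature matrix with one row per facility-period: a random forest is fitted, drivers are ranked globally by permutation feature importance, and facility-level attributions are computed with the shap package's TreeExplainer. The competitor-identification DAG stage is not shown.

```python
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def explain_market_share(X, y, feature_names):
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    # Global ranking of market-share drivers.
    pi = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
    ranked = sorted(zip(feature_names, pi.importances_mean),
                    key=lambda t: -t[1])
    # Local, facility-level attributions via SHAP values.
    shap_values = shap.TreeExplainer(rf).shap_values(X)
    return rf, ranked, shap_values
```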