Search results for: machine failures

784 Learning the Most Common Causes of Major Industrial Accidents and Applying Best Practices to Prevent Such Accidents

Authors: Rajender Dahiya

Abstract:

Investigation outcomes of major process incidents have been consistent for decades and confirm that the causes and consequences are often identical. The question remains: why do we continue to experience similar process incidents despite the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes? The objective of this paper is to investigate the most common causes of major industrial incidents and to reveal industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, and manufacturing. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. One case study, a safe-operating-limit excursion, provides a clear understanding of the importance of near-miss incident investigation. The case describes deficiencies in management programs, employee competency, and corporate culture, covering hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of recurrence. Several investigations of major incidents discovered that each showed several warning signs before occurring and, most importantly, that all were preventable. The author discusses why preventable incidents were not prevented and reviews the common causes of failure to learn from past major incidents. The leading causes of past incidents are summarized below. First, management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials; this process starts early in the project stage and continues throughout the life cycle of the facility. For example, a poorly executed hazard study such as HAZID, PHA, or LOPA is one of the leading causes of failure. Second, management failure to maintain the integrity of safety-critical systems and equipment; in most incidents, the mechanical integrity of critical equipment was not maintained, and safety barriers were bypassed, disabled, or not maintained. Third, management failure to learn from and/or apply the learning of past incidents; there were several precursors before those incidents, which were either ignored altogether or not taken seriously. The paper concludes by showing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contribute to managing the risks that prevent major incidents.

Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention

Procedia PDF Downloads 57
783 The Effect of Surface Modifiers on the Mechanical and Morphological Properties of Waste Silicon Carbide Filled High-Density Polyethylene

Authors: R. Dangtungee, A. Rattanapan, S. Siengchin

Abstract:

Waste silicon carbide (waste SiC) filled high-density polyethylene (HDPE) composites with and without surface modifiers were studied. Two types of surface modifiers, namely high-density polyethylene-grafted-maleic anhydride (HDPE-g-MA) and 3-aminopropyltriethoxysilane, were used in this study. The composites were produced using a two-roll mill and an extruder and shaped in a hydraulic compression molding machine. The mechanical properties of the polymer composites, such as flexural strength and modulus, impact strength, tensile strength, stiffness, and hardness, were investigated over a range of compositions. It was found that flexural strength and modulus, tensile modulus, and hardness increased, whereas impact strength and tensile strength decreased, with increasing filler content compared to the neat HDPE. At similar filler contents, both surface modifiers increased the flexural modulus, impact strength, tensile strength, and stiffness but reduced the flexural strength. Morphological investigation using SEM revealed that the improvement in mechanical properties was due to enhanced interfacial adhesion between the waste SiC and the HDPE.

Keywords: high-density polyethylene, HDPE-g-MA, mechanical properties, morphological properties, silicon carbide, waste silicon carbide

Procedia PDF Downloads 363
782 Mechanical Properties of Spark Plasma Sintered 2024 AA Reinforced with TiB₂ and Nano Yttrium

Authors: Suresh Vidyasagar Chevuri, D. B. Karunakar Chevuri

Abstract:

The main advantages of metal matrix nanocomposites (MMNCs) include excellent mechanical performance, good wear resistance, and a low creep rate. Their fabrication is quite a challenge and involves processing techniques such as spark plasma sintering (SPS). The objective of the present work is to fabricate aluminum-based MMNCs with small additions of yttrium using spark plasma sintering and to evaluate their mechanical and microstructural properties. Samples of 2024 AA with yttrium ranging from 0.1 to 0.5 wt%, keeping TiB₂ constant at 1 wt%, are fabricated by SPS. Hardness is determined using a Vickers hardness testing machine. The metallurgical characterization of the samples is evaluated by optical microscopy (OM), field emission scanning electron microscopy (FE-SEM), and X-ray diffraction (XRD). An unreinforced 2024 AA sample is also fabricated as a benchmark against which the properties of the developed composite are compared. It is found that the yttrium addition increases the above-mentioned properties to some extent, and that they then decrease gradually as the yttrium content increases beyond a point between 0.3 and 0.4 wt%. Higher density is achieved in the samples fabricated by spark plasma sintering than by any other fabrication route, and uniform distribution of yttrium is observed.

Keywords: spark plasma sintering, 2024 AA, yttrium addition, microstructure characterization, mechanical properties

Procedia PDF Downloads 224
781 Nutritional Genomics Profile Based Personalized Sport Nutrition

Authors: Eszter Repasi, Akos Koller

Abstract:

Our genetic information determines our appearance, physiology, sports performance, and all our other features. Efforts to maximize the performance of athletes have adopted a science-based approach to nutritional support. Genetic studies have now blended with nutritional sciences, and a dynamically evolving new research field has appeared that nutritional experts need to use. Nutritional genomics is a recent field of nutritional science that can help reach the best sport performance by using correlations between the athlete's genome, nutrition, and molecules, including the human microbiome (the links between food, the microbiome, and epigenetics), nutrigenomics, and nutrigenetics. Nutritional genomics has tremendous potential to change the future of dietary guidelines and personal recommendations. Experts need to use new technology to obtain information about athletes, such as a nutritional genomics profile (including determination of the oral and gut microbiome and DNA-coded reactions to food components), which can modify the preparation term and sports performance. Nutrigenomics is the influence of nutrients on gene expression; nutrigenetics is the heterogeneous response of gene variants to nutrients and dietary components. The human microbiome plays a critical role in health and well-being, and there are many links between food or nutrition and the composition of the human microbiome, which can lead to diseases and epigenetic changes as well. A nutritional genomics-based profile of athletes can be the best technique for a dietitian to create a unique sports nutrition plan. Using functional foods and the right food components can affect health status and thus sports performance. Scientists need to determine the best response to nutrients, which act on health by altering the genome, promoting metabolites, and producing changes in physiology. Nutritional biochemistry explains why polymorphisms in genes for the absorption, circulation, or metabolism of essential nutrients (such as n-3 polyunsaturated fatty acids or epigallocatechin-3-gallate) affect the efficacy of that nutrient. Nutritional deficiencies and failures, changes in health status, or a newly discovered food intolerance, when controlled and observed by a proper medical team, can support better sports performance. It is important that the dietetics profession is informed on gene-diet interactions, which may lead to optimal health and a reduced risk of injury or disease. A dedicated medical application for documenting and monitoring health-state data and risk factors can support and warn the medical team, enabling early action and proper health service in time. This model can provide personalized nutrition advice from status control, through recovery, to monitoring. However, more studies are needed to understand the mechanisms and to be able to change the composition of the microbiome and the environmental and genetic risk factors in athletes.

Keywords: gene-diet interaction, multidisciplinary team, microbiome, diet plan

Procedia PDF Downloads 172
780 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet

Authors: Justin Woulfe

Abstract:

Siloed logistics and supply chain management systems throughout the Department of Defense (DoD) have led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with “optimal” solutions that are good for one organization but have dramatic negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. The implementation of a cloud-based application that analyzes data transactions, learns and predicts future states from current and past states in real time, and communicates those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made from an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin with the ability to aggregate results across a system’s lifecycle and across logical and operational groupings of systems. This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support; detachable and deployable logistics services; and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness, in alignment with the real-world scenarios a warfighter may experience, is an objective yet to be realized to date. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is empowered through valuable insight and traceability. This type of educated decision-making provides an advantage over adversaries who struggle to maintain system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and on operations and support cost, resulting in the ability to simultaneously optimize readiness and cost. This allows stakeholders to make data-driven decisions when trading cost and readiness throughout the life of the program. Finally, sponsors can validate product deliverables efficiently and with much higher accuracy than in previous years.

Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics

Procedia PDF Downloads 160
779 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In making decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points of discussion. However, the challenge is the reliability of the models employed to obtain the risk data; such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using artificial intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper introduces a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional technologies in energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. The social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper introduces a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating the human expertise in scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which consists of determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
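
To make the GMDH step concrete, the following is a minimal sketch of a single GMDH selection layer in Python; the synthetic pipeline features, data, and two-input Ivakhnenko polynomial are illustrative assumptions, not the authors' actual model.

```python
# A minimal sketch of one GMDH-style selection layer on synthetic
# pipeline risk features; variable names and data are illustrative.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # e.g. pressure, wall loss, age, ...
y = 0.4 * X[:, 0] * X[:, 1] + 0.3 * X[:, 2] ** 2 + 0.1 * rng.normal(size=200)

# GMDH splits the data: one part fits each polynomial neuron,
# the other ranks the neurons for selection.
X_tr, X_sel, y_tr, y_sel = X[:120], X[120:], y[:120], y[120:]

def design(a, b):
    # Ivakhnenko polynomial: w0 + w1*a + w2*b + w3*a*b + w4*a^2 + w5*b^2
    return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    w, *_ = np.linalg.lstsq(design(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
    err = np.mean((design(X_sel[:, i], X_sel[:, j]) @ w - y_sel) ** 2)
    candidates.append((err, i, j, w))

# Keep the best few neurons; their outputs would feed the next GMDH layer.
candidates.sort(key=lambda c: c[0])
for err, i, j, _ in candidates[:3]:
    print(f"inputs ({i},{j}) -> selection MSE {err:.4f}")
```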

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 292
778 Nelder-Mead Parametric Optimization of Elastic Metamaterials with Artificial Neural Network Surrogate Model

Authors: Jiaqi Dong, Qing-Hua Qin, Yi Xiao

Abstract:

Some of the most fundamental challenges in the optimization of elastic metamaterials (EMMs) can be attributed to the high consumption of computational power resulting from the finite element analysis (FEA) simulations that render the optimization process inefficient. Furthermore, due to the inherent mesh dependence of FEA, minuscule geometry features, which often emerge during the later stages of optimization, induce very fine elements, resulting in enormously high time consumption, particularly when repetitive solutions are needed for computing the objective function. In this study, a surrogate modelling algorithm is developed to reduce computational time in the structural optimization of EMMs. The surrogate model is constructed based on a multilayer feedforward artificial neural network (ANN) architecture, trained with eigenfrequency data prepopulated from FEA simulations and optimized through regime selection with a genetic algorithm (GA) to improve its accuracy in predicting the location and width of the primary elastic band gap. With the optimized ANN surrogate at the core, a Nelder-Mead (NM) optimization loop is established and its performance inspected in comparison to the FEA solution. The ANN-NM model shows remarkable accuracy in predicting the band gap width and reduces time consumption by 47%.
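
As an illustration of the surrogate-plus-Nelder-Mead loop, here is a minimal Python sketch; the two design variables, the synthetic band-gap function, and the MLP surrogate are assumptions standing in for the paper's FEA-trained ANN.

```python
# A minimal sketch of Nelder-Mead search over an ANN surrogate; the
# geometry parameters and mock band-gap data are synthetic stand-ins.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
geom = rng.uniform(0.1, 1.0, size=(500, 2))    # two design variables
gap = np.sin(3 * geom[:, 0]) * geom[:, 1]      # mock FEA band-gap width

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(geom, gap)

# Negate the surrogate prediction so Nelder-Mead maximizes gap width.
objective = lambda x: -surrogate.predict(x.reshape(1, -1))[0]

result = minimize(objective, x0=np.array([0.5, 0.5]), method="Nelder-Mead",
                  bounds=[(0.1, 1.0), (0.1, 1.0)])
print("optimal geometry:", result.x, "predicted gap width:", -result.fun)
```

Each call to the surrogate costs microseconds instead of a full FEA solve, which is where the reported time saving comes from.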

Keywords: artificial neural network, machine learning, mechanical metamaterials, Nelder-Mead optimization

Procedia PDF Downloads 128
777 Traffic Analysis and Prediction Using Closed-Circuit Television Systems

Authors: Aragorn Joaquin Pineda Dela Cruz

Abstract:

Road traffic congestion is continually deteriorating in Hong Kong. The largest contributing factor is the increase in vehicle fleet size, resulting in higher competition over the utilisation of road space. This study proposes a project that can process closed-circuit television images and videos to provide real-time traffic detection and prediction capabilities. Specifically, it couples a deep-learning model that uses computer vision techniques for video- and image-based vehicle counting with a separate model that detects and predicts traffic congestion levels from those counts. State-of-the-art object detection models such as You Only Look Once and Faster Region-based Convolutional Neural Networks are tested and compared on closed-circuit television data from various major roads in Hong Kong. The resulting counts are then used to train long short-term memory networks to predict traffic conditions in the near future, in an effort to provide more precise and quicker overviews of current and future traffic conditions than current solutions such as navigation apps.
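
A minimal sketch of the second, count-to-prediction stage might look as follows in Python (PyTorch); the window length, mock counts, and single-feature input are illustrative assumptions, not the study's actual configuration.

```python
# A minimal sketch of LSTM-based traffic prediction from per-interval
# vehicle counts; the data and window size are illustrative.
import torch
import torch.nn as nn

counts = torch.rand(1000, 1)            # mock vehicle counts per 5-min slot
window = 12                             # one hour of history per sample
X = torch.stack([counts[i:i + window] for i in range(len(counts) - window)])
y = counts[window:]                     # predict the next interval's count

class TrafficLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # last time step -> next count

model = TrafficLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                  # short demo training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE {loss.item():.4f}")
```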

Keywords: intelligent transportation system, vehicle detection, traffic analysis, deep learning, machine learning, computer vision, traffic prediction

Procedia PDF Downloads 102
776 Short Answer Grading Using Multi-Context Features

Authors: S. Sharan Sundar, Nithish B. Moudhgalya, Nidhi Bhandari, Vineeth Vijayaraghavan

Abstract:

Automatic short answer grading is one of the prime applications of artificial intelligence in education. Several approaches have been explored over the years, involving selective handcrafted features, graphical matching techniques, concept identification and mapping, complex deep frameworks, and sentence embeddings. However, keeping in mind the real-world application of the task, these solutions carry an overhead in terms of computation and resources to achieve high performance. In this work, a simple and effective solution is proposed that makes use of elemental features based on statistical and linguistic properties and word-based similarity measures, in conjunction with tree-based classifiers and regressors. The results for the classification tasks show improvements ranging from 1% to 30%, while the regression task shows a stark improvement of 35%. The authors attribute these improvements to the addition of multiple similarity scores, which provide an ensemble of scoring criteria to the models. The authors also believe the work reaffirms that classical natural language processing techniques and simple machine learning models can achieve high results for short answer grading.
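
A minimal sketch of the feature-plus-trees pipeline is shown below; the overlap and length features, toy reference/answer pairs, and gradient-boosted regressor are illustrative assumptions, not the authors' exact feature set or model.

```python
# A minimal sketch of grading from simple similarity features with a
# tree-based regressor; features and data are illustrative stand-ins.
from sklearn.ensemble import GradientBoostingRegressor

def features(reference, answer):
    ref, ans = set(reference.lower().split()), set(answer.lower().split())
    overlap = len(ref & ans) / max(len(ref), 1)          # word-based similarity
    return [overlap, len(ans), abs(len(ans) - len(ref))] # + statistical cues

pairs = [("gravity pulls objects toward earth", "gravity pulls things down", 4.0),
         ("gravity pulls objects toward earth", "plants need water", 0.5),
         ("gravity pulls objects toward earth", "objects fall due to gravity", 4.5)]
X = [features(r, a) for r, a, _ in pairs]
y = [score for _, _, score in pairs]

grader = GradientBoostingRegressor(n_estimators=50).fit(X, y)
print(grader.predict([features("gravity pulls objects toward earth",
                               "earth's gravity attracts objects")]))
```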

Keywords: artificial intelligence, intelligent systems, natural language processing, text mining

Procedia PDF Downloads 133
775 Advanced Driver Assistance System: Veibra

Authors: C. Fernanda da S. Sampaio, M. Gabriela Sadith Perez Paredes, V. Antonio de O. Martins

Abstract:

Today the transport sector is undergoing a revolution: with the rise of advanced driver assistance systems (ADAS), industry and society itself will undergo a major transformation. However, the technological development of these applications is a challenge that requires new techniques and advanced machine learning and artificial intelligence. The study proposes to develop a vehicular perception system called Veibra, which consists of two front cameras for day/night viewing and an embedded device capable of running YOLOv2 image processing algorithms at low computational cost. The strategic version for the market assists the driver on the road with the detection of day/night objects, such as road signs, pedestrians, and animals, which are viewed through the screen of a phone or tablet in an application. The system has the ability to perform real-time driver detection and recognition, identifying muscle movements and pupils to determine whether the driver is tired or inattentive, analyzing the driver's characteristic changes and following the subtle movements of the whole face, and issuing alerts based on beta waves to ensure the concentration and attention of the driver. The system will also be able to perform tracking and monitoring through GSM (Global System for Mobile Communications) technology and the cameras installed in the vehicle.

Keywords: advanced driver assistance systems, tracking, traffic signal detection, vehicle perception system

Procedia PDF Downloads 155
774 Diagnosis and Analysis of Automated Liver and Tumor Segmentation on CT

Authors: R. R. Ramsheeja, R. Sreeraj

Abstract:

A wide range of medical imaging modalities is available nowadays for viewing the internal structures of the human body, such as the liver, brain, and kidneys, and computed tomography (CT) is one of the most significant. This paper uses CT liver images to study automatic computer-aided techniques for calculating the volume of a liver tumor. A segmentation method for detecting the tumor from the CT scan is proposed: a Gaussian filter is used for denoising the liver image, and an adaptive thresholding algorithm is used for segmentation. A multiple region-of-interest (ROI) based method helps characterize the features differently and has a significant impact on classification performance. Due to the characteristics of liver tumor lesions, inherent difficulties appear in feature selection. For better performance, a novel system is introduced in which multiple-ROI-based feature selection and classification are performed. Obtaining relevant features is important for the support vector machine (SVM) classifier to achieve good generalization performance. The proposed system improves classification performance while significantly reducing the number of features used. The diagnosis of liver cancer from computed tomography images is very difficult in nature, and early detection of liver tumors is very helpful to save human life.
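
The denoise-then-threshold step described above could be sketched as follows with OpenCV; the file name, kernel size, and threshold parameters are hypothetical placeholders, not the authors' settings.

```python
# A minimal sketch of Gaussian denoising plus adaptive thresholding on a
# grayscale CT slice; "liver_slice.png" is a hypothetical input file.
import cv2

ct = cv2.imread("liver_slice.png", cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(ct, ksize=(5, 5), sigmaX=1.5)   # denoise

# Adaptive thresholding compares each pixel against a local mean, which
# tolerates the intensity drift common in CT slices.
mask = cv2.adaptiveThreshold(smoothed, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, blockSize=31, C=5)

# Keep the largest connected component as a crude region candidate.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
largest = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()
cv2.imwrite("segmented.png", (labels == largest).astype("uint8") * 255)
```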

Keywords: computed tomography (CT), multiple region of interest(ROI), feature values, segmentation, SVM classification

Procedia PDF Downloads 509
773 Machining Response of Austempered Ductile Iron with Varying Cutting Speed and Depth of Cut

Authors: Prashant Parhad, Vinayak Dakre, Ajay Likhite, Jatin Bhatt

Abstract:

This work mainly focuses on machinability studies of austempered ductile iron (ADI). Ductile iron (DI) was austempered at 250 °C for different durations, and the process window for austempering was established by studying the microstructure. The microstructural characterization of the material was done using optical microscopy, SEM, and XRD. The samples austempered as per the process window were then subjected to turning using a TiAlN-coated tungsten carbide insert to study the effect of the cutting parameters, namely the cutting speed and the depth of cut. The effect was investigated in terms of the cutting forces required as well as the surface roughness obtained. The turning was conducted on a CNC turning machine, and the primary (Fx), radial (Fy), and feed (Fz) cutting forces were quantified with a three-component dynamometer. It was observed that the magnitude of the radial force was greater than that of the primary cutting force for all cutting speeds and for the various depths of cut studied. It was also seen that increasing the cutting speed improves the surface quality. The observed machinability behaviour was investigated in light of the microstructure of the material obtained under the given austempering conditions, and a structure-property correlation was established between the two. For all cutting speeds and depths of cut, the best machining response in terms of cutting forces and surface quality was obtained towards the centre of the process window.

Keywords: process window, cutting speed, depth of cut, surface roughness

Procedia PDF Downloads 368
772 Eliminating Arm, Neck and Leg Fatigue of United Asia International Plastics Corporation Workers through Rapid Entire Body Assessment

Authors: John Cheferson R. De Belen, John Paul G. Elizares, Ronald John G. Raz, Janina Elyse A. Reyes, Charie G. Salengua, Aristotle L. Soriano

Abstract:

Plastic is a type of synthetic or man-made polymer that can readily be molded into a variety of products. Its usage over the past century has enabled society to make huge technological advances. The workers of United Asia International Plastics Corporation (UAIPC), a plastic manufacturing company, perform manual packaging, which causes fatigue and stress in their arms, necks, and legs due to extended periods of standing and repetitive motions. With the use of a fishbone diagram, five-why analysis, rapid entire body assessment (REBA), and anthropometry, the stressful tasks and activities were identified and analyzed. Given the anthropometric measurements obtained from the workers, tables and chairs with improved dimensions should be used, and a new packaging machine provided. Validation of this proposal will follow its implementation. By eliminating fatigue during working hours in production, the workers will be at ease performing their work properly, and productivity will increase, leading to more profit. Further areas for study include measurement and comparison of the workers' anthropometric measurements with industry standards.

Keywords: anthropometry, fishbone diagram, five-why analysis, rapid entire body assessment

Procedia PDF Downloads 264
771 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database

Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan

Abstract:

Applying anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions was improved by developing an artificial neural network that aims to predict the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males from age 6 to 60 participated in the measurement of twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set as the predicted variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to predict the body dimensions of the population of Saudi Arabia significantly well. The network mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a multiple regression model. The ANN model performed better and produced excellent correlation coefficients between the predicted and actual dimensions.
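
A minimal sketch of the few-inputs-to-many-outputs network is given below; the synthetic measurements are stand-ins for the real anthropometric database, with one hidden layer of 25 units echoing the reported 6-25-15 structure.

```python
# A minimal sketch of a 6-input, 15-output MLP with MAPE/RMSE reporting;
# the data are random stand-ins, not the Saudi anthropometric database.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
inputs = rng.normal(loc=100, scale=10, size=(1427, 6))      # 6 predictor dims
W = 0.1 * rng.random((6, 15))
outputs = inputs @ W + rng.normal(scale=0.5, size=(1427, 15))  # 15 targets

model = MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000,
                     random_state=0).fit(inputs[:1200], outputs[:1200])

pred = model.predict(inputs[1200:])
rmse = np.sqrt(mean_squared_error(outputs[1200:], pred))
mape = np.mean(np.abs((outputs[1200:] - pred) / outputs[1200:]))
print(f"RMSE {rmse:.3f}, MAPE {mape:.4f}")
```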

Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database

Procedia PDF Downloads 576
770 A Mechanical Diagnosis Method Based on Vibration Fault Signal down-Sampling and the Improved One-Dimensional Convolutional Neural Network

Authors: Bowei Yuan, Shi Li, Liuyang Song, Huaqing Wang, Lingli Cui

Abstract:

Convolutional neural networks (CNNs) have received extensive attention in the field of fault diagnosis, and many fault diagnosis methods use CNNs for fault type identification. However, when the amount of raw data collected by sensors is massive, the neural network must perform a time-consuming classification task. In this paper, a mechanical fault diagnosis method based on vibration signal down-sampling and an improved one-dimensional convolutional neural network is proposed. Through robust principal component analysis, the low-rank feature matrix of a large amount of raw data can be separated, and down-sampling is then applied to reduce the subsequent computational load. In the improved one-dimensional CNN, a smaller convolution kernel is used to reduce the number of parameters and the computational complexity, and regularization is introduced before the fully connected layer to prevent overfitting. In addition, the multiple connected layers generalize classification results better without cumbersome parameter adjustments. The effectiveness of the method is verified by monitoring the signal of a centrifugal pump test bench, and the average test accuracy is above 98%. When compared with the traditional deep belief network (DBN) and support vector machine (SVM) methods, this method shows better performance.
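
A minimal sketch of such a small-kernel 1D-CNN with regularization before the fully connected layers follows; the window length, channel counts, and four fault classes are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of a small-kernel 1D-CNN fault classifier over
# down-sampled vibration windows; shapes and classes are illustrative.
import torch
import torch.nn as nn

class Fault1DCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),  # small kernels keep
            nn.ReLU(), nn.MaxPool1d(2),                  # the parameter count low
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool1d(2))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),               # regularization before the FC layer
            nn.Linear(32 * 256, 64), nn.ReLU(),
            nn.Linear(64, n_classes))
    def forward(self, x):
        return self.classifier(self.features(x))

signals = torch.randn(8, 1, 1024)          # 8 mock down-sampled windows
print(Fault1DCNN()(signals).shape)         # -> torch.Size([8, 4])
```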

Keywords: fault diagnosis, vibration signal down-sampling, 1D-CNN

Procedia PDF Downloads 131
769 Breast Cancer Risk Prediction Using Fuzzy Logic in the MATLAB Environment

Authors: S. Valarmathi, P. B. Harathi, R. Sridhar, S. Balasubramanian

Abstract:

The use of machine learning tools in medical diagnosis is increasing due to the improved effectiveness of classification and recognition systems that help medical experts diagnose breast cancer. In this study, ID3 chooses the splitting attribute with the highest information gain, where gain is defined as the difference in entropy before versus after the split. It is applied to age, location, taluk, stage, year, period, marital status, treatment, heredity, sex, and habitat against the classes very serious (VS), very serious moderate (VSM), serious (S), and not serious (NS) to calculate the information gain. The ranked histogram gives the gain of each field for the breast cancer data. The doctors use TNM staging, which decides the risk level of the breast cancer and is an important decision-making field in fuzzy logic for perception-based measurement. The spatial risk area (taluk) of breast cancer is calculated. The results clearly state that Coimbatore (North and South) was found to be a higher-risk region for breast cancer than other areas at the 20% criterion. The weighted value of each taluk was compared with the criterion value and integrated with Map Objects to visualize the results. The ID3 algorithm shows the high breast cancer risk regions in the study area. The study has outlined, discussed, and resolved the algorithms and techniques adopted through a soft computing methodology, such as the ID3 algorithm, for prognostic decision making on the seriousness of breast cancer.
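
The information-gain computation at the heart of ID3 can be sketched in a few lines; the toy attribute and seriousness labels below are illustrative, not the study's patient data.

```python
# A minimal sketch of ID3's information gain on toy categorical records;
# the attribute values and labels are invented examples.
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum(c / total * math.log2(c / total) for c in Counter(labels).values())

def information_gain(records, attribute, labels):
    before = entropy(labels)
    after = 0.0
    for value in set(r[attribute] for r in records):
        subset = [l for r, l in zip(records, labels) if r[attribute] == value]
        after += len(subset) / len(labels) * entropy(subset)
    return before - after      # gain = entropy before - entropy after split

records = [{"stage": "II"}, {"stage": "III"}, {"stage": "II"}, {"stage": "I"}]
labels = ["S", "VS", "S", "NS"]
print(information_gain(records, "stage", labels))  # 1.5 for this toy split
```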

Keywords: ID3 algorithm, breast cancer, fuzzy logic, MATLAB

Procedia PDF Downloads 519
768 Equation for Predicting Inferior Vena Cava Diameter as a Potential Pointer for Heart Failure Diagnosis among Adults in Azare, Bauchi State, Nigeria

Authors: M. K. Yusuf, W. O. Hamman, U. E. Umana, S. B. Oladele

Abstract:

Background: Dilatation of the inferior vena cava (IVC) is used as an ultrasonic diagnostic feature in patients suspected of congestive heart failure (CHF). The IVC diameter has been reported to vary across body mass indexes (BMI) and body shape indexes (ABSI), and knowledge of these variations is useful for precise diagnosis of CHF by imaging scientists. Aim: The study aimed to establish an equation for predicting the ultrasonic mean diameter of the IVC across the various BMIs/ABSIs of inhabitants of Azare, Bauchi State, Nigeria. Methodology: Two hundred physically healthy adult subjects of both sexes were classified into underweight, normal weight, overweight, and obese groups using their BMIs, after selection with a structured questionnaire and informed consent for an abdominal ultrasound scan. The probe was placed on the midline of the body, halfway between the xiphoid process and the umbilicus, with the marker on the probe directed towards the patient's head to obtain a longitudinal view of the IVC. The maximum IVC diameter was measured from the subcostal view using the electronic caliper of the scan machine. The mean value of each group was obtained, and the results were analysed. Results: A novel equation, IVC diameter = 1.04 + 0.01X, where X = BMI, was generated for determining the IVC diameter among the populace. Conclusion: An equation for predicting the IVC diameter from individual BMI values in apparently healthy subjects has been established.
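
Applied directly, the reported regression reads as the small sketch below; note that the abstract does not state the measurement units, which are assumed here.

```python
# A minimal sketch applying the reported equation; centimeters are an
# assumed unit, as the abstract does not specify one.
def predict_ivc_diameter(bmi: float) -> float:
    """IVC diameter = 1.04 + 0.01 * BMI, from the abstract above."""
    return 1.04 + 0.01 * bmi

for bmi in (17.5, 22.0, 27.5, 32.0):   # under, normal, over, obese examples
    print(f"BMI {bmi:5.1f} -> predicted IVC diameter {predict_ivc_diameter(bmi):.2f}")
```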

Keywords: equation, ultrasonic, IVC diameter, body adiposities

Procedia PDF Downloads 72
767 Process of Production of an Artisanal Brewery in a City in the North of the State of Mato Grosso, Brazil

Authors: Ana Paula S. Horodenski, Priscila Pelegrini, Salli Baggenstoss

Abstract:

The brewing industry with artisanal concepts seeks to serve a specific market with diversified production, and it has been gaining ground nationally, including in the Amazon region. This growth is due to more demanding consumers with diversified tastes who want to try new types of beer, enjoying products with new aromas and flavors as a differential from what is so widely spread by the big industrial brands. Thus, through qualitative research methods, this study investigated how the production of a craft brewery in a city in the northern part of the state of Mato Grosso (Brazil) is managed, providing knowledge of production processes and strategies in the industry. With the efficient use of resources, it is possible to obtain the necessary quality and provide better performance and differentiation for the company, besides analyzing the best management model. The research is descriptive, with a qualitative approach through a case study. For data collection, a semi-structured interview was elaborated, covering these areas: microbrewery characterization, the artisanal beer production process, and the company's supply chain management. Production processes were also observed during technical visits. The study verified that the artisanal brewery develops preventive maintenance strategies for its inputs, machines, and equipment so that the quality of the product and the production process is achieved. It was observed that the distance from supply centers means that process and supply chain management must be carried out with a longer planning horizon so that delivery of the final product is satisfactory. The production process of the brewery is composed of machines and equipment that allow control and quality of the product; the manager states that the available equipment meets the demand for the industry's production capacity and its consumer market. This study also highlights one of the challenges for the development of small breweries facing the market giants: the legislation, which classifies microbreweries as producers of alcoholic beverages. This causes the micro and small business segment to be taxed like the majors, who have advantages in purchasing large batches of raw materials and receive tax incentives because they are large employers and tax collectors. It was possible to observe that the supply chain management system relies on spreadsheets and notes made manually, which could be simplified with a computer program to streamline procedures and reduce the risks and failures of the manual process. The control of waste and effluents produced by the industry is outsourced and meets the needs. Finally, the results showed that the industry uses preventive maintenance as a production strategy, which allows better conditions for the production and quality of artisanal beer. Quality is directly related to the satisfaction of the final consumer; it is prized and pursued throughout the production process, with the selection of better inputs, the effectiveness of the production processes, and the relationship with commercial partners.

Keywords: artisanal brewery, production management, production processes, supply chain

Procedia PDF Downloads 120
766 A Data-Mining Model for Protection of FACTS-Based Transmission Line

Authors: Ashok Kalagura

Abstract:

This paper presents a data-mining model for fault-zone identification of a flexible AC transmission systems (FACTS)-based transmission line that includes a thyristor-controlled series compensator (TCSC) and a unified power-flow controller (UPFC), using ensemble decision trees. Given the randomness in the ensemble of decision trees stacked inside the random forests model, it provides an effective decision on fault-zone identification. Half-cycle post-fault current and voltage samples from fault inception are used as an input vector, with distinct target outputs labeling faults after the TCSC/UPFC and faults before the TCSC/UPFC, for fault-zone identification. The algorithm is tested on simulated fault data with wide variations in the operating parameters of the power system network, including a noisy environment, providing a reliability measure of 99% with a fast response time (3/4 of a cycle from fault inception). The results of the presented approach using the RF model indicate reliable identification of the fault zone in FACTS-based transmission lines.
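
A minimal sketch of the random-forest fault-zone classifier is given below; the synthetic half-cycle sample windows and labels are stand-ins for the simulated fault records.

```python
# A minimal sketch of fault-zone classification with a random forest;
# the feature windows and labels here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_faults, n_samples = 400, 40           # 40 post-fault samples per record
X = rng.normal(size=(n_faults, n_samples))
zone = (X[:, :10].mean(axis=1) > 0).astype(int)  # 1: after, 0: before device

X_tr, X_te, y_tr, y_te = train_test_split(X, zone, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"fault-zone accuracy: {forest.score(X_te, y_te):.2f}")
```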

Keywords: distance relaying, fault-zone identification, random forests, RFs, support vector machine, SVM, thyristor-controlled series compensator, TCSC, unified power-flow controller, UPFC

Procedia PDF Downloads 423
765 A Kernel-Based Method for MicroRNA Precursor Identification

Authors: Bin Liu

Abstract:

MicroRNAs (miRNAs) are small non-coding RNA molecules functioning in the transcriptional and post-transcriptional regulation of gene expression. The discrimination of real pre-miRNAs from false ones (such as hairpin sequences with similar stem-loops) is necessary for understanding the role of miRNAs in the control of cell life and death. Because of their small size and sequence specificity, discrimination cannot be based on sequence information alone but requires structural information about the miRNA precursor to obtain satisfactory performance. K-mers are convenient and widely used features for modeling the properties of miRNAs and other biological sequences. However, k-mers suffer from an inherent limitation: if the parameter k is increased to incorporate long-range effects, certain k-mers will appear rarely or not at all, so that most k-mers are absent and a few are present only once. Thus, statistical learning approaches using k-mers as features become susceptible to noisy data once k becomes large. In this study, we propose a gapped k-mer approach to overcome the disadvantages of k-mers and apply this method to the field of miRNA prediction. Combined with the structure status composition, a classifier called imiRNA-GSSC is proposed, and we show that it compares favorably to the original imiRNA-kmer and alternative approaches. Trained on human miRNA precursors, this predictor can achieve an accuracy of 82.34% when predicting 4022 pre-miRNA precursors from eleven species.
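
A minimal sketch of gapped k-mer counting is shown below; the gap pattern (3-mers with the middle position masked) is a small illustrative case, not the paper's exact parameterization.

```python
# A minimal sketch of gapped k-mer features for RNA strings; the gap
# pattern and sequence are illustrative examples.
from collections import Counter
from itertools import product

ALPHABET = "ACGU"

def gapped_kmer_counts(seq, k=3, gaps=(1,)):
    """Count k-mers whose positions in `gaps` are wildcards ('N')."""
    counts = Counter()
    for i in range(len(seq) - k + 1):
        window = list(seq[i:i + k])
        for g in gaps:
            window[g] = "N"             # mask the gapped position
        counts["".join(window)] += 1
    return counts

def feature_vector(seq, k=3, gaps=(1,)):
    # A fixed ordering over all possible gapped 3-mers keeps vectors aligned.
    keys = ["".join(p[:1] + ("N",) + p[1:]) for p in product(ALPHABET, repeat=k - 1)]
    counts = gapped_kmer_counts(seq, k, gaps)
    return [counts[key] for key in keys]

print(feature_vector("ACGUACGUGGC"))    # 16-dim gapped 3-mer profile
```

Because the wildcard collapses many exact k-mers into one bucket, the feature space stays dense even as the effective span grows, which is exactly the sparsity problem the abstract describes.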

Keywords: gapped k-mer, imiRNA-GSSC, microRNA precursor, support vector machine

Procedia PDF Downloads 161
764 A Tool to Measure Efficiency and Trust Towards eXplainable Artificial Intelligence in Conflict Detection Tasks

Authors: Raphael Tuor, Denis Lalanne

Abstract:

The ATM (air traffic management) research community is missing suitable tools to design, test, and validate new UI prototypes. Important stakes underline the implementation of both decision support systems (DSS) and explainable AI (XAI) methods in current systems. ML-based DSSs are gaining in relevance as air traffic flow management (ATFM) becomes increasingly complex. However, these systems only prove useful if a human can understand them, and thus new XAI methods are needed; the human-machine dyad should work as a team and understand each other. We present xSky, a configurable benchmark tool that allows us to compare different versions of an ATC interface in conflict detection tasks. Our main contributions to the ATC research community are (1) a conflict detection task simulator (xSky) that allows testing of the applicability of visual prototypes on scenarios of varying difficulty while outputting relevant operational metrics, and (2) a theoretical approach to the explanations of AI-driven trajectory predictions. xSky addresses several issues that were identified in available research tools: researchers can configure the dimensions affecting scenario difficulty with a simple CSV file, and both the content and appearance of the XAI elements can be customized in a few steps. As a proof of concept, we implemented an XAI prototype inspired by the maritime field.

Keywords: air traffic control, air traffic simulation, conflict detection, explainable artificial intelligence, explainability, human-automation collaboration, human factors, information visualization, interpretability, trajectory prediction

Procedia PDF Downloads 160
763 Radiation Stability of Structural Steel in the Presence of Hydrogen

Authors: E. A. Krasikov

Abstract:

As the service life of an operating nuclear power plant (NPP) increases, the potential misunderstanding of the degradation of aging components must receive more attention. Integrity assurance analysis contributes to the effective maintenance of adequate plant safety margins. In essence, the reactor pressure vessel (RPV) is the key structural component determining the NPP lifetime. Environmentally induced cracking in the stainless steel corrosion-preventing cladding of RPVs has been recognized as one of the technical problems in the maintenance and development of light-water reactors. Extensive cracking leading to failure of the cladding was found after 13000 net hours of operation in the JPDR (Japan Power Demonstration Reactor). Some of the cracks reached the base metal and further penetrated into the RPV in the form of localized corrosion. Failures of reactor internal components in both boiling water reactors and pressurized water reactors have increased after the accumulation of relatively high neutron fluences (5×10²⁰ cm⁻², E > 0.5 MeV). Therefore, in the case of cladding failure, the problem arises of hydrogen (as a corrosion product) embrittlement of irradiated RPV steel because of exposure to the coolant. At present, when notable progress in plasma physics has been achieved, practical energy utilization from fusion reactors (FRs) is determined by the state of materials science problems. These include not only the routine problems of nuclear engineering but also a number of entirely new problems connected with extreme conditions of materials operation, such as the irradiation environment, hydrogenation, and thermocycling. The limited data available suggest that the combined effect of these factors is more severe than any one of them alone. To clarify the possible influence of these in-service synergistic phenomena on the properties of FR structural materials, we have studied the hydrogen-irradiated steel interaction, including alternating hydrogenation and heat treatment (annealing). Available information indicates that the life of the first wall could be extended by means of periodic in-place annealing. The effects of neutron fluence and irradiation temperature on steel/hydrogen interactions (adsorption, desorption, diffusion, mechanical properties at different loading velocities, and post-irradiation annealing) were studied. The experiments clearly reveal that the higher the neutron fluence and the lower the irradiation temperature, the more hydrogen-radiation defects occur, with corresponding effects on the steel's mechanical properties. Hydrogen accumulation analyses and thermal desorption investigations were performed to establish evidence of hydrogen trapping at irradiation defects. Extremely high susceptibility to hydrogen embrittlement was observed in specimens that had been irradiated at relatively low temperature; however, the susceptibility decreases with increasing irradiation temperature. To evaluate methods for assessing and predicting the residual lifetime of RPVs, more work should be done on the irradiated metal-hydrogen interaction in order to monitor more reliably the status of irradiated materials.

Keywords: hydrogen, radiation, stability, structural steel

Procedia PDF Downloads 270
762 Railway Crane Accident: A Comparative Metallographic Test on Pins Fractured during Operation

Authors: Thiago Viana

Abstract:

Train accidents occasionally occur on railways, and in some specific cases it is necessary to use a rescue train with a crane positioned on a platform wagon. The overturned machines are collected and sent to the machine shop or scrap yard. On one of these cranes, which was being used to rescue a wagon, the hoist fell due to the fracture of two large pins. The two pins were collected and sent for failure analysis. This work investigates the main cause and the secondary causes of the initiation of the fatigue crack. All standard failure analysis procedures were applied, with careful evaluation of the characteristics of the material and the fractured surfaces and, mainly, metallographic tests using an optical microscope to compare the geometry of the peaks and valleys of the threads of the pins and their respective seats. The metallographic analysis concluded that the fatigue cracks initiated from a notch (stress concentration) in the valley of the threads of the pin fitted on the right side of the crane (pin 1). It was verified that the peaks of the threads of this pin's seat did not have the proper geometry, with sharp edges present that caused such notches. Visual analysis showed that the fracture of the pin on the left side of the crane (pin 2) was of the brittle type, a consequence of the fracture of the first pin. Recommendations for this and other railway cranes have been made, including nondestructive testing, stress calculation, design review, quality control, and improvement of the mechanical forming process of the seat and pin threads.

Keywords: crane, fracture, pin, railway

Procedia PDF Downloads 108
761 Twitter Sentiment Analysis during the Lockdown in New Zealand

Authors: Smah Almotiri

Abstract:

One of the most common fields of natural language processing (NLP) is sentiment analysis. The feeling inferred from a text can be successfully mined for various events using sentiment analysis. Twitter is viewed as a reliable data source for sentiment analytics studies, since people used social media to receive and exchange different types of data on a broad scale during the COVID-19 epidemic. Processing such data may aid in making critical decisions on how to keep the situation under control. The aim of this research is to examine how sentiment states differed in a single geographic region during the lockdown at two different times. 1162 tweets related to the COVID-19 pandemic lockdown were analyzed using the keyword hashtags (lockdown, COVID-19); the first sample of tweets was from March 23, 2020, until April 23, 2020, and the second sample, from the following year, was from March 1 until April 4, 2021. Natural language processing (NLP), a form of artificial intelligence, was used in this research to calculate the sentiment value of all the tweets using the AFINN lexicon sentiment analysis method. The findings revealed that the sentiment state at both times during the region's lockdown was positive in the samples of this study, which are specific to the geographical area of New Zealand. This research suggests applying machine learning sentiment methods such as CrystalFeel and extending the size of the tweet sample by collecting tweets over a longer period of time.
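
AFINN scoring itself is a simple lexicon lookup, sketched below with the Python `afinn` package as a stand-in for the NodeJS tooling named in the keywords; the example tweets are invented.

```python
# A minimal sketch of AFINN lexicon sentiment scoring; the tweets are
# invented examples, not the study's data.
from afinn import Afinn

afinn = Afinn()
tweets = ["Staying home and baking bread, feeling grateful #lockdown",
          "This lockdown is exhausting and lonely #COVID19"]

scores = [afinn.score(t) for t in tweets]
for tweet, score in zip(tweets, scores):
    print(f"{score:+.1f}  {tweet}")
print("mean sentiment:", sum(scores) / len(scores))  # >0 -> overall positive
```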

Keywords: sentiment analysis, Twitter analysis, lockdown, Covid-19, AFINN, NodeJS

Procedia PDF Downloads 190
760 Determining Optimal Number of Trees in Random Forests

Authors: Songul Cinaroglu

Abstract:

Background: Random forest is an efficient, multi-class machine learning method used for classification, regression, and other tasks. The method operates by constructing each tree using a different bootstrap sample of the data. Determining the number of trees in a random forest is an open question in the literature on improving the classification performance of random forests. Aim: The aim of this study is to analyze whether there is an optimal number of trees in random forests and how the performance of random forests differs as the number of trees increases, using sample health data sets in the R programming environment. Method: We analyzed the performance of random forests as the number of trees grows, doubling the number of trees at every iteration, using the randomForest package in R. For determining the minimum and optimal numbers of trees, we performed McNemar's test and computed the area under the ROC curve, respectively. Results: The analysis found that as the number of trees grows, the forest does not always perform better than forests with fewer trees. In other words, a larger number of trees only increases computational costs and does not necessarily improve performance. Conclusion: Although general practice in using random forests is to generate a large number of trees to obtain high performance, this study shows that increasing the number of trees does not always improve performance. Future studies can compare different kinds of data sets and different performance measures to test whether random forest performance changes as the number of trees increases.
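
The doubling experiment can be sketched as follows; a scikit-learn toy dataset stands in for the R health data sets, and the loop simply mirrors the idea of growing the forest and watching the AUC level off.

```python
# A minimal sketch of doubling the forest size and tracking AUC; the
# dataset is a stand-in for the study's health data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_trees = 8
while n_trees <= 512:                   # double the forest each iteration
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    rf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"{n_trees:4d} trees -> AUC {auc:.4f}")
    n_trees *= 2
```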

Keywords: classification methods, decision trees, number of trees, random forest

Procedia PDF Downloads 395
759 Computer-Aided Classification of Liver Lesions Using Contrasting Features Difference

Authors: Hussein Alahmer, Amr Ahmed

Abstract:

Liver cancer is one of the common diseases that cause death, and early detection is important to diagnose it and reduce the death rate. Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images, and computer-aided diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease, hence reducing the liver cancer death rate. This paper presents an automated CAD system consisting of three stages: first, automatic liver segmentation and lesion detection; second, feature extraction; and finally, classification of liver lesions into benign and malignant using the novel contrasting feature-difference approach. Several types of intensity and texture features are extracted from both the lesion area and its surrounding normal liver tissue. The difference between the features of the two areas is then used as the new lesion descriptor. Machine learning classifiers are trained on the new descriptors to automatically classify liver lesions as benign or malignant. The experimental results show promising improvements. Moreover, the proposed approach can overcome the problems of varying ranges of intensity and texture between patients, demographics, and imaging devices and settings.
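
A minimal sketch of the contrasting feature-difference descriptor follows; the two region statistics and mock pixel values are illustrative assumptions, not the authors' full feature set.

```python
# A minimal sketch of the contrasting feature-difference descriptor on
# already-segmented lesion and surrounding-tissue pixels; the statistics
# and data are illustrative.
import numpy as np

def region_features(pixels):
    # Simple intensity/texture cues: mean level and local variability.
    return np.array([pixels.mean(), pixels.std()])

def contrast_descriptor(lesion_pixels, surround_pixels):
    # The descriptor is the *difference* between the two regions, which
    # cancels patient- and scanner-specific intensity offsets.
    return region_features(lesion_pixels) - region_features(surround_pixels)

lesion = np.random.default_rng(4).normal(90, 12, size=500)   # mock values
surround = np.random.default_rng(5).normal(60, 5, size=500)
print(contrast_descriptor(lesion, surround))  # feeds a benign/malignant classifier
```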

Keywords: CAD system, difference of feature, fuzzy c means, lesion detection, liver segmentation

Procedia PDF Downloads 325
758 Comparison of Whole-Body Vibration and Plyometric Exercises on Explosive Power in Non-Athlete Girl Students

Authors: Fereshteh Zarei, Mahdi Kohandel

Abstract:

The aim of this study was to investigate and compare plyometric and vibration exercises with respect to muscle explosive power in non-athlete female students. For this purpose, 45 non-athlete female students were selected and divided into three groups: two experimental and one control. All groups were pre-tested. Experimental group A did whole-body vibration exercises, which involved standing on a vibration machine at a frequency of 30 Hz and an amplitude of 10 mm in 5 different postures. Training for each position was 40 seconds, with 60 seconds of rest in between, and each session 5 seconds was added to the duration of each body position until the time reached 2 minutes per posture. Exercises were done three times a week for 2 months. Experimental group B did plyometric exercises that included jumping, such as horizontal, vertical, and skipping, with 10 repetitions in 5 sets per session; intensity was increased by adding repetitions and sets. Meanwhile, the control group was asked to keep up daily activity and avoid strength and explosive power training. After the exercise programs, the factors were measured again. One-way analysis of variance and paired t-test statistical methods were used to analyze the data. There was a significant difference in explosive power between the control and vibration groups (p=0.048) and between the control and plyometric groups (p=0.019), but no significant difference in explosive power was observed between the vibration and plyometric groups.

Keywords: vibration, plyometric, exercises, explosive power, non-athlete

Procedia PDF Downloads 453
757 Potentials of Additive Manufacturing: An Approach to Increase the Flexibility of Production Systems

Authors: A. Luft, S. Bremen, N. Balc

Abstract:

The task of flexibility planning and design, just like factory planning, for example, is to create the long-term systemic framework that constitutes the restriction for short-term operational management. This is a strategic challenge since, due to the decision-defect character of the underlying flexibility problem, multiple types of flexibility need to be considered over the course of various scenarios, production programs, and production system configurations. In this context, an evaluation model has been developed that integrates both conventional and additive resources at a basic task level and allows the quantification of flexibility enhancement in terms of mix and volume flexibility, complexity reduction, and machine capacity. The model helps companies decide, early in the decision-making process, about the potential gains of implementing additive manufacturing technologies at a strategic level. For companies, it is essential to consider both additive and conventional manufacturing beyond pure unit costs. It is necessary to achieve an integrative view of manufacturing that incorporates both additive and conventional manufacturing resources and quantifies their potential with regard to flexibility and manufacturing complexity. This also requires a structured process for strategic production system design that spans the design of various scenarios and allows for multi-dimensional and comparative analysis. A corresponding guideline for the planning of additive resources at a strategic level is laid out in this paper.

Keywords: additive manufacturing, production system design, flexibility enhancement, strategic guideline

Procedia PDF Downloads 124
756 Evolution of Antimicrobial Resistance in Shigella since the Turn of the 21st Century, India

Authors: Neelam Taneja, Abhishek Mewara, Ajay Kumar

Abstract:

Multidrug-resistant shigellae have emerged as a therapeutic challenge in India. At our 2000-bed tertiary care referral centre in Chandigarh, North India, which caters to a large population from 7 neighboring states, antibiotic resistance in Shigella is constantly monitored. Shigellae are isolated from 3 to 5% of all stool samples. In 1990, nalidixic acid was the drug of choice, as 82% and 63% of shigellae were resistant to ampicillin and cotrimoxazole, respectively. Nalidixic acid resistance emerged in 1992 and rapidly increased from 6% during 1994-98 to 86% by the turn of the 21st century. In the 1990s, the WHO recommended ciprofloxacin as the drug of choice for the empiric treatment of shigellosis in view of the existing high-level resistance to agents like chloramphenicol, ampicillin, cotrimoxazole, and nalidixic acid. The first resistance to ciprofloxacin in S. flexneri at our centre appeared in 2000 and rapidly rose to 46% in 2007 (MIC > 4 mg/L). In between, we had an outbreak of ciprofloxacin-resistant S. dysenteriae serotype 1 in 2003. Therapeutic failures with ciprofloxacin occurred with both ciprofloxacin-resistant S. dysenteriae and ciprofloxacin-resistant S. flexneri, and the severity of illness was greater with ciprofloxacin-resistant strains. Until 2000, elsewhere in the world, ciprofloxacin resistance in S. flexneri was sporadic and uncommon, though resistance to co-trimoxazole and ampicillin was common, and in some areas resistance to nalidixic acid had also emerged. Because of extensive use and misuse for many other illnesses in our region, fluoroquinolones are thus no longer the preferred group of drugs for managing shigellosis in India. The WHO presently recommends ceftriaxone and azithromycin as alternative drugs against fluoroquinolone-resistant shigellae; however, overreliance on this group of drugs also seems set to become questionable considering the emerging cephalosporin-resistant shigellae. We found 15.1% of S. flexneri isolates collected over a period of 9 years (2000-2009) resistant to at least one of the third-generation cephalosporins (ceftriaxone/cefotaxime). The first isolate showing ceftriaxone resistance was obtained in 2001, and we have observed an increase in the number of S. flexneri isolates resistant to third-generation cephalosporins from 2005 onwards. This situation has now become a therapeutic challenge in our region. The MIC values for Shigella isolates revealed a worrisome rise for ceftriaxone (MIC90: 12 mg/L) and cefepime (MIC90: 8 mg/L). MIC values for S. dysenteriae remained below 1 mg/L for ceftriaxone; however, for cefepime, the MIC90 rose to 4 mg/L. Infections caused by ceftriaxone-resistant S. flexneri isolates were successfully treated with azithromycin at our centre. The most worrisome recent development has been the emergence of decreased susceptibility to azithromycin (DSA), which surfaced in 2001 and increased from 4.3% until 2011 to 34% thereafter. We suspect plasmid-mediated resistance, as we detected qnrS1-positive Shigella for the first time in the Indian subcontinent in 2 strains from 2010, indicating a relatively recent appearance of this PMQR determinant among Shigella in India. This calls for continuous and strong surveillance of antibiotic resistance across the country. The prevention of shigellosis by developing cost-effective vaccines is desirable, as it would substantially reduce the morbidity associated with diarrhoea in the country.

Keywords: Shigella, antimicrobial, resistance, India

Procedia PDF Downloads 229
755 Customized Design of Amorphous Solids by Generative Deep Learning

Authors: Yinghui Shang, Ziqing Zhou, Rong Han, Hang Wang, Xiaodi Liu, Yong Yang

Abstract:

The design of advanced amorphous solids, such as metallic glasses, with targeted properties through artificial intelligence signifies a paradigmatic shift in physical metallurgy and materials technology. Here, we developed a machine-learning architecture that facilitates the generation of metallic glasses with targeted multifunctional properties. Our architecture integrates a state-of-the-art unsupervised generative adversarial network model with supervised models, allowing the incorporation of general prior knowledge, derived from thousands of data points across a vast range of alloy compositions, into the creation of data points for a specific type of composition, thereby overcoming the issue of data scarcity typically encountered in the design of a given type of metallic glass. Using our generative model, we have successfully designed copper-based metallic glasses that display exceptionally high hardness or a remarkably low modulus. Notably, our architecture can not only explore uncharted regions in the targeted compositional space but also permits self-improvement after experimentally validated data points are added to the initial dataset for subsequent cycles of data generation, hence paving the way for the customized design of amorphous solids without human intervention.

Keywords: metallic glass, artificial intelligence, mechanical property, automated generation

Procedia PDF Downloads 56