1869 A Model-Reference Sliding Mode for Dual-Stage Actuator Servo Control in HDD
Authors: S. Sonkham, U. Pinsopon, W. Chatlatanagulchai
Abstract:
This paper presents a method of designing and developing sliding mode control (SMC) for the servo system in a dual-stage actuator (DSA) hard disk drive. Mathematical models of the hard disk drive actuators are obtained by measuring the frequency responses of the voice-coil motor (VCM) and the PZT micro-actuator separately. Matlab software tools are used for model estimation as well as for controller design and simulation. A model-reference approach is selected as the proposed technique for the tracking requirement. The simulation results show that the model-reference SMC controller design for DSA servo control satisfies the tracking-error requirement and keeps the head position within +/-5% of the track width in the presence of internal and external disturbances. The overall results of the model-reference SMC design for the DSA meet the requirement specifications, and a significant reduction in % off-track is found when compared to the single-stage actuator (SSA).
Keywords: hard disk drive, dual-stage actuator, track following, HDD servo control, sliding mode control, model-reference, tracking control
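The abstract does not give the control law itself; as a rough illustration of the model-reference sliding-mode idea it describes, the following Python sketch steers a simple double-integrator plant onto a second-order reference model using a boundary-layer (saturated) switching term. The plant, gains, disturbance, and reference dynamics are all illustrative assumptions, not the paper's identified VCM/PZT models.

```python
import numpy as np

# Minimal model-reference sliding mode sketch (illustrative only):
# plant: x'' = u + d (double integrator with a sinusoidal disturbance d),
# reference model: second-order system tracking a step setpoint r.
dt, T = 1e-3, 1.0
lam, k, phi = 50.0, 5.0, 0.05     # surface slope, switching gain, boundary layer
wn, zeta, r = 20.0, 0.9, 1.0      # assumed reference-model dynamics and setpoint

x = xd = xm = xmd = 0.0
for i in range(int(T / dt)):
    d = 0.5 * np.sin(2 * np.pi * 10 * i * dt)          # assumed disturbance
    xmdd = wn**2 * (r - xm) - 2 * zeta * wn * xmd      # reference acceleration
    e, ed = x - xm, xd - xmd
    s = ed + lam * e                                   # sliding surface
    u = xmdd - lam * ed - k * np.clip(s / phi, -1, 1)  # equivalent + switching term
    xd += (u + d) * dt; x += xd * dt                   # Euler step, plant
    xmd += xmdd * dt; xm += xmd * dt                   # Euler step, reference

print(f"final tracking error: {x - xm:.2e}")
```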
Procedia PDF Downloads 364

1868 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference
Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev
Abstract:
Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complex dynamics of this disease and to study its drivers, a comprehensive model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and do not integrate the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions of Sri Lanka at the weekly time scale. By conducting ablation studies, the lag effects of time-varying climate factors, allowing delays of up to 12 weeks, were determined. The model demonstrates superior predictive performance over a pure machine learning approach at lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings about the drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain. The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural vs. urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicates sensitivity to the vegetation index, both maximum and average. Predictions on testing data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that combine biologically informed model structures with flexible data-driven estimates of model parameters. The findings show the potential for both inference of drivers in complex disease dynamics and robust forecasting.
Keywords: compartmental model, climate, dengue, machine learning, socioeconomic
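As a minimal illustration of the vector (SEI) to human (SEIR) structure named in the abstract, the following Python sketch integrates the coupled compartments with SciPy at a weekly output resolution. All rates and population sizes are placeholder assumptions; in the paper these parameters are data-driven functions of climate and socioeconomic covariates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal vector (SEI) to human (SEIR) sketch; all per-day rates below are
# illustrative placeholders, not the paper's fitted Sri Lanka values.
def sei_seir(t, y, beta_vh, beta_hv, sigma_h, gamma, sigma_v, mu_v):
    Sh, Eh, Ih, Rh, Sv, Ev, Iv = y
    Nh, Nv = Sh + Eh + Ih + Rh, Sv + Ev + Iv
    new_h = beta_vh * Sh * Iv / Nh          # mosquito-to-human infections
    new_v = beta_hv * Sv * Ih / Nh          # human-to-mosquito infections
    return [-new_h,
            new_h - sigma_h * Eh,
            sigma_h * Eh - gamma * Ih,
            gamma * Ih,
            mu_v * Nv - new_v - mu_v * Sv,  # vector births balance deaths
            new_v - (sigma_v + mu_v) * Ev,
            sigma_v * Ev - mu_v * Iv]

y0 = [1e6 - 10, 0, 10, 0, 2e6, 0, 0]        # assumed initial conditions
sol = solve_ivp(sei_seir, (0, 52 * 7), y0,
                args=(0.3, 0.3, 1 / 5.5, 1 / 7, 1 / 10, 1 / 14),
                t_eval=np.arange(0, 52 * 7, 7))   # weekly time scale, one year
print(sol.y[2, :5])  # infectious-human counts for the first five weeks
```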
Procedia PDF Downloads 84

1867 Neural Networks Models for Measuring Hotel Users Satisfaction
Authors: Asma Ameur, Dhafer Malouche
Abstract:
Nowadays, user comments on the Internet have an important impact on hotel bookings. This confirms that e-reputation can influence the likelihood of customer loyalty to a hotel; e-reputation has thus become a real differentiator between hotels. For this reason, the opinion mining field offers a unique opportunity to analyze these comments, since it makes it possible to extract information related to the polarity of user reviews. This sentiment analysis (opinion mining) represents a new line of research for analyzing unstructured textual data. Knowing the e-reputation score helps hoteliers to better manage their marketing strategy, and the score obtained translates into an image that differentiates hotels from one another. The present research therefore highlights the importance of hotel satisfaction scoring. To calculate the satisfaction score, sentiment analysis can be performed with several machine learning techniques; this study treats the extracted textual data using the artificial neural networks (ANNs) approach. In this context, we adopt this technique to extract information from the comments available on the TripAdvisor website. This paper details the description and modeling of the ANN approach for scoring online hotel reviews. The validation of the method yields a significant model for hotel sentiment analysis, making it possible to determine precisely the polarity of hotel users' reviews. The empirical results show that ANNs are an accurate approach for sentiment analysis. They also show that the proposed approach supports dimensionality reduction for clustering textual data. This study thus provides researchers with a useful exploration of the technique. Finally, we outline guidelines for future research in the hotel e-reputation field, such as comparing ANNs with other techniques.
Keywords: clustering, consumer behavior, data mining, e-reputation, machine learning, neural network, online hotel reviews, opinion mining, scoring
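As a hedged sketch of the scoring pipeline the abstract describes, the following Python snippet trains a small feed-forward neural network on TF-IDF features of toy reviews and averages the predicted positive-class probabilities into an e-reputation score. The example reviews, network size, and scoring rule are assumptions, not the paper's TripAdvisor corpus or architecture.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy labeled reviews standing in for scraped TripAdvisor comments (assumed data).
reviews = ["great room and friendly staff", "dirty bathroom, terrible service",
           "lovely breakfast, will return", "noisy, rude reception, never again"]
polarity = [1, 0, 1, 0]   # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                    random_state=0))
model.fit(reviews, polarity)

# Satisfaction score: mean predicted probability of the positive class.
probs = model.predict_proba(["clean and quiet", "awful experience"])[:, 1]
print("e-reputation score:", probs.mean())
```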
Procedia PDF Downloads 136

1866 A Novel Method to Manufacture Superhydrophobic and Insulating Polyester Nanofibers via a Meso-Porous Aerogel Powder
Authors: Z. Mazrouei-Sebdani, A. Khoddami, H. Hadadzadeh, M. Zarrebini
Abstract:
Silica aerogels are well-known mesoporous materials with high specific surface area (500-1000 m²/g), high porosity (80-99.8%), and low density (0.003-0.8 g/cm³). However, silica aerogels are generally highly brittle due to their nanoporous nature. The physical and mechanical properties of silica aerogels can be enhanced by compounding them with fibers. Although some reports present the incorporation of fibers into the sol, followed by further modification and drying stages, no information is available on aerogel powders as fillers in polymeric fibers. In this research, a waterglass-based aerogel powder was prepared in the following steps: a sol-gel process to prepare a gel, followed by subsequent washing with propan-2-ol, n-hexane, and TMCS, then ambient-pressure drying and ball milling. Motivated by its limited dust release, the aerogel powder was introduced into the PET electrospinning solution in an attempt to create the bulk and surface structure required for the nanofibers to improve their hydrophobic and insulation properties. The samples were evaluated by measuring density, porosity, contact angle, sliding angle, and heat transfer, and by FTIR, BET, and SEM. According to the results, a porous silica aerogel powder was fabricated with a mean pore diameter of 24 nm and a contact angle of 145.9°. The results indicate the usefulness of the aerogel powder confined in the nanofibers for controlling surface roughness, producing superhydrophobic nanowebs with a sliding angle of 5° and a water contact angle of 147°. This can be attributed to a multi-scale surface roughness created by the nanoweb structure itself and by the surface irregularity of the nanofibers in the presence of the aerogels, while a layer of fluorocarbon created low surface energy. The wettability of a solid substrate is an important property that is controlled by both the chemical composition and the geometry of the surface. Also, a decreasing trend in heat transfer was observed, from 22% for the nanofibers without any aerogel powder to 8% for the nanofibers with 4% aerogel powder. The development of thermal insulating materials has become increasingly important in view of fossil energy depletion and global warming, which call for more demanding energy-saving practices.
Keywords: superhydrophobicity, insulation, sol-gel, surface energy, roughness
Procedia PDF Downloads 325

1865 Artificial Intelligence Created Inventions
Authors: John Goodhue, Xiaonan Wei
Abstract:
Current legal decisions and policies regarding the naming of artificial intelligence as an inventor are reviewed, with emphasis on the recent decisions by the European Patent Office regarding the DABUS inventions, which hold that an artificial intelligence machine cannot be an inventor. Next, a set of hypotheticals is introduced and examined to better understand how artificial intelligence might be used to create or assist in creating new inventions, and how application of existing law or proposed changes in the law would affect the ability to protect these inventions, including restrictions on naming artificial intelligence as an inventor, ownership of inventions made by artificial intelligence, and the effects on legal standards for inventiveness or obviousness.
Keywords: artificial intelligence, innovation, invention, patent
Procedia PDF Downloads 171

1864 Multi-Agent System Based Solution for Operating Agile and Customizable Micro Manufacturing Systems
Authors: Dylan Santos De Pinho, Arnaud Gay De Combes, Matthieu Steuhlet, Claude Jeannerat, Nabil Ouerhani
Abstract:
The Industry 4.0 initiative has been launched to address huge challenges related to ever-smaller batch sizes. The end-user need for highly customized products requires highly adaptive production systems in order to keep shop floors at the same efficiency. Most of the classical software solutions that operate manufacturing processes on a shop floor are based on rigid Manufacturing Execution Systems (MES), which cannot adapt production orders on the fly to changing demands and/or conditions. In this paper, we present a highly modular and flexible solution to orchestrate a set of production systems composed of a micro-milling machine tool, a polishing station, a cleaning station, a part-inspection station, and a rough-material store. The different stations are installed according to a novel matrix configuration of a 3x3 vertical shelf. The cells of the shelf are connected through horizontal and vertical rails on which a set of shuttles circulate to transport the machined parts from one station to another. Our software solution for orchestrating the tasks of each station is based on a multi-agent system: each station and each shuttle is operated by an autonomous agent, and all agents communicate with a central agent that holds all the information about the manufacturing order. The core innovation of this paper lies in the path planning of the different shuttles, with two major objectives: 1) reduce the waiting time of stations and thus the cycle time of the entire part, and 2) reduce disturbances such as shuttle-generated vibration, which strongly impact the manufacturing process and thus the quality of the final part. Simulation results show that the cycle time of the parts is reduced by up to 50% compared with MES-operated linear production lines, while disturbances are systematically avoided for critical stations like the milling machine tool.
Keywords: multi-agent systems, micro-manufacturing, flexible manufacturing, transfer systems
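A minimal Python sketch of the orchestration pattern described above follows: station agents pull jobs that a central agent dispatches from the manufacturing order. The station names, strictly forward routing, and FIFO queues are illustrative assumptions; the paper's shuttle path planner and vibration-aware scheduling are not reproduced here.

```python
import queue
import dataclasses

# Sketch: a central agent dispatches jobs; each station agent processes its
# queue and forwards the part (via a notional "shuttle") to the next station.
@dataclasses.dataclass
class Job:
    part_id: int
    steps: list            # ordered station names remaining for this part

ROUTE = ["milling", "polishing", "cleaning", "inspection"]  # assumed stations
stations = {name: queue.Queue() for name in ROUTE}

def central_agent(jobs):
    for job in jobs:                         # dispatch first step of every job
        stations[job.steps[0]].put(job)

def station_agent(name):
    while not stations[name].empty():
        job = stations[name].get()
        print(f"{name}: processing part {job.part_id}")
        job.steps.pop(0)                     # step done; forward downstream
        if job.steps:
            stations[job.steps[0]].put(job)

central_agent([Job(i, ROUTE.copy()) for i in range(3)])
for name in ROUTE:                           # sequential stand-in for concurrency
    station_agent(name)
```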
Procedia PDF Downloads 129

1863 Effect of the Workpiece Position on the Manufacturing Tolerances
Authors: Rahou Mohamed, Sebaa Fethi, Cheikh Abdelmadjid
Abstract:
Manufacturing tolerancing is intended to determine the intermediate geometrical and dimensional states of a part during its manufacturing process. These manufacturing dimensions serve to satisfy not only the functional requirements given in the definition drawing but also the manufacturing constraints, for example geometrical defects of the machine, vibration, and wear of the cutting tool. The choice of positioning has an important influence on the cost and quality of manufacture. To address this problem, a two-step approach has been developed: the first step is dedicated to determining the optimum position, and the second step studies the effect of clamping (tightening) on the tolerance interval.
Keywords: dispersion, tolerance, manufacturing, position
Procedia PDF Downloads 336

1862 Optimizing the Use of Google Translate in Translation Teaching: A Case Study at Prince Sultan University
Authors: Saadia Elamin
Abstract:
The quasi-universal use of smartphones with an internet connection available at all times makes it a reflex action for translation undergraduates, once they encounter the least translation problem, to turn to the freely available web resource: Google Translate. As with other translator resources and aids, the use of Google Translate needs to be moderated in such a way that it contributes to developing translation competence. Here, instead of interfering with students' learning by providing ready-made solutions which might not always fit the context of use, it can help to consolidate the skills of analysis and transfer which students have already acquired. One way to do so is by training students to adhere to the basic principles of translation work. The most important of these is that analyzing the source text for comprehension comes first and foremost, before jumping into the search for target-language equivalents. Another basic principle is that certain translator aids and tools can be used for comprehension, while others are to be confined to the phase of re-expressing the meaning in the target language. The present paper reports on the experience of making a measured and reasonable use of Google Translate in translation teaching at Prince Sultan University (PSU), Riyadh. First, it traces the development that has taken place in the field of translation in this age of information technology, be it in translation teaching and translator training or in the real-world practice of the profession. Second, it describes how, with the aim of reflecting this development in the way translation is taught, senior students, after being trained in post-editing machine translation output, are authorized to use Google Translate in classwork and assignments. Third, the paper elaborates on the findings of this case study, which demonstrate that Google Translate, if used at the appropriate levels of training, can help to enhance students' ability to perform different translation tasks. This help extends from the search for terms and expressions to the tasks of drafting the target text, revising its content, and finally editing it. In addition, using Google Translate in this way fosters a reflexive and critical attitude towards web resources in general, thus maximizing the benefit gained from them in preparing students to meet the requirements of the modern translation job market.
Keywords: Google Translate, post-editing machine translation output, principles of translation work, translation competence, translation teaching, translator aids and tools
Procedia PDF Downloads 472

1861 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluating the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots at three sites across Germany. These sites represent a north-south gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topo-edaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² of 0.5), predictions of biodiversity indices were poor (r² of 0.14). DNNs showed higher generalization capacity than random forests when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for the DNN vs. 0.85 for the random forest). The DNN also achieved high performance when using Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topo-edaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and of ensuring spatial independence to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
Procedia PDF Downloads 218

1860 Prevalence, Antimicrobial Susceptibility Pattern and Associated Risk Factors for Salmonella Species and Escherichia Coli from Raw Meat at Butchery Houses in Mekelle, Tigray, Northern Ethiopia
Authors: Haftay Abraha Tadesse, Dawit Gebreegziabiher Hagos, Atsebaha Gebrekidan Kahsay, Mahumd Abdulkader
Abstract:
Background: Salmonella species and Escherichia coli (E. coli) are important foodborne pathogens affecting humans and animals. They are among the most important causes of infection associated with the consumption of contaminated food. This study aimed to determine the prevalence, antimicrobial susceptibility patterns, and associated risk factors for Salmonella species and E. coli in raw meat from butchery houses of Mekelle, Northern Ethiopia. Method: A cross-sectional study was conducted from January to December 2019. Socio-demographic data and risk factors were collected using a predesigned questionnaire. Meat samples were collected aseptically from the butchery houses and transported in an icebox to Mekelle University, College of Veterinary Sciences, for the isolation and identification of Salmonella species and E. coli. Antimicrobial susceptibility patterns were determined using the Kirby-Bauer disc diffusion method. Data obtained were cleaned and entered into the Statistical Package for the Social Sciences version 22, and logistic regression models with odds ratios were calculated. A p-value < 0.05 was considered statistically significant. Results: A total of 153 out of 384 (39.8%) meat specimens were found to be contaminated. The contamination rates for Salmonella species and E. coli were 15.6% (n=60) and 20.8% (n=80), respectively. Mixed contamination (Salmonella species and E. coli) was observed in 13 (3.4%) of the analyzed samples. Not washing hands regularly (AOR = 8.37; 95% CI: 2.75-25.50) and not using gloves during meat handling (AOR = 11.28; 95% CI: 4.69-27.10) were associated with overall bacterial contamination. All (100%) of the tested E. coli isolates were sensitive to ciprofloxacin, gentamicin, co-trimoxazole, sulphamethoxazole, ceftriaxone, and trimethoprim, and all Salmonella isolates were sensitive to ciprofloxacin, gentamicin, and norfloxacin, while both isolated species were resistant to amoxicillin-clavulanate and erythromycin. The overall multidrug resistance rates for Salmonella and E. coli were 51.4% (n=19) and 31.8% (n=14), respectively. Conclusion: Of the 384 raw meat samples, 153 were contaminated, with 60 (15.6%) and 80 (20.8%) contaminated by Salmonella species and E. coli, respectively. Poor handwashing practice and not using gloves during meat handling showed a significant association with bacterial contamination. Multidrug resistance was observed in 19 (51.4%) Salmonella and 14 (31.8%) E. coli isolates.
Keywords: antimicrobial susceptibility test, butchery houses, E. coli, raw meat, Salmonella species
Procedia PDF Downloads 172

1859 Air Conditioner Refrigerant and Burn: A Case Report
Authors: Okan Cakir, Ibrahim Arziman, Derya Can, Mete Erkencigil, Murat Durusu, S. Mehmet Yasar
Abstract:
Introduction: Burn injuries of various types and mechanisms are commonly seen in emergency departments, and their management varies from outpatient treatment to critical care. We report a rare burn injury caused by air conditioner refrigerant. Case report: A 22-year-old patient was admitted to the emergency department with a left hand burn injury and pain. He reported that the accident had occurred 30 minutes before admission while he was trying to repair an air conditioner: refrigerant suddenly erupted from its tank and burned his hand. On physical examination of the extremities, second-degree burn bullae were seen on the left hand over the second and third proximal phalanges, on the palmar side between the first and second phalanges, over the hypothenar region, and over the third and fourth proximal phalanges, together with hyperemia from the hand to the wrist. There was no motor or sensory deficit. As treatment, local silver sulfadiazine was applied to the burn area and an analgesic was prescribed. The patient was referred to the plastic surgery department for clinical follow-up. Conclusion: The clinician should take a comprehensive and careful history to ensure suitable management and treatment, as in this case of a rare burn occurring in an unusual way.
Keywords: air conditioner refrigerant, burn, emergency department, rare
Procedia PDF Downloads 340

1858 Using Textual Pre-Processing and Text Mining to Create Semantic Links
Authors: Ricardo Avila, Gabriel Lopes, Vania Vidal, Jose Macedo
Abstract:
This article offers an approach to the automatic discovery of semantic concepts and links in the domain of Oil Exploration and Production (E&P). Machine learning methods combined with textual pre-processing techniques were used to detect local patterns in texts and thus generate new concepts and new semantic links. Even when using more specific vocabularies within the oil domain, our approach achieved satisfactory results, suggesting that the proposal can be applied to other domains and languages, requiring only minor adjustments.
Keywords: semantic links, data mining, linked data, SKOS
Procedia PDF Downloads 178

1857 A Novel Machine Learning Approach to Aid Agrammatism in Non-fluent Aphasia
Authors: Rohan Bhasin
Abstract:
Agrammatism in non-fluent aphasia cases can be defined as a language disorder wherein a patient can only use content words (nouns, verbs, and adjectives) for communication; their speech is devoid of functional word types like conjunctions and articles, generating speech with extremely rudimentary grammar. Past approaches involve speech therapy of some order, with conversation analysis used to analyze pre-therapy speech patterns and qualitative changes in conversational behaviour after therapy. We describe a novel method to generate functional words (prepositions, articles, etc.) around content words (nouns, verbs, and adjectives) using a combination of natural language processing and deep learning algorithms. The approach this paper investigates is a sequence-to-sequence (seq2seq) model built on LSTMs: it takes in a sequence of inputs and outputs a sequence. This approach needs a significant amount of training data, with each training example being a pair such as (content words, complete sentence). We generate such data by starting with complete sentences from a text source and removing the functional words to obtain just the content words. However, this approach would require a lot of training data to produce coherent output. The assumption of this approach is that the content words received in the inputs of both text models are preserved, i.e., they will not be altered after the functional grammar is slotted in. This is a potential limitation in cases of severe agrammatism, where such order might not be inherently correct. This approach can be used to assist communication in mild agrammatism in non-fluent aphasia cases: by generating these function words around the content words, we can provide meaningful sentence options to the patient for articulate conversations. Our project thus translates the use case of generating sentences from content-specific words into an assistive technology for non-fluent aphasia patients.
Keywords: aphasia, expressive aphasia, assistive algorithms, neurology, machine learning, natural language processing, language disorder, behaviour disorder, sequence to sequence, LSTM
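The training-data generation step described above is simple to sketch in Python: strip function words from complete sentences to form (content words, complete sentence) pairs for the seq2seq model. The function-word list below is a small assumed sample, not an exhaustive inventory.

```python
# Sketch of the described data-generation step: remove function words from
# complete sentences to obtain (content words, complete sentence) training
# pairs. FUNCTION_WORDS is a small assumed sample list.
FUNCTION_WORDS = {"a", "an", "the", "and", "but", "or", "to", "of", "in",
                  "on", "at", "is", "are", "was", "were", "with", "for"}

def make_pair(sentence: str):
    content = [w for w in sentence.lower().split() if w not in FUNCTION_WORDS]
    return " ".join(content), sentence

corpus = ["The dog ran to the park", "She is reading a book in the garden"]
for src, tgt in (make_pair(s) for s in corpus):
    print(f"input: {src!r} -> target: {tgt!r}")
# input: 'dog ran park' -> target: 'The dog ran to the park'
```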
Procedia PDF Downloads 162

1856 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces
Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi
Abstract:
For any construction project, slope calculations are necessary in order to evaluate constructability on the site, such as the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects in order to assess cut-and-fill volume calculations and determine the inverts of pipes. There are several established instruments commonly used to determine slopes, such as the dumpy level, Abney level or hand level, inclinometer, tacheometer, and the Henry method, and surveyors are very familiar with the use of these instruments to calculate slopes. However, they have drawbacks which cannot be neglected in major surveying works. Firstly, they require expert surveyors and skilled staff. Accessibility, visibility, and transporting these instruments and surveying teams to remote hilly terrain are difficult. Also, the determination of gentle slopes with these instruments, as in road and sewer drainage construction in congested urban places, is not easy. This paper aims to develop a method that requires minimum fieldwork, minimum instruments, no high-end technology, instruments, or software, and low cost. Requiring only basic and handy surveying accessories, namely a plane table with a fixed weighing machine, standard weights, an alidade, a tripod, and ranging rods, the method should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Also, being simple and easy to understand and perform, local people in rural areas can easily be trained in the proposed method. The idea for the proposed method is based on the principle of resolution of weight components: when an object of standard weight 'W' is placed on an inclined surface with a weighing machine below it, the weighing machine measures only the cosine component of its weight, so the slope can be determined from the relation between the true (actual) weight and the apparent weight. A proper procedure is to be followed, which includes site location, centering and sighting work, fixing the whole set at the identified station, and finally taking the readings. A set of experiments for slope determination, for mild and moderate slopes, was carried out by the proposed method and by a theodolite, both in a controlled environment (on the college campus) and in an uncontrolled environment (an actual site). The slopes determined by the proposed method were compared with those determined by the established instruments. For example, it was observed for mild slopes that, over the same distances, the difference between the slope obtained by the proposed method and by the established method ranges from 4′ for a distance of 8 m to 2°15′20″ for a distance of 16 m in an uncontrolled environment. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The correlation between the proposed method and the established method is good, from 0.91 to 0.99, for various combinations of mild and moderate slopes in controlled and uncontrolled environments.
Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction
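The stated principle can be made explicit. With a standard weight W placed on a weighing machine lying on an incline of angle θ, the machine registers only the normal (cosine) component; the numerical example below is assumed for illustration:

```latex
% Weight resolution on an incline:
W_{\text{apparent}} = W\cos\theta
\quad\Longrightarrow\quad
\theta = \arccos\!\left(\frac{W_{\text{apparent}}}{W}\right),
\qquad \text{slope (grade)} = \tan\theta .

% Assumed example: a 10 kg standard weight reading 9.8 kg on the machine gives
\theta = \arccos(0.98) \approx 11.5^{\circ},
\qquad \tan\theta \approx 0.20 \;(\text{a 20\% grade}).
```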
Procedia PDF Downloads 94

1855 Automated Manual Handling Risk Assessments: Practitioner Experienced Determinants of Automated Risk Analysis and Reporting Being a Benefit or Distraction
Authors: S. Cowley, M. Lawrance, D. Bick, R. McCord
Abstract:
Technology that automates manual handling (musculoskeletal disorder, or MSD) risk assessments is increasingly available to ergonomists, engineers, and generalist health and safety practitioners alike. The risk assessment process is generally based on the use of wearable motion sensors that capture information about worker movements for real-time or post-hoc analysis. Traditionally, MSD risk assessment is undertaken with the assistance of a checklist such as that from the SafeWork Australia code of practice, with the expert assessor observing the task and ideally engaging with the worker in a discussion about the detail. Automation enables the non-expert to complete assessments and does not always require the assessor to be present. This clearly has cost and time benefits for the practitioner, but is it an improvement on assessment by a human? Human risk assessments draw on the knowledge and expertise of the assessor but, like all risk assessments, are highly subjective. The complexity of the checklists and models used in the process can be off-putting and will sometimes lead to the assessment becoming the focus and the end rather than a means to an end; the focus on risk control is lost. Automated risk assessment handles the complexity of the assessment for the assessor and delivers a simple risk score that enables decision-making regarding risk control. Being machine-based, it is objective and will deliver the same result each time it assesses an identical task. However, the WHS professional needs to know that this emergent technology asks the right questions and delivers the right answers, and whether it improves the risk assessment process and results or simply distances the professional from the task and the worker. They need clarity as to whether automation of manual task risk analysis and reporting leads to risk control or to a loss of focus on the worker. Critically, they need evidence as to whether automation in this area of hazard management leads to better risk control or just a bigger collection of assessments. This study of the practitioner-experienced determinants of automated manual task risk analysis and reporting being a benefit or a distraction will address an understanding of emergent risk assessment technology, its use, and things to consider when making decisions about adopting and applying these technologies.
Keywords: automated, manual-handling, risk-assessment, machine-based
Procedia PDF Downloads 118

1854 Risk Management Approach for a Secure and Performant Integration of Automated Drug Dispensing Systems in Hospitals
Authors: Hind Bouami, Patrick Millot
Abstract:
The medication dispensing system is a life-critical system whose failure may result in preventable adverse events leading to longer patient stays in hospitals or to patient death. Automation has led to great improvements in life-critical systems, as it increases safety, efficiency, and comfort. However, critical risks related to the complexity of the medical organization and to the integration of automated solutions can threaten drug dispensing security and performance. Knowledge about the system's complexity and about the human-machine parameters to control for automated equipment security and performance will help operators to secure their automation process and to optimize their system's reliability. In this context, this study aims to document the operator's situation awareness of automation risks and of the parameters involved in automation security and performance. Our risk management approach has been deployed in the North Luxembourg hospital center's pharmacy, which has been equipped with automated drug dispensing systems since 2009. With more than 4 million euros of gains generated, the North Luxembourg hospital center's success story was enabled by management commitment, the pharmacy's involvement in the implementation and improvement of the automation project, and the close collaboration between the pharmacy and the firm Sinteco to implement the innovation and organizational actions necessary for the secure and performant integration of automated solutions. An analysis of the actions implemented by the hospital and of the parameters involved in automated equipment integration security and performance has been made. The parameters to control are human aspects (6.25%), technical aspects (50%), and human-machine interaction (43.75%). The implementation of an anthropocentric analysis system before automation would have prevented and optimized the control of risks related to automation.
Keywords: automated drug delivery systems, hospitals, human-centered automated system, risk management
Procedia PDF Downloads 136

1853 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death in the world; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain, together with changes in the ST segment and T wave of the ECG, occurs shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings is constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique is developed based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, to detect ischemic events by using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by using the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination thresholds and numbers of ECG segments, the probability of detection and the probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, the comparison between the performances of the SVM- and GMM-based classification shows that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
Keywords: ECG classification, Gaussian mixture model, Neyman-Pearson approach, support vector machine
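As a hedged sketch of the GMM-based Neyman-Pearson detector described above, the following Python snippet fits a Gaussian mixture to (synthetic) ST-T feature vectors and flags segments whose log-likelihood falls below a threshold chosen from the training likelihoods. The feature data, number of mixture components, and the 5% false-alarm quantile are assumptions, not the paper's tuned values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: fit a GMM to ST-T features, then flag ECG segments whose
# log-likelihood falls below a threshold set for a target false-alarm rate.
# The synthetic features and the 5% quantile are illustrative assumptions.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 4))      # stand-in ST-T feature vectors

gmm = GaussianMixture(n_components=3, random_state=0).fit(X_train)

# Neyman-Pearson style threshold from the training likelihoods (5% false alarms).
threshold = np.quantile(gmm.score_samples(X_train), 0.05)

X_new = rng.normal(2.0, 1.0, size=(20, 4))         # segments to classify
loglik = gmm.score_samples(X_new)                  # per-segment log-likelihood
print("outliers (possible ischemia):", int((loglik < threshold).sum()))
```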
Procedia PDF Downloads 160

1852 Advantages of Electrifying Offshore Compression System
Authors: Siva Sankara Arudra, Kamaruzaman Baharuddin, Ir. Ahmed Fadzil Mustafa Kamal, Ir. Abdul Latif Mohamed
Abstract:
The advancement of electrical and electronics technologies has rewarded the oil and gas industry with great opportunities to embed more environmentally friendly solutions into design. Most offshore oil and gas producers set engineering and production asset goals that promote greater use of environmentally friendly compression system technologies to eliminate hazardous emissions from conventional gas compressor drivers. This paper therefore comprehensively elaborates on a parametric study of integrating the latest electrical and electronic drive technology into an existing compression system. The study covers layout, reliability and availability, maintainability, emissions, and cost. An existing offshore facility that utilized gas turbines as the drivers for gas compression was set as the Conventional Case for this study, while the Electrification Case utilizes electric motor drives as the drivers for the compression system. The findings of this study indicate more advantages for driver electrification than for the conventional compression system, and they can be set as a benchmark for future offshore driver selection for gas compression systems of similar operating parameters and power range.
Keywords: turbomachinery, electrification, emission, compression system
Procedia PDF Downloads 148

1851 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, machine learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data are well codified. However, much of the data present in Electronic Health Records (EHR) are unlikely to prove suitable for classic ML approaches. Furthermore, as the data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc delivery of triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation are incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory networks, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. In this section, we compare the performance of randomly generated regression trees and support vector machines, and we determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our recurrent neural network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program for lexemes within true-positive and true-negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
Keywords: artificial neural networks, data-mining, machine learning, medical informatics
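A minimal sketch of the recurrent classifier family compared in the paper follows: an embedding layer feeding an LSTM, with a sigmoid output that flags likely frequent-attender cases. The vocabulary size, layer widths, and random stand-in data are assumptions; real inputs would be tokenized OOHC triage notes.

```python
import numpy as np
from tensorflow.keras import layers, models

# Minimal recurrent text classifier sketch: embedding -> LSTM -> sigmoid.
# Vocabulary size, sequence length, and the random token data are assumptions.
vocab, seq_len = 5000, 200
model = models.Sequential([
    layers.Embedding(vocab, 64),
    layers.LSTM(32),                      # swap for layers.GRU(32) to compare
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randint(1, vocab, size=(64, seq_len))   # fake tokenized notes
y = np.random.randint(0, 2, size=(64,))               # outlier labels
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:2], verbose=0))                # per-case outlier scores
```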
Procedia PDF Downloads 131

1850 Creating Energy Sustainability in an Enterprise
Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala
Abstract:
As we enter the new era of artificial intelligence (AI) and cloud computing, we rely largely on the machine learning and natural language processing capabilities of AI and on energy-efficient hardware and software devices in almost every industry sector. In these industry sectors, much emphasis is placed on developing new and innovative methods for producing and conserving energy and on slowing the depletion of natural resources. The core pillars of sustainability are economic, environmental, and social, informally referred to as the 3 P's (People, Planet, and Profits). The 3 P's play a vital role in creating a core sustainability model in the enterprise. Natural resources are continually being depleted, so there is a growing demand for renewable energy. With this growing demand, there is also a growing concern in many industries about how to reduce carbon emissions and conserve natural resources while adopting sustainability in corporate business models and policies. In our paper, we discuss the driving forces, such as climate change, natural disasters, pandemics, disruptive technologies, corporate policies, scaled business models, and emerging social media and AI platforms, that influence the three main pillars of sustainability (the 3 P's). Through this paper, we bring an overall perspective on enterprise strategies, with a primary focus on bringing about the cultural shifts needed to adopt energy-efficient operational models. Overall, many industries across the globe are incorporating core sustainability principles such as reducing energy costs, reducing greenhouse gas (GHG) emissions, reducing waste and increasing recycling, adopting advanced monitoring and metering infrastructure, and reducing server footprint and compute resources (shared IT services, cloud computing, and application modernization) with the vision of a sustainable environment.
Keywords: climate change, pandemic, disruptive technology, government policies, business model, machine learning and natural language processing, AI, social media platform, cloud computing, advanced monitoring, metering infrastructure
Procedia PDF Downloads 110

1849 Maackiain Attenuates Alpha-Synuclein Accumulation and Improves 6-OHDA-Induced Dopaminergic Neuron Degeneration in Parkinson's Disease Animal Model
Authors: Shao-Hsuan Chien, Ju-Hui Fu
Abstract:
Parkinson's disease (PD) is a degenerative disorder of the central nervous system that is characterized by progressive loss of dopaminergic neurons in the substantia nigra pars compacta and by motor impairment. Aggregation of α-synuclein in neuronal cells plays a key role in this disease. At present, therapeutics for PD provide moderate symptomatic benefit but are not able to delay the progression of the disease. Current efforts in the treatment of PD aim to identify new drugs that slow or arrest the progressive course of PD by interfering with a disease-specific pathogenetic process in PD patients. Maackiain is a bioactive compound isolated from the roots of the Chinese herb Sophora flavescens. The purpose of the present study was to assess the potential of maackiain to ameliorate PD in Caenorhabditis elegans models. Our data reveal that maackiain prevents α-synuclein accumulation in the transgenic Caenorhabditis elegans model and also reduces dopaminergic neuron degeneration and improves food-sensing behavior and lifespan in the 6-hydroxydopamine-induced Caenorhabditis elegans model, thus indicating its potential as a candidate antiparkinsonian drug.
Keywords: maackiain, Parkinson's disease, dopaminergic neurons, α-synuclein
Procedia PDF Downloads 197

1848 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT
Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar
Abstract:
The X-ray attenuation coefficient μ(E) of any substance, at energy E, is the sum of contributions from Compton scattering, μCom(E), and the photoelectric effect, μPh(E). In terms of the electron density (ρe) and the effective atomic number (Zeff), μCom(E) is proportional to ρe·fKN(E), while μPh(E) is proportional to (ρe·Zeff^x)/E^y, where fKN(E) is the Klein-Nishina formula and x and y are the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since μCom(E) and μPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow ⟨μ(V)⟩ = ⟨μw(V)⟩[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging ⟨…⟩ accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of μ(E) with respect to X and Y implies that (a) ⟨μ(V)⟩ is a linear combination of X and Y, and (b) for inversion, X and Y can be written as linear combinations of two independent observations ⟨μ(V1)⟩ and ⟨μ(V2)⟩ with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases by taking V = 100, 140 kVp, as used for cardiological investigations. The S(E,V) are generated by using the Boone-Seibert source spectrum superposed on aluminium filters of different thickness lAl, with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state or GdOS scintillator detector. In the values of X and Y found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose, and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22%, while the error in X is +15%. This implies that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible, so the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum
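The model and the two-voltage inversion sketched in the abstract can be written out as follows; the proportionality constants a and b and the matrix notation are ours, introduced for clarity:

```latex
% Attenuation model: Compton term plus photoelectric term, linear in X and Y.
\mu(E) = a\,\rho_e f_{KN}(E) + b\,\frac{\rho_e Z_{\mathrm{eff}}^{\,x}}{E^{y}}
       = \alpha(E)\,X + \beta(E)\,Y,
\qquad X = \rho_e,\quad Y = \rho_e Z_{\mathrm{eff}}^{\,x}.

% Spectrum-averaged measurements at two tube voltages give a 2x2 linear system:
\langle\mu(V_i)\rangle
  = \langle\alpha(V_i)\rangle X + \langle\beta(V_i)\rangle Y,\quad i = 1,2,
\qquad
\begin{pmatrix} X \\ Y \end{pmatrix}
= \begin{pmatrix}
  \langle\alpha(V_1)\rangle & \langle\beta(V_1)\rangle \\
  \langle\alpha(V_2)\rangle & \langle\beta(V_2)\rangle
  \end{pmatrix}^{-1}
\begin{pmatrix} \langle\mu(V_1)\rangle \\ \langle\mu(V_2)\rangle \end{pmatrix},

% with each measurement recovered from the Hounsfield units via
\langle\mu(V)\rangle = \langle\mu_w(V)\rangle\bigl[\,1 + \mathrm{HU}(V)/1000\,\bigr].
```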
Procedia PDF Downloads 399

1847 Smartphone-Based Human Activity Recognition by Machine Learning Methods
Authors: Yanting Cao, Kazumitsu Nawata
Abstract:
As smartphones are upgraded, their software and hardware become smarter, so smartphone-based human activity recognition can be made more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector of time- and frequency-domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature-selection and parameter-adjustment steps, a well-performing SVM classifier was trained.
Keywords: smart sensors, human activity recognition, artificial intelligence, SVM
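As a hedged sketch of such a feature-selection-plus-SVM pipeline, the following Python snippet standardizes the 561-dimensional vectors, keeps the k most informative features, and cross-validates an RBF-kernel SVM. The random stand-in data, k = 100, and the kernel and C values are assumptions, not the paper's tuned configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

# Random data stands in for the real 561-feature smartphone dataset; the
# selected-feature count and SVM settings are illustrative assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 561))            # 561 time/frequency-domain features
y = rng.integers(0, 6, size=300)           # six activities of daily living

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=100),
                    SVC(kernel="rbf", C=10.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```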
Procedia PDF Downloads 141

1846 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge in detecting small buildings in such regions is both the spatial and the spectral resolution of the optical sensor: densely packed buildings with similar construction materials are difficult to separate because of their similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required within a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, so any weaknesses or underpopulated regions in the data can be skipped in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source for quality-checking the detected buildings; this has greatly reduced the false-positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data are scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
Procedia PDF Downloads 168

1845 Dissolution of South African Limestone for Wet Flue Gas Desulphurization
Authors: Lawrence Koech, Ray Everson, Hein Neomagus, Hilary Rutto
Abstract:
Wet flue gas desulphurization (FGD) systems are commonly used to remove sulphur dioxide from flue gas by contacting it with limestone in the aqueous phase, which is obtained by dissolution. Dissolution is important as it affects the overall performance of a wet FGD system. In the present study, the effects of pH, stirring speed, solid-to-liquid ratio, and acid concentration on the dissolution of limestone in an organic acid (adipic acid) were investigated using the pH-stat apparatus. Calcium ions were analyzed at the end of each experiment using atomic absorption spectroscopy (AAS).
Keywords: desulphurization, limestone, dissolution, pH stat apparatus
Procedia PDF Downloads 459

1844 DQN for Navigation in Gazebo Simulator
Authors: Xabier Olaz Moratinos
Abstract:
Drone navigation is critical, particularly during the initial phases such as the initial ascent, where pilots may fail due to strong external interferences that could potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters under external disturbances pushing it at up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability in all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
Keywords: machine learning, DQN, gazebo, navigation
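A compact sketch of the DQN core behind such a training setup is given below: a Q-network over state-action values, an experience-replay buffer, epsilon-greedy action selection, and a TD update against a target network. The state and action dimensions, hyperparameters, and dummy transitions are assumptions; in the paper the transitions come from the Gazebo-simulated drone via ROS.

```python
import random
import collections
import torch
import torch.nn as nn

# Assumed sizes: a 9-dim pose/velocity state and 4 discrete thrust actions.
state_dim, n_actions, gamma = 9, 4, 0.99
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = collections.deque(maxlen=10_000)          # experience replay

def act(state, eps=0.1):                           # epsilon-greedy selection
    if random.random() < eps:
        return random.randrange(n_actions)
    with torch.no_grad():
        return q_net(torch.as_tensor(state, dtype=torch.float32)).argmax().item()

def train_step(batch_size=32):
    if len(buffer) < batch_size:
        return
    batch = random.sample(buffer, batch_size)
    s, a, r, s2, done = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():                          # TD target from target network
        target = r + gamma * target_net(s2).max(1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()

# Dummy transitions stand in for Gazebo/ROS rollouts; reward/done are placeholders.
for _ in range(200):
    s = [random.random() for _ in range(state_dim)]
    a = act(s)
    s2 = [random.random() for _ in range(state_dim)]
    buffer.append((s, a, 1.0, s2, 0.0))
    train_step()
target_net.load_state_dict(q_net.state_dict())     # periodic target sync
```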
Procedia PDF Downloads 111

1843 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System
Authors: Dong Seop Lee, Byung Sik Kim
Abstract:
In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields, and these artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, which draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence through progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc., is mainly managed in structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data for disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, and scanned images of reports into electronic documents. The converted disaster data is then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information from unstructured data based on OCR is an important element of smart disaster management. In this paper, a character recognition rate of over 90% was achieved for Korean characters by using upgraded OCR; since the recognition rate depends on the fonts, sizes, and special symbols of the characters, we improved it through a machine learning algorithm. The converted structured data is managed in a standardized disaster-information form connected to the disaster code system, which ensures that structured information can be stored and retrieved across the entire disaster cycle, covering historical disaster progress, damages, response, and recovery. The expected effect of this research is that it can be applied to smart disaster management and decision-making by combining artificial intelligence technologies with historical big data.
Keywords: disaster information management, unstructured data, optical character recognition, machine learning
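As a hedged sketch of the OCR ingestion step, the snippet below converts a scanned report image to text and attaches a code from the disaster code system. The Tesseract engine (via pytesseract), the Korean language pack, the file name, and the keyword-to-code mapping are all illustrative assumptions; the paper does not name its OCR engine.

```python
from PIL import Image
import pytesseract

# Sketch: OCR a scanned disaster report, then assign a code from an assumed
# disaster code system based on keywords. Engine, language pack, file name,
# and the keyword-to-code mapping are illustrative assumptions.
def ingest_report(path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(path), lang="kor+eng")
    code = "D-FLOOD" if ("홍수" in text or "flood" in text.lower()) else "D-UNKNOWN"
    return {"source": path, "disaster_code": code, "text": text}

record = ingest_report("scanned_report_1998.png")   # hypothetical file
print(record["disaster_code"])
```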
Procedia PDF Downloads 1271842 The Comparison of Backward and Forward Running Program on Balance Development and Plantar Flexion Force in Pre Seniors: Healthy Approach
Authors: Neda Dekamei, Mostafa Sarabzadeh, Masoumeh Bigdeli
Abstract:
Backward running is commonly used in sports conditioning, motor learning, and neurological applications, and even more commonly in physical rehabilitation. The present study evaluated the effects of six weeks of backward and forward running on balance adaptation. Twelve male and female pre-seniors aged 45-60 years participated and were randomly assigned to a backward running group (n = 6) or a forward running group (n = 6). For six weeks, three sessions per week, all subjects performed the assigned backward or forward running training on a treadmill (65-80% of HRmax). Measurements were taken with a force plate and electromyography before and after the intervention, and the data were analyzed using a t-test. Based on the obtained data, significant improvements in balance and plantar flexion force were recorded for backward running (BR), with no differences for forward running (FR). It appears that backward running provides a stronger stimulus for developing plantar flexion force and strengthening the ankle protectors, which leads to improved balance in the pre-aging period. It can be recommended as an effective method to promote seniors' quality of life, especially in neuromuscular balance parameters.Keywords: backward running, balance, plantar flexion, pre seniors
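For readers unfamiliar with the analysis step, the following minimal Python example runs a paired t-test on pre/post balance scores for a group of six, mirroring the study's group size. The numbers are invented for illustration and are not the study's data.

from scipy import stats

# Hypothetical pre/post balance scores for the backward running group (n = 6).
pre  = [42.1, 39.8, 44.0, 40.5, 41.2, 38.9]
post = [47.3, 45.1, 48.6, 44.9, 46.0, 43.8]

t, p = stats.ttest_rel(pre, post)   # paired t-test: same subjects before and after
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 would indicate a significant change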
Procedia PDF Downloads 1651841 The Causes and Effects of Poor Household Sanitation: Case Study of Kansanga Parish
Authors: Rosine Angelique Uwacu
Abstract:
Poor household sanitation is rife in Uganda, especially in Kampala. This study was carried out with the goal of establishing the main causes and effects of poor household sanitation in Kansanga parish. The study's objectives were to identify the various ways in which waste is generated and disposed of in Kansanga parish, to identify the different hygiene procedures and behaviors for waste handling, to assess the health effects of poor household sanitation, and to recommend appropriate measures for addressing poor hygiene in the parish. The study used a survey method with cluster sampling, because there is no population register or sufficient information, and the geographic distribution of individuals is widely scattered. Data were collected through interviews, observation, and questionnaires from a sample of 100 households. The study revealed that some households use wheeled-bin collection, skip hire, and roll-on/roll-off containers, while others take their waste to refuse collection vehicles. Surprisingly, the majority of households reported that they use polythene bags ('kavera') and sometimes plastic sacks to dispose of their waste, which is dumped in drainage channels, dustbins, and other illegal dumping sites. The study showed that washing hands from small jerrycans after using the toilet had been adopted by most households, as there were few or no alternatives. The common health effects of poor household sanitation in Kansanga parish are disease outbreaks such as malaria, typhoid, and diarrhea. Finally, the study made a number of recommendations for maintaining and achieving adequate household sanitation in Kansanga parish: sensitization of community members by leaders such as local councillors; establishment of community sanitation days for people to collectively and voluntarily carry out good sanitation practices such as digging trenches, burning garbage, and proper waste management and disposal; and distribution by authorities such as the Kampala Capital City Authority of dumping containers, or allocation of dumping sites where people can dispose of their waste, preferably at a minimal cost, for proper management.Keywords: household sanitation, kansanga parish, Uganda, waste
Procedia PDF Downloads 1881840 Optimal Design of Propellant Grain Shape Based on Structural Strength Analysis
Authors: Chen Xiong, Tong Xin, Li Hao, Xu Jin-Sheng
Abstract:
Experimental and simulation research on the structural integrity of propellant grain in a solid rocket motor (SRM) with a high volumetric fraction was conducted. First, using SRM parametric modeling functions built with Python, the secondary development tool of ABAQUS, three-dimensional parameterized modeling programs for star-shaped, wheel-shaped, and wing-cylindrical grains were developed. Then, the mechanical properties of the star-shaped grain under different loads were obtained from the automatically generated finite element model in ABAQUS. Next, several optimization algorithms were introduced to optimize the star-shaped, wheel-shaped, and wing-cylindrical grains; after meeting the requirements on burning-surface evolution and volumetric fraction, the optimal three-dimensional grain shapes were obtained. Finally, by means of the parametric modeling functions, pressure data from the SRM's cold pressurization test was applied directly to the simulation of the grain's mechanical performance. The results verify the reliability and practicality of the parameterized modeling program for SRMs.Keywords: cold pressurization test, parametric modeling, structural integrity, propellant grain, SRM
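As a simplified illustration of the parametric idea (not the authors' ABAQUS program), the sketch below generates the 2D port profile of an N-point star grain from a handful of shape parameters. The parameter names and default values are assumptions for demonstration only.

import math

def star_port_profile(n_points=6, r_inner=20.0, r_outer=35.0):
    """Return (x, y) vertices of a simplified star-shaped port cross-section."""
    pts = []
    for k in range(2 * n_points):
        # Alternate between the star-tip radius and the valley radius.
        r = r_outer if k % 2 == 0 else r_inner
        theta = math.pi * k / n_points
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

# Sweep the point count, as an optimizer over grain shapes might.
for n in (5, 6, 7):
    print(n, star_port_profile(n)[:2])

In the authors' workflow, a profile like this would be rebuilt in ABAQUS for every candidate design, letting the optimizer evaluate burning-surface evolution and structural margins without manual remodeling.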
Procedia PDF Downloads 359