Search results for: participatory error correction process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17174

16274 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined through the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). The lack of real-time measurement of the sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables that are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables, namely initial crystal size (L0), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model; the initial crystal size (L0) was found not to play a significant role. The goodness of the resulting regression model was evaluated: the coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) as 4.6%. The high R² (~1.0) and the reasonably low MARE indicate that the model can predict the sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of the sugar crystal size during the crystallization process in a fed-batch vacuum evaporative crystallizer.
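
As a rough illustration of this workflow, the sketch below builds a full 2-level factorial design for the 7 coded inputs (2^7 = 128 runs), fits a 6-input linear regression after dropping L0, and reports R² and MARE. The coded levels and the placeholder response function are assumptions for illustration, not the authors' crystallizer model.

```python
# Minimal sketch: 2-level factorial design (128 runs) + 6-input regression + R^2/MARE.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

factors = ["L0", "T", "P", "Ff", "Fs", "S0", "t"]          # the 7 inputs
# Full 2-level factorial design in coded units: 2^7 = 128 rows.
X = np.array(list(itertools.product((-1.0, 1.0), repeat=len(factors))))

# Placeholder response standing in for the existing crystallizer models.
rng = np.random.default_rng(0)
y = 1.0 + 0.25 * X[:, 1] - 0.15 * X[:, 2] + 0.1 * X[:, 6] + rng.normal(0, 0.01, len(X))

# Drop L0 (column 0), found insignificant, and fit the 6-input model.
model = LinearRegression().fit(X[:, 1:], y)
pred = model.predict(X[:, 1:])

r2 = r2_score(y, pred)
mare = np.max(np.abs((y - pred) / y)) * 100                 # in percent
print(f"R^2 = {r2:.3f}, MARE = {mare:.1f}%")
```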

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 203
16273 Imputing Missing Data in Electronic Health Records: A Comparison of Linear and Non-Linear Imputation Models

Authors: Alireza Vafaei Sadr, Vida Abedi, Jiang Li, Ramin Zand

Abstract:

Missing data is a common challenge in medical research and can lead to biased or incomplete results. When data bias leaks into models, it further exacerbates health disparities: biased algorithms can lead to misclassification and reduced resource allocation and monitoring as part of prevention strategies for certain minorities and vulnerable segments of patient populations, which in turn further reduces the data footprint from the same populations, creating a vicious cycle. This study compares the performance of six imputation techniques, grouped into linear and non-linear models, on two different real-world electronic health records (EHR) datasets representing 17864 patient records. The mean absolute percentage error (MAPE) and root mean squared error (RMSE) are used as performance metrics, and the results show that the linear models outperformed the non-linear models on both metrics. These results suggest that linear models may sometimes be the optimal choice for imputing laboratory variables in terms of imputation efficiency and uncertainty of the predicted values.
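
A minimal sketch of such a comparison, assuming a numeric laboratory-variable matrix: one linear imputer (scikit-learn's iterative imputer with its default Bayesian ridge estimator) against one non-linear imputer (k-nearest neighbours), scored by RMSE and MAPE on deliberately masked entries. The synthetic data and masking rate are illustrative only.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, KNNImputer

rng = np.random.default_rng(42)
X_true = rng.normal(loc=100, scale=15, size=(500, 6))   # stand-in lab panel

# Mask 10% of the entries to simulate missingness.
mask = rng.random(X_true.shape) < 0.10
X_obs = X_true.copy()
X_obs[mask] = np.nan

def score(imputer, name):
    X_hat = imputer.fit_transform(X_obs)
    err = X_hat[mask] - X_true[mask]
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / X_true[mask])) * 100
    print(f"{name}: RMSE={rmse:.2f}, MAPE={mape:.2f}%")

score(IterativeImputer(random_state=0), "linear (iterative, Bayesian ridge)")
score(KNNImputer(n_neighbors=5), "non-linear (kNN)")
```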

Keywords: EHR, machine learning, imputation, laboratory variables, algorithmic bias

Procedia PDF Downloads 80
16272 Crude Distillation Process Simulation Using Unisim Design Simulator

Authors: C. Patrascioiu, M. Jamali

Abstract:

The paper deals with the simulation of the crude distillation process using the Unisim Design simulator. The necessity of simulating this process is argued both by considerations related to the design of the crude distillation column and by considerations related to the design of advanced control systems. In order to use the Unisim Design simulator for the crude distillation process, the simulators used in Romania were identified and an analysis of the PRO/II, HYSYS, and Aspen HYSYS simulators was carried out. This analysis allowed the authors to draw conclusions about successful crude oil modelling. The first aspect developed by the authors is the implementation of specific petroleum liquid-vapour equilibrium problems in the Unisim Design simulator. The second major element of the article is the development of the methodology and the elaboration of the simulation program for the crude distillation process using Unisim Design resources. The obtained results validate the proposed methodology and will allow dynamic simulation of the process.

Keywords: crude oil, distillation, simulation, Unisim Design, simulators

Procedia PDF Downloads 242
16271 Virtual Platform for Joint Amplitude Measurement Based on MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform intended for joint amplitude monitoring and telerehabilitation processes, offering an efficient compromise between cost and technical performance. The platform's characteristics offer high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or nonexistent. The platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with adequate interfaces to make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system presented a good fit to their respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and RMSE 2 = 2.28°), it can be observed that during arm motion in either sense the estimation error is negligible; in fact, error appears only during sense inversion, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of sense in motion. A damped response of the virtual platform can be seen in other images, where error analysis shows that amplitude is underestimated at maximum amplitude and overestimated at minimum amplitude. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost/quality and precision/accessibility trade-offs optimized. These characteristics, achieved by efficiently using state-of-the-art accessible generic sensor and hardware technology together with adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large and often marginal populations where technologies and specialists are not available but basic communication networks are accessible.

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 257
16270 Investigation of User Position Accuracy for Stand-Alone and Hybrid Modes of the Indian Navigation with Indian Constellation Satellite System

Authors: Naveen Kumar Perumalla, Devadas Kuna, Mohammed Akhter Ali

Abstract:

Satellite navigation systems such as the United States' Global Positioning System (GPS) play a significant role in determining the user position. Similar to GPS, the Indian Regional Navigation Satellite System (IRNSS) is a satellite navigation system indigenously developed by the Indian Space Research Organization (ISRO) to meet the country's navigation applications. This system is also known as Navigation with Indian Constellation (NavIC). The NavIC system's main objective is to offer Positioning, Navigation and Timing (PNT) services to users in its two service areas, covering the Indian landmass and the Indian Ocean. Six NavIC satellites are already deployed in space, and their receivers are in the performance evaluation stage. Four NavIC dual-frequency receivers are installed in the Advanced GNSS Research Laboratory (AGRL) in the Department of Electronics and Communication Engineering, University College of Engineering, Osmania University, India. The NavIC receivers can be operated in two positioning modes: stand-alone IRNSS and hybrid (IRNSS+GPS). In this paper, various parameters such as Dilution of Precision (DoP), three-dimensional (3D) Root Mean Square (RMS) position error, and horizontal position error are analysed with respect to satellite visibility, using real-time IRNSS data obtained by operating the receiver in both positioning modes. Data from two typical days (6 and 7 July 2017) for the Hyderabad station (Latitude 17°24'28.07"N, Longitude 78°31'4.26"E) are analyzed. It is found that, with respect to the considered parameters, the hybrid mode of the NavIC receiver gives better results than the stand-alone positioning mode. This work finds application in the development of NavIC receivers for civilian navigation applications.
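
To make the DoP parameter concrete, the sketch below shows how dilution of precision follows from satellite geometry alone: each visible satellite contributes a row [east, north, up components of its line-of-sight unit vector, 1] to the geometry matrix G, and the DoP terms are read off the diagonal of (GᵀG)⁻¹. The azimuth/elevation values are invented for illustration; adding satellites (as hybrid IRNSS+GPS operation does) generally lowers the DoP.

```python
import numpy as np

def dop(az_el_deg):
    rows = []
    for az, el in np.radians(az_el_deg):
        # ENU line-of-sight unit vector from receiver to satellite, plus clock term.
        rows.append([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el), 1.0])
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])  # 3D position DOP
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])            # horizontal DOP
    vdop = np.sqrt(Q[2, 2])
    return gdop, pdop, hdop, vdop

# Hypothetical azimuth/elevation pairs for visible satellites (degrees).
sats = [(30, 60), (120, 45), (210, 30), (300, 55), (80, 20), (250, 70)]
print("GDOP=%.2f PDOP=%.2f HDOP=%.2f VDOP=%.2f" % dop(sats))
```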

Keywords: DoP, GPS, IRNSS, GNSS, position error, satellite visibility

Procedia PDF Downloads 207
16269 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities

Authors: Retius Chifurira

Abstract:

The logistic regression model is the most widely used regression model for predicting meteorological drought probabilities. When the dependent variable is extreme, the logistic model fails to adequately capture drought probabilities. In order to predict drought probabilities adequately, we use the generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. Maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and GEV regression models in predicting drought probabilities for Zimbabwe. The performance of the regression models is assessed using goodness-of-fit measures, namely the relative root mean square error (RRMSE) and the relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, thereby providing a good alternative candidate for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
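
The sketch below illustrates the idea under stated assumptions rather than reproducing the authors' model: a binary GLM whose response probability is p = 1 - F_GEV(-x'β; ξ), fitted by maximizing the likelihood with scipy, next to ordinary logistic regression. The simulated rainfall covariate and drought indicator are placeholders; note that scipy's genextreme parameterizes the shape as c = -ξ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(1)
n = 300
rain = rng.normal(0, 1, n)                      # standardized mean annual rainfall
X = np.column_stack([np.ones(n), rain])
y = (rng.random(n) < 1 / (1 + np.exp(2 * rain))).astype(float)  # drought indicator

def nll_gev(params):
    beta, xi = params[:2], params[2]
    eta = X @ beta
    # GEV-link response probability; scipy's shape c equals -xi.
    p = np.clip(1 - genextreme.cdf(-eta, c=-xi), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def nll_logit(beta):
    p = np.clip(1 / (1 + np.exp(-(X @ beta))), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit_gev = minimize(nll_gev, x0=[0.0, -1.0, 0.1], method="Nelder-Mead")
fit_logit = minimize(nll_logit, x0=[0.0, -1.0], method="Nelder-Mead")
print("GEV-link NLL:", round(fit_gev.fun, 2), " logistic NLL:", round(fit_logit.fun, 2))
```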

Keywords: generalized extreme value distribution, generalized linear model, mean annual rainfall, meteorological drought probabilities

Procedia PDF Downloads 196
16268 Comparative Study of Accuracy of Land Cover/Land Use Mapping Using Medium Resolution Satellite Imagery: A Case Study

Authors: M. C. Paliwal, A. K. Jain, S. K. Katiyar

Abstract:

Accuracy assessment is very important for the classification of satellite imagery. In order to determine the accuracy of a classified image, the assumed-true data are usually derived from ground truth data collected using the Global Positioning System. The data derived from the satellite imagery and the ground truth data are then compared to find the accuracy of the classification, and error matrices are prepared. Overall and individual accuracies are calculated using different methods. The study illustrates advanced classification and accuracy assessment of land use/land cover mapping using satellite imagery. IRS-1C LISS-IV data were used for the classification. The satellite image was classified into fourteen classes, including water bodies, agricultural fields, forest land, urban settlement, and barren land. Classification of the satellite imagery and calculation of accuracy were done using ERDAS Imagine software in order to find the best method. The study is based on data collected for the Bhopal city boundaries of Madhya Pradesh State, India.
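
As a concrete sketch of the error-matrix step (with made-up labels rather than the study's fourteen classes), the code below tabulates classified pixels against reference pixels and computes the overall, producer's, and user's accuracies:

```python
import numpy as np

classes = ["water", "agriculture", "forest", "urban", "barren"]
truth     = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 4, 4])
predicted = np.array([0, 1, 1, 1, 2, 2, 2, 3, 3, 4, 3, 4])

k = len(classes)
error_matrix = np.zeros((k, k), dtype=int)
for t, p in zip(truth, predicted):
    error_matrix[t, p] += 1            # rows: reference, columns: classified

overall = np.trace(error_matrix) / error_matrix.sum()
producers = np.diag(error_matrix) / error_matrix.sum(axis=1)  # omission side
users = np.diag(error_matrix) / error_matrix.sum(axis=0)      # commission side
print("overall accuracy:", round(overall, 3))
for c, pa, ua in zip(classes, producers, users):
    print(f"{c}: producer's={pa:.2f}, user's={ua:.2f}")
```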

Keywords: resolution, accuracy assessment, land use mapping, satellite imagery, ground truth data, error matrices

Procedia PDF Downloads 502
16267 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept

Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua

Abstract:

Rivers are our main source of water; river flow is a form of open channel flow, and flow in an open channel presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of sediments and specific materials that are essential for human beings. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments. The pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied for the prediction of discharges at the junction of a two-flow braided compound channel. The Energy Concept Method has not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on mechanical analysis. The channel cross-section is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). From the error analysis of this method, it is observed that the relative error is smaller for data-sets having smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.

Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel

Procedia PDF Downloads 123
16266 Teaching Accounting through Critical Accounting Research: The Origin and Its Relevance to the South African Curriculum

Authors: Rosy Makeresemese Qhosola

Abstract:

South Africa has maintained the effort to uphold the guiding principles of its constitution. The constitution upholds principles such as equity, social justice, peace, freedom and hope, to mention but a few. Such principles form the basis for any legislation and policies that guide all fields and departments of government. Education is one of those departments and is expected to abide by these principles as outlined in its policies. Accordingly, education policies and legislation outline their intention to ensure the development of students' critical thinking and creative capacities by creating learning contexts and opportunities that accommodate effective teaching and learning strategies that are learner-centred and compatible with the prescripts of the country's democratic constitution. The paper aims at exploring and analyzing the progress of conventional accounting in terms of its adherence to the effective use of the principles of good teaching, as per policy expectations in South Africa. The progress is traced by comparing conventional accounting to Critical Accounting Research (CAR), where the history of accounting as intended in the South African curriculum and in CAR is highlighted. The Critical Accounting Research framework is used as a lens and mode of teaching in this paper, since it can create a space for the learning of accounting that is optimal, marked by the use of more learner-centred methods of teaching. The curriculum of South Africa also emphasises the use of more learner-centred methods of teaching that encourage an active and critical approach to learning, rather than rote and uncritical learning of given truths. The study seeks to maintain that conventional accounting is in contrast with the principles of good teaching as per South African policy expectations. The paper further maintains that the move beyond it, through adherence to the effective use of good teaching, becomes possible when CAR forms the basis of teaching. Data is generated through Participatory Action Research, in which meetings, dialogues and discussions are conducted with focus groups consisting of lecturers, students, subject heads, coordinators, NGOs and departmental officials. The results are analysed through Critical Discourse Analysis, since it allows for the use of participants' texts. The study concludes that any teacher who aspires to achieve in the teaching and learning of accounting should first meet the minimum requirements stated in NQF level 4, which form the basic principles of good teaching and are in line with Critical Accounting Research.

Keywords: critical accounting research, critical discourse analysis, participatory action research, principles of good teaching

Procedia PDF Downloads 303
16265 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images that may be of irregular, digit, or character shape. Objects and internal objects are quite difficult to identify and extract when the structure of the image contains a bulk of clusters. Estimation results are easily obtained by identifying the sub-regional objects using the SASK algorithm. The main focus is to recognize the number of internal objects existing in a given image in a shadow-free and error-free manner. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction and, finally, hull detection. Detecting the sub-regional hulls can increase machine learning capability in the detection of characters, and the approach can also be extended to hull recognition in irregularly shaped objects, such as black holes and their intensities in space exploration. Layered hulls are those having structured layers inside; they are useful in military services and traffic applications to identify the number of vehicles or persons. The proposed SASK algorithm is helpful in identifying such regions and can be useful in the decision process (to clear traffic, or to identify the number of persons on the opponent's side in a war).
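
Since the SASK algorithm itself is not given as code, the sketch below only illustrates the generic three-step pipeline named above (pre-processing, boundary extraction, hull detection) on a synthetic binary image, using a convex hull as the hull detector; counting sub-regional hulls would repeat the last step per connected component (e.g. via scipy.ndimage.label).

```python
import numpy as np
from scipy.spatial import ConvexHull

# Pre-processing: a synthetic 2-D "handwritten" binary image (1 = ink).
img = np.zeros((40, 40), dtype=int)
img[10:30, 10:12] = 1          # vertical stroke
img[10:12, 10:30] = 1          # horizontal stroke
img[28:30, 10:30] = 1          # closing stroke, forming a rough 'C' region

# Boundary extraction: ink pixels with at least one background 4-neighbour.
ys, xs = np.nonzero(img)
boundary = [(y, x) for y, x in zip(ys, xs)
            if img[max(y - 1, 0), x] == 0 or img[min(y + 1, 39), x] == 0
            or img[y, max(x - 1, 0)] == 0 or img[y, min(x + 1, 39)] == 0]

# Hull detection: convex hull over the boundary points.
hull = ConvexHull(np.array(boundary))
print("boundary pixels:", len(boundary), " hull vertices:", len(hull.vertices))
```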

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 342
16264 Simulation-Based Optimization Approach for an Electro-Plating Production Process Based on Theory of Constraints and Data Envelopment Analysis

Authors: Mayada Attia Ibrahim

Abstract:

Evaluating and developing the electroplating production process is a key challenge for this type of process. The process is influenced by several factors, such as process parameters, process costs, and production environments. Analyzing and optimizing all these factors together requires extensive analytical techniques that are not available in real industrial settings. This paper presents a practice-based framework for the evaluation and optimization of some of the crucial factors that affect the costs and production times associated with this type of process: energy costs, material costs, and product flow times. In the proposed approach, Design of Experiments, Discrete-Event Simulation, and Theory of Constraints are respectively used to identify the most significant factors affecting the production process, to simulate a real production line and recognize the effect of these factors, and to locate possible bottlenecks. Several scenarios are generated as corrective strategies for improving the production line. Following that, the CCR input-oriented data envelopment analysis (DEA) model is used to evaluate and optimize the suggested scenarios.
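
The sketch below shows the last step under illustrative assumptions: each improvement scenario is treated as a decision-making unit (DMU) with inputs (say, energy cost and material cost) and an output (throughput), and the input-oriented CCR efficiency of each scenario is obtained by solving one linear program per DMU. The data values are made up.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[20., 300.], [18., 280.], [25., 310.], [15., 260.]])  # inputs per DMU
Y = np.array([[100.], [95.], [90.], [105.]])                        # outputs per DMU
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for o in range(n):
    # Variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij - theta * x_io <= 0   (input constraints)
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # -sum_j lambda_j * y_rj <= -y_ro             (output constraints)
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    print(f"scenario {o}: efficiency = {res.x[0]:.3f}")
```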

Keywords: electroplating process, simulation, design of experiment, performance optimization, theory of constraints, data envelopment analysis

Procedia PDF Downloads 96
16263 Treatment of Cutting Oily-Wastewater by Sono-Fenton Process: Experimental Approach and Combined Process

Authors: Pisut Painmanakul, Thawatchai Chintateerachai, Supanid Lertlapwasin, Nusara Rojvilavan, Tanun Chalermsinsuwan, Nattawin Chawaloesphonsiya, Onanong Larpparisudthi

Abstract:

Conventional coagulation, advanced oxidation processes (AOPs), and the combined process were evaluated and compared for their suitability to treat stabilized cutting-oil wastewater. A 90% efficiency was obtained from coagulation at an Al2(SO4)3 dosage of 150 mg/L and pH 7. On the other hand, the efficiencies of the AOPs for 30 minutes of oxidation time were 10% for acoustic oxidation, 12% for acoustic oxidation with hydrogen peroxide, 76% for the Fenton process, and 92% for the sono-Fenton process. The highest AOP efficiencies for effective oil removal required large amounts of chemicals. Therefore, the AOPs were studied as a post-treatment after the conventional separation process. The efficiency was considerable, as the effluent COD could meet the standard required for industrial wastewater discharge with lower chemical and energy consumption.

Keywords: cutting oily-wastewater, advanced oxidation process, sono-Fenton, combined process

Procedia PDF Downloads 352
16262 Effective Student Engaging Strategies to Enhance Academic Learning in Middle Eastern Classrooms: An Action Research Approach

Authors: Anjum Afrooze

Abstract:

The curriculum of the General Sciences department at Prince Sultan University includes 'Physical Science' for Computer Science, Information Technology and Business courses. Students are apathetic towards Physical Science and question how this course is related to their majors. English is not a native language for the students, nor for many instructors. More than sixty percent of the students come from institutions where English is not the medium of instruction, which makes student learning and academic achievement challenging. After observing the less enthusiastic student cohort for two consecutive semesters, the instructor was keen to find effective strategies to enhance learning and further encourage deep learning by engaging students in different tasks to empower them with the necessary skills and motivate them. This study is participatory action research, in which the instructor designs effective tasks to engage students in their learning. The study was conducted over two semesters with a total of 200 students. The effectiveness of this approach was studied using a questionnaire at the end of each semester and teacher observation. Major outcomes of this study were an overall improvement in students' attitudes towards science learning, the enhancement of multiple skills such as note taking, problem solving and language proficiency, and the fortifying of confidence. This process transformed the instructor into an engaging and reflective practitioner. These strategies were also implemented by other instructors teaching the course and proved effective in opening a path to changes in related areas of the course curriculum. However, the strategies could be further refined based on student evaluations and instructors' observations.

Keywords: group activity, language proficiency, reasoning skills, science learning

Procedia PDF Downloads 141
16261 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for the automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments on the Gold Coast. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions, using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME), was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be obtained. In contrast, MIKE URBAN provides just a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
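
A condensed sketch of ABC rejection sampling is given below (in Python rather than the authors' R implementation). The toy simulate() function, priors, observed series, and tolerance are all assumptions standing in for the time-area model and its four calibration parameters; the accepted parameter draws approximate the posterior from which credible intervals like those above would be computed.

```python
import numpy as np

rng = np.random.default_rng(7)
rain = rng.gamma(2.0, 2.0, 60)                       # hyetograph stand-in

def simulate(initial_loss, reduction, tc, lag):
    """Toy runoff model: losses, then a delayed moving-average routing."""
    effective = np.clip(rain - initial_loss, 0, None) * reduction
    k = max(int(tc), 1)
    flow = np.convolve(effective, np.ones(k) / k)[:len(rain)]
    return np.roll(flow, int(lag))

observed = simulate(1.0, 0.6, 5, 2) + rng.normal(0, 0.05, 60)

accepted = []
for _ in range(20000):
    theta = (rng.uniform(0, 3), rng.uniform(0.1, 1.0),
             rng.uniform(1, 10), rng.uniform(0, 5))
    sim = simulate(*theta)
    if np.sqrt(np.mean((sim - observed) ** 2)) < 0.15:   # distance < tolerance
        accepted.append(theta)

post = np.array(accepted)
if len(post):
    print(f"accepted {len(post)} of 20000; posterior means:", post.mean(axis=0).round(2))
```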

Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform

Procedia PDF Downloads 300
16260 Machine Learning Approach for Automating Electronic Component Error Classification and Detection

Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski

Abstract:

Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study uses a Machine Learning (ML) algorithm that interfaces with the HoloLens augmented reality headset and predicts image behavior to classify and detect electronic components. The automated electronic component error classification and detection system automatically detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be trained on the dataset for object recognition and classification: the convolution layers extract image features, which are then classified using the SVM. With adequately labeled training data, the model will predict, categorize, and assess whether students place components correctly. The data acquired through the HoloLens thus includes images of students assembling electronic components. The system constantly checks whether students position components appropriately on the breadboard and connect them so that the circuit functions. When students misplace any component, the HoloLens predicts the error before the user places the component in the incorrect position and encourages students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time, and they determine the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
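
A minimal sketch of the hybrid CNN + SVM idea is shown below, assuming PyTorch and scikit-learn are available: a small convolutional stack acts as a feature extractor and an SVM performs the final classification. The random tensors stand in for HoloLens image crops, and the component classes are illustrative.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

classes = ["resistor", "capacitor", "transistor", "misplaced"]

extractor = nn.Sequential(                   # CNN part: feature extraction only
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 32*4*4 = 512-dim feature vector
)
extractor.eval()

# Stand-in dataset: 200 RGB image crops, 64x64, with random class labels.
images = torch.rand(200, 3, 64, 64)
labels = torch.randint(0, len(classes), (200,)).numpy()

with torch.no_grad():
    features = extractor(images).numpy()

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
svm = SVC(kernel="rbf").fit(X_train, y_train)   # SVM part: classification
print("held-out accuracy:", round(svm.score(X_test, y_test), 3))
```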

Keywords: augmented reality, machine learning, object recognition, virtual laboratories

Procedia PDF Downloads 132
16259 Parametric Studies of Ethylene Dichloride Purification Process

Authors: Sh. Arzani, H. Kazemi Esfeh, Y. Galeh Zadeh, V. Akbari

Abstract:

Ethylene dichloride (EDC) is a colorless liquid with a smell like chloroform. EDC belongs to the chlorinated hydrocarbon group and is obtained by chlorinating ethylene gas. Its chemical formula is C2H4Cl2, and it is used as the main intermediate in VCM (vinyl chloride monomer) production. The purification process of EDC is therefore important in the petrochemical industry. In this study, the EDC purification unit was simulated and then validated. Finally, the impact of process parameters on the degree of EDC purity was studied. The results showed that increasing the feed flow increases the impure components in the reflux, resulting in a decrease in EDC purity.

Keywords: ethylene dichloride, EDC, purification, simulation

Procedia PDF Downloads 312
16258 Optimised Path Recommendation for a Real Time Process

Authors: Likewin Thomas, M. V. Manoj Kumar, B. Annappa

Abstract:

A traditional execution process follows the path of execution drawn by the process analyst, without observing the behaviour of resources and other real-time constraints. Identifying the process model, predicting the behaviour of resources, and recommending the optimal path of execution for a real-time process are challenging. The proposed AlfyMiner (αyMiner) gives a new dimension to process execution with two novel techniques, the Process Model Analyser (PMAMiner) and the Resource Behaviour Analyser (RBAMiner), for recommending the probable path of execution. PMAMiner discovers the next probable activity for the currently executing activity in an online process, using a variant matching technique to identify the set of next probable activities, from which the next probable activity is selected using a decision tree model. RBAMiner identifies the resource suitable for performing the discovered next probable activity and observes its behaviour based on load and performance, using a polynomial regression model, and on waiting time, using queueing theory. Based on the observed behaviour, αyMiner recommends the probable path of execution, with the next probable activity and the most suitable resource for performing it. Experiments were conducted on process logs of the CoSeLoG Project; 72% accuracy was obtained in identifying and recommending the next probable activity, and the efficiency of resource performance was optimised by 59% by decreasing resource load.
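
The two resource-behaviour models named above can be sketched as follows (an illustration with made-up numbers, not the AlfyMiner implementation): a polynomial regression of task duration against resource load, and an M/M/1 queueing estimate of the mean waiting time, Wq = ρ/(μ - λ).

```python
import numpy as np

# Polynomial regression: task duration (min) as a function of resource load.
load = np.array([1, 2, 3, 4, 5, 6, 7, 8])
duration = np.array([10.1, 10.4, 11.2, 12.5, 14.3, 16.8, 20.0, 24.1])
coeffs = np.polyfit(load, duration, deg=2)           # fit a quadratic model
predict_duration = np.poly1d(coeffs)
print("expected duration at load 9:", round(predict_duration(9), 1), "min")

# Queueing theory: mean waiting time in queue for an M/M/1 resource,
# with arrival rate lam and service rate mu (jobs per hour).
lam, mu = 4.0, 6.0
rho = lam / mu
wq_hours = rho / (mu - lam)
print("expected waiting time:", round(wq_hours * 60, 1), "min")
```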

Keywords: cross-organization process mining, process behaviour, path of execution, polynomial regression model

Procedia PDF Downloads 330
16257 Optimization of Assay Parameters of L-Glutaminase from Bacillus cereus MTCC1305 Using Artificial Neural Network

Authors: P. Singh, R. M. Banik

Abstract:

An artificial neural network (ANN) was employed to optimize the assay parameters, viz. time, temperature, pH of the reaction mixture, enzyme volume, and substrate concentration, of L-glutaminase from Bacillus cereus MTCC 1305. The ANN model showed a high coefficient of determination (0.9999), a low root mean square error (0.6697), and a low absolute average deviation. A multilayer perceptron neural network trained with an error back-propagation algorithm was used to develop the predictive model, and its topology was obtained as 5-3-1 after applying the Levenberg-Marquardt (LM) training algorithm. The predicted activity of L-glutaminase was 633.7349 U/l at the optimum assay parameters, viz. pH of the reaction mixture (7.5), reaction time (20 minutes), incubation temperature (35 ˚C), substrate concentration (40 mM), and enzyme volume (0.5 ml). The prediction was verified by running an experiment at the simulated optimum assay conditions, and the activity was obtained as 634.00 U/l. The application of the ANN model for the optimization of assay conditions improved the activity of L-glutaminase 1.499-fold.
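
A minimal sketch of such a 5-3-1 network is given below, assuming scikit-learn: five assay inputs, one hidden layer of three neurons, and one output (activity). scikit-learn offers no Levenberg-Marquardt trainer, so L-BFGS stands in here, and the training data are placeholders rather than the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Columns: pH, time (min), temperature (C), substrate (mM), enzyme volume (ml).
X = rng.uniform([6.0, 5, 25, 10, 0.1], [9.0, 40, 45, 60, 1.0], size=(80, 5))
y = (600 - 20 * (X[:, 0] - 7.5) ** 2 - 0.5 * (X[:, 2] - 35) ** 2
     + 0.8 * X[:, 3] + rng.normal(0, 5, 80))

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(3,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(scaler.transform(X), y)

optimum = np.array([[7.5, 20, 35, 40, 0.5]])       # the paper's optimum settings
print("predicted activity (U/l):", round(net.predict(scaler.transform(optimum))[0], 1))
```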

Keywords: Bacillus cereus, L-glutaminase, assay parameters, artificial neural network

Procedia PDF Downloads 428
16256 Moving toward Language Acquisition: A Case Study Adapting and Applying Laban Movement Analysis in the International English as an Additional Language Classroom

Authors: Andra Yount

Abstract:

The purpose of this research project is to understand how focusing on movement can help English language learners acquire better reading, writing, and speaking skills. More specifically, this case study tests how Laban movement analysis (LMA), a tool often used in dance and physical education classes, contributes to advanced-level high school students' English language acquisition at an international Swiss boarding school. This article shares the theoretical bases for, and findings from, a teaching experiment in which LMA categories (body, effort, space, and shape) were adapted and introduced to students to encourage basic language acquisition as well as cultural awareness and sensitivity. As part of the participatory action research process, data collection included pseudonym-protected questionnaires and written/video-taped responses to LMA language and task prompts. Responses from 43 participants were evaluated to determine the efficacy of using this system. Participants (ages 16-19) were enrolled in advanced English as an Additional Language (EAL) courses at a private, co-educational Swiss international boarding school. Final data analysis revealed that drawing attention to movement using LMA language as a stimulus creates better self-awareness and understanding/retention of key literary concepts and vocabulary, but does not necessarily contribute to greater cultural sensitivity or eliminate the use of problematic (sexist, racist, or classist) language. Possibilities for future exploration and development are also discussed.

Keywords: dance, English, Laban, pedagogy

Procedia PDF Downloads 147
16255 Effect of Impurities in the Chlorination Process of TiO2

Authors: Seok Hong Min, Tae Kwon Ha

Abstract:

With the increasing interest in Ti alloys, the process of extracting Ti from its typical ore, TiO2, has long been, and will remain, an important issue. As an intermediate product for the production of pigment or titanium metal sponge, titanium tetrachloride (TiCl4) is produced in a fluidized bed using high-grade TiO2 feedstock. The purity of TiCl4 after chlorination is subject to the quality of the titanium feedstock. Since impurities in the TiCl4 product carry over into the final products, purification of the crude TiCl4 is required. The purification process includes fractional distillation and chemical treatment, which depend on the nature of the impurities present and the required quality of the final product. In this study, a thermodynamic analysis of the impurity effect in the chlorination process, the first step in the extraction of Ti from TiO2, has been conducted. All thermodynamic calculations were performed using the FactSage thermochemical software.

Keywords: rutile, titanium, chlorination process, impurities, thermodynamic calculation, FactSage

Procedia PDF Downloads 303
16254 Design of Parity-Preserving Reversible Logic Signed Array Multipliers

Authors: Mojtaba Valinataj

Abstract:

Reversible logic, as a new and favorable design domain, can be used in various fields, especially for creating quantum computers, because of its speed and negligible power consumption. However, its susceptibility to a variety of environmental effects may lead to incorrect results. In this paper, because of the importance of the multiplication operation in various computing systems, some novel reversible logic array multipliers with error detection capability are proposed by incorporating parity-preserving gates. The new designs are presented for the two main parts of array multipliers, partial product generation and multi-operand addition, by exploiting new arrangements of existing gates, which results in two signed parity-preserving array multipliers. The experimental results reveal that the best proposed 4×4 multiplier in this paper achieves 12%, 24%, and 26% improvements in the number of constant inputs, the number of required gates, and the quantum cost, respectively, compared to the previous design. Moreover, the best proposed design is generalized to n×n multipliers, with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.
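
What "parity-preserving" buys is easy to show in code: a gate is parity-preserving when the XOR of its inputs equals the XOR of its outputs on every input pattern, so any single-bit fault flips the parity and becomes detectable. The sketch below checks this property, together with reversibility, for the classic Fredkin (controlled-swap) gate; the paper's own gate arrangements are not reproduced here.

```python
from itertools import product

def fredkin(a, b, c):
    """Controlled swap: if a == 1, swap b and c."""
    return (a, c, b) if a else (a, b, c)

def is_parity_preserving(gate, arity):
    for bits in product([0, 1], repeat=arity):
        outs = gate(*bits)
        if (sum(bits) % 2) != (sum(outs) % 2):
            return False
    return True

def is_reversible(gate, arity):
    images = {gate(*bits) for bits in product([0, 1], repeat=arity)}
    return len(images) == 2 ** arity        # bijective on its truth table

print("Fredkin parity-preserving:", is_parity_preserving(fredkin, 3))
print("Fredkin reversible:", is_reversible(fredkin, 3))
```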

Keywords: array multipliers, Baugh-Wooley method, error detection, parity-preserving gates, quantum computers, reversible logic

Procedia PDF Downloads 255
16253 An Eulerian Method for Fluid-Structure Interaction Simulation Applied to Wave Damping by Elastic Structures

Authors: Julien Deborde, Thomas Milcent, Stéphane Glockner, Pierre Lubin

Abstract:

A fully Eulerian method is developed to solve the problem of fluid-elastic structure interaction, based on a 1-fluid method. The interface between the fluid and the elastic structure is captured by a level set function, advected by the fluid velocity and solved with a WENO 5 scheme. The elastic deformations are computed in an Eulerian framework thanks to the backward characteristics. We use the neo-Hookean or Mooney-Rivlin hyperelastic models, and the elastic forces are incorporated as a source term in the incompressible Navier-Stokes equations. The velocity/pressure coupling is solved with a pressure-correction method, and the equations are discretized by finite volume schemes on a Cartesian grid. The main difficulty is that large deformations in the fluid cause numerical instabilities. In order to avoid these problems, we use a re-initialization process for the level set and linear extrapolation of the backward characteristics. First, we verify and validate our approach on several test cases, including the FSI benchmark proposed by Turek. Next, we apply this method to study the wave damping phenomenon, which is a means to reduce the impact of waves on the coastline. So far, to our knowledge, only simulations with rigid or one-dimensional elastic structures have been studied in the literature. We propose to place elastic structures on the seabed, and we present results where 50% of the wave energy is absorbed.
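
To make the interface-capturing step concrete, here is a deliberately simplified one-dimensional sketch of a level set function advected by a velocity field; the paper uses a WENO 5 scheme in higher dimensions, while first-order upwinding is used below purely to keep the example short.

```python
import numpy as np

nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0, L, nx)
phi = x - 0.3                      # signed distance; interface where phi = 0
u = np.full(nx, 0.5)               # uniform rightward velocity
dt = 0.5 * dx / np.abs(u).max()    # CFL-limited time step

for _ in range(100):
    # Upwind gradient: backward difference where u > 0, forward where u < 0.
    dphi = np.where(u > 0,
                    (phi - np.roll(phi, 1)) / dx,
                    (np.roll(phi, -1) - phi) / dx)
    phi = phi - dt * u * dphi

t_final = 100 * dt
interface = x[np.argmin(np.abs(phi))]
print(f"interface at x = {interface:.3f}, expected {0.3 + 0.5 * t_final:.3f}")
```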

Keywords: wave damping, Eulerian formulation, finite volume, fluid structure interaction, hyperelastic material

Procedia PDF Downloads 317
16252 Interpreter Scholarship Program That Improves Language Services in New South Wales: A Participatory Action Research Approach

Authors: Carly Copolov, Rema Nazha, Sahba C. Delshad, George Bisas

Abstract:

In New South Wales (NSW), Australia, we speak more than 275 languages and dialects. Interpreters play an indispensable role in our multicultural society by ensuring that the people of NSW all enjoy the same opportunities. The NSW Government offers scholarships to enable people who speak in-demand and high-priority languages to become eligible to practice as interpreters. The NSW Interpreter Scholarship Program was launched in January 2019, targeting priority languages from new and emerging as well as existing language communities. The program offers fully-funded scholarships to study at Technical and Further Education (TAFE), receive National Accreditation Authority for Translators and Interpreters (NAATI) certification, and be mentored and gain employment with the interpreter panel of Multicultural NSW. A Participatory Action Research approach was adopted to challenge the current system through which people become practicing interpreters in NSW. There were over 800 metro Sydney applications and close to 200 regional applications. Three courses were run through TAFE NSW (2 in metro Sydney and 1 in regional NSW). Thirty-nine students graduated from the program in 2019. The first metro Sydney location had 18 graduates complete the course in Assyrian, Burmese, Chaldean, Kurdish-Kurmanji, Nepali, and Tibetan. The second metro Sydney location had 9 graduates complete the course in Tongan, Kirundi, Mongolian, and Italian. The regional location had 12 graduates who completed the course, from new and emerging language communities such as Kurdish-Kurmanji, Burmese, Zomi Chin, Hakha Chin, and Tigrinya. The findings showed that students were very positive about the program: the large majority said they were satisfied with the course content, felt prepared for the NAATI test at the conclusion of the course, and would definitely recommend the program to their friends. Also, 18 students from the 2019 cohort signed up to receive further mentoring by experienced interpreters. In 2020, it is anticipated that 3 courses will be run through TAFE NSW (2 in regional NSW and 1 in metro Sydney) to reflect the needs of new and emerging language communities settling in regional areas. In conclusion, it has been demonstrated that the NSW Interpreter Scholarship Program improves the supply, quality, and use of language services in NSW, Australia, so that people who speak in-demand and high-priority languages are ensured better access to crucial government services.

Keywords: interpreting, emerging communities, scholarship program, Sydney

Procedia PDF Downloads 143
16251 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements

Authors: Sabiu Bala Muhammad, Rosli Saad

Abstract:

Multiple linear regression (MLR) models for the fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X) and depth (Z) of measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks of the normality of the dependent variable and of heteroscedasticity, to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another dataset to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared to the observed data plots, at the same colour scale and blanking, for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), the standard error (SE), and the weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy.
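
A minimal sketch of this workflow on synthetic data is shown below: log-transform both resistivity variables, fit ρₜ as a function of log(ρₐ), X, and Z, back-transform, and score with R², SE, and wMAPE. The data-generating relation is a placeholder for real field measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n = 400
rho_a = rng.uniform(10, 1000, n)               # apparent resistivity (ohm-m)
X_loc = rng.uniform(0, 200, n)                 # horizontal location (m)
Z = rng.uniform(1, 50, n)                      # depth (m)
rho_t = rho_a * np.exp(0.004 * Z + rng.normal(0, 0.05, n))   # synthetic truth

features = np.column_stack([np.log10(rho_a), X_loc, Z])
target = np.log10(rho_t)

mlr = LinearRegression().fit(features, target)
pred = 10 ** mlr.predict(features)             # back-transform to ohm-m

resid = rho_t - pred
r2 = r2_score(target, mlr.predict(features))   # R^2 in the fitted (log) domain
se = np.sqrt(np.sum(resid ** 2) / (n - features.shape[1] - 1))
wmape = np.sum(np.abs(resid)) / np.sum(np.abs(rho_t)) * 100
print(f"R^2={r2:.3f}  SE={se:.1f} ohm-m  wMAPE={wmape:.2f}%")
```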

Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity

Procedia PDF Downloads 271
16250 Controlling the Process of a Chicken Dressing Plant through Statistical Process Control

Authors: Jasper Kevin C. Dionisio, Denise Mae M. Unsay

Abstract:

In a manufacturing firm, controlling the process ensures that optimum efficiency, productivity, and quality are achieved in the organization. An operation with no standardized procedure yields poor productivity, inefficiency, and an out-of-control process. This study focuses on controlling the small intestine processing of a chicken dressing plant through the use of Statistical Process Control (SPC). Since the operation employs no standard procedure and has no established standard time, the assessment of the observed times of the overall small intestine processing operation, using an X-Bar R control chart, found the process to be out of control. To solve this problem, the researchers conducted a motion and time study aiming to establish a standard procedure for the operation. The normal operator was selected using the Westinghouse Rating System. Instead of utilizing the traditional motion and time study alone, the researchers used the X-Bar R control chart to determine the process average used for establishing the standard time. The observed times of the normal operator were noted and plotted on the X-Bar R control chart. Out-of-control points due to assignable causes were removed, and the process average, that is, the average time in which the normal operator conducted the process, now in control and free from any outliers, was obtained. The process average was then used to determine the standard time for small intestine processing. As a recommendation, the researchers suggest implementing the established standard time, in consonance with the standard procedure adopted from the normal operator. With that recommendation, the whole operation is expected to achieve a 45.54% increase in productivity.
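
The control chart computation itself is brief, as the sketch below shows for illustrative cycle times grouped into subgroups of five (A2, D3, and D4 are the standard control chart constants for subgroup size n = 5): points on the X-bar chart falling outside the limits are the candidates for removal as assignable causes.

```python
import numpy as np

A2, D3, D4 = 0.577, 0.0, 2.114          # constants for subgroup size n = 5

# Observed processing times (seconds), 8 subgroups of 5 readings (made up).
samples = np.array([
    [52, 55, 50, 53, 54], [51, 49, 53, 52, 50], [56, 54, 55, 53, 57],
    [50, 52, 51, 49, 53], [54, 53, 55, 52, 51], [49, 50, 52, 51, 48],
    [53, 55, 54, 52, 56], [51, 52, 50, 53, 49],
])
xbars = samples.mean(axis=1)
ranges = samples.max(axis=1) - samples.min(axis=1)
xbarbar, rbar = xbars.mean(), ranges.mean()

print(f"X-bar chart: CL={xbarbar:.2f}, UCL={xbarbar + A2 * rbar:.2f}, "
      f"LCL={xbarbar - A2 * rbar:.2f}")
print(f"R chart:     CL={rbar:.2f}, UCL={D4 * rbar:.2f}, LCL={D3 * rbar:.2f}")
out = np.where((xbars > xbarbar + A2 * rbar) | (xbars < xbarbar - A2 * rbar))[0]
print("out-of-control subgroups (investigate for assignable causes):", out)
```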

Keywords: motion and time study, process controlling, statistical process control, X-Bar R Control chart

Procedia PDF Downloads 210
16249 Determining the Width and Depths of Cut in Milling on the Basis of a Multi-Dexel Model

Authors: Jens Friedrich, Matthias A. Gebele, Armin Lechler, Alexander Verl

Abstract:

Chatter vibrations and process instabilities are the most important factors limiting the productivity of the milling process. Chatter can lead to damage of the tool, the part, or the machine tool. Therefore, the estimation and prediction of process stability are very important. The process stability depends on the spindle speed, the depth of cut, and the width of cut. In milling, the process conditions are defined in the NC program. While the spindle speed is directly coded in the NC program, the depth and width of cut are unknown. This paper presents a new simulation-based approach for the prediction of the depth and width of cut of a milling process. The prediction is based on a material removal simulation with an analytically represented tool shape and a multi-dexel approach for the workpiece. The new calculation method allows the direct estimation of the depth and width of cut, which are the influencing parameters of process stability, instead of the removed volume, as existing approaches do. This knowledge can be used to predict the stability of new, unknown parts. Moreover, with an additional vibration sensor, the stability lobe diagram of a milling process can be estimated and improved based on the estimated depth and width of cut.
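
A strongly simplified, single-direction dexel sketch of this idea is given below (the paper uses a full multi-dexel model and an analytic tool shape): the workpiece is a row of height values, a flat-end tool pass removes material, and the engaged depth and width of cut are read directly from the heights that changed. Dimensions are illustrative.

```python
import numpy as np

dx = 0.1                                   # dexel spacing (mm)
heights = np.full(200, 10.0)               # workpiece top surface (mm)
heights[80:120] = 8.0                      # a previously machined pocket

tool_center, tool_radius, tool_z = 10.0, 3.0, 7.5   # flat-end tool pass (mm)
x = np.arange(200) * dx
engaged = np.abs(x - tool_center) <= tool_radius

before = heights.copy()
heights[engaged] = np.minimum(heights[engaged], tool_z)   # material removal

removed = before - heights
depth_of_cut = removed.max()                              # max engagement depth
width_of_cut = np.count_nonzero(removed > 0) * dx         # engaged width
print(f"depth of cut = {depth_of_cut:.2f} mm, width of cut = {width_of_cut:.2f} mm")
```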

Keywords: dexel, process stability, material removal, milling

Procedia PDF Downloads 523
16248 Nurse-Reported Perceptions of Medication Safety in Private Hospitals in Gauteng Province

Authors: Madre Paarlber, Alwiena Blignaut

Abstract:

Background: Medication administration errors remain a global patient safety problem targeted by the WHO (World Health Organization), yet research on this matter is sparse within the South African context. Objective: The aim was to explore and describe nurses' (medication administrators') perceptions regarding medication administration safety-related culture, incidence, causes, and reporting in the Gauteng Province of South Africa, and to determine any relationships between the perceived variables concerned with medication safety (safety culture, incidences, causes, reporting of incidences, and reasons for non-reporting). Method: A quantitative research design was used, in which self-administered online surveys were sent to 768 nurses (medication administrators) (n=217). The response rate was 28.26%. The survey instrument was synthesised from the Agency for Healthcare Research and Quality (AHRQ) Hospital Survey on Patient Safety Culture, the Registered Nurse Forecasting (RN4CAST) survey, a survey list prepared from a systematic review aimed at generating a comprehensive list of medication administration error causes, and the Medication Administration Error Reporting Survey from Wakefield. Exploratory and confirmatory factor analyses were used to determine the validity and reliability of the survey. Descriptive and inferential statistical analyses were used to analyse the quantitative data. Relationships and correlations were identified between items, subscales, and biographic data using Spearman's rank correlations, t-tests, and ANOVAs (analysis of variance). Nurses reported on their perceptions of medication administration safety-related culture, incidence, causes, and reporting in the Gauteng Province. Results: Units' teamwork was deemed satisfactory, while punitive responses to errors were accentuated. Working in "crisis mode", concerns regarding the recording of mistakes, and long working hours were disclosed as impacting patient safety. Overall medication safety was graded mostly positively. Work overload, high patient-nurse ratios, and inadequate staffing were implicated as error-inducing. Medication administration errors were reported regularly. Fear of, and administrative responses to, errors led to non-reporting. Reasons for not reporting errors were affected by a non-punitive safety culture. Conclusions: Improvement in medication administration safety is contingent on fostering a non-punitive safety culture within units. Anonymous medication error reporting systems and auditing of nurses' workloads are recommended in the quest for improved medication safety within Gauteng Province private hospitals.

Keywords: incidence, medication administration errors, medication safety, reporting, safety culture

Procedia PDF Downloads 50
16247 Vibration Analysis and Optimization Design of Ultrasonic Horn

Authors: Kuen Ming Shu, Ren Kai Ho

Abstract:

The ultrasonic horn has the functions of amplifying amplitude and reducing resonant impedance in an ultrasonic system. Its primary function is to amplify deformation or velocity during vibration and to focus the ultrasonic energy on a small area. It is a crucial component in the design of an ultrasonic vibration system. There are five common design methods for ultrasonic horns: the analytical method, the equivalent circuit method, equal mechanical impedance, the transfer matrix method, and the finite element method. In addition, the general optimization design process changes the geometric parameters to improve a single performance measure; in such a process, the relation between parameters and objectives cannot be established. However, a good optimization design must be able to establish the relationship between input parameters and output parameters, so that the designer can choose between parameters according to different performance objectives and obtain the results of the optimization design. In this study, an ultrasonic horn provided by Maxwide Ultrasonic Co., Ltd. was used as the baseline for the optimized ultrasonic horn. The ANSYS finite element analysis (FEA) software was used to simulate the distribution of the horn amplitudes and the natural frequency value. The results showed that the simulated and actually measured frequencies were similar, verifying the accuracy of the simulation. ANSYS DesignXplorer was used to perform response surface optimization, which shows the relation between parameters and objectives. Therefore, this method can be used to replace the traditional experience-based or trial-and-error design methods, reducing material costs and design cycles.

Keywords: horn, natural frequency, response surface optimization, ultrasonic vibration

Procedia PDF Downloads 110
16246 Language Switching Errors of Bilinguals: Role of Top down and Bottom up Process

Authors: Numra Qayyum, Samina Sarwat, Noor ul Ain

Abstract:

Bilingual speakers can generally speak both languages with the same competency, without mixing them intentionally or making mistakes, but sometimes errors occur in language selection. This quantitative study deals specifically with the language errors made by Urdu-English bilinguals. In this research, special attention has been given to the part played by bottom-up priming and top-down cognitive control in these errors. Unbalanced Urdu-English bilingual participants named pictures and were prompted to shift from one language to another under time pressure. Different conditions were presented to manipulate the participants' responses, and long and short runs of trials in the same language were given before switching to the other language. The study concludes with the finding that bilinguals made more errors when switching to their first language from their second language, and these errors were especially numerous when a speaker switched from L2 (second language) to L1 (first language) after a long run; when the switching was reversed, i.e., from L1 to L2, there was no such effect at all. These results assign the responsibility for these errors clearly to top-down cognitive control.

Keywords: bottom up priming, language error, language switching, top down cognitive control

Procedia PDF Downloads 133
16245 Numerical Investigation of Geotextile Application in Clay Reinforcement in ABAQUS Software

Authors: Seyed Abolhasan Naeini, Eisa Aliagahei

Abstract:

Today, the use of geosynthetic materials in geotechnical activities is increasing significantly. One of the main uses of these materials is to increase the compressive strength of clay by reinforcing it with geotextile layers. In the present study, the effect of clay reinforcement with geotextile layers on increasing the compressive strength of clay has been investigated through modeling in ABAQUS 6.11.3. For this purpose, the modified Drucker-Prager cap model was chosen to simulate the stress-strain behavior of the soil layers, and a linear elastic model was used for the geotextile layer. Unreinforced samples and samples reinforced with 1, 2, and 3 geotextile layers were modeled in the software. In order to validate the results, an article in the same field was used, and the numerical modeling results were calibrated against the laboratory results. Based on the obtained results, the software has a suitable capability for such modeling, and the results of the numerical model overlap with the laboratory results to a very acceptable extent. However, as the number of geotextile layers increases, the error between the laboratory samples and the software model increases; the largest error, 7.3%, is found for the sample reinforced with three layers of geotextile.

Keywords: Abaqus, cap model, clay, geotextile layer, reinforced soil

Procedia PDF Downloads 81