Search results for: process data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 35196

34056 Distribution-Free Exponentially Weighted Moving Average Control Charts for Monitoring Process Variability

Authors: Chen-Fang Tsai, Shin-Li Lu

Abstract:

Distribution-free control charts have been an emerging area of statistical process control in recent years, and several researchers have developed nonparametric control charts and investigated their detection capability. The major advantage of nonparametric control charts is that the underlying process is not assumed to follow a normal or any other parametric distribution. In this paper, two nonparametric exponentially weighted moving average (EWMA) control charts based on nonparametric tests, namely the NE-S and NE-M control charts, are proposed for monitoring process variability. These are extended to generally weighted moving average (GWMA) control charts by introducing design and adjustment parameters for monitoring changes in process variability, namely the NG-S and NG-M control charts. The statistical performance of the NG-S and NG-M control charts with run rules is also investigated. Moreover, a sensitivity analysis is performed to show the effects of the design parameters on the nonparametric NG-S and NG-M control charts.
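
A minimal sketch of the EWMA recursion that such charts build on, assuming a standardized nonparametric plotting statistic (e.g. a sign or rank-sum statistic) as input; the parameter values and time-varying limits shown are textbook defaults, not the NE-S/NE-M designs from the paper:

```python
import numpy as np

def ewma_chart(stats, lam=0.2, L=3.0, mu0=0.0, sigma0=1.0):
    """EWMA monitoring of a standardized plotting statistic.

    `stats` would be a nonparametric statistic computed per subgroup
    (e.g. a standardized sign or rank-sum statistic); the names and
    defaults here are illustrative, not from the paper.
    """
    z = mu0
    out = []
    for t, x in enumerate(stats, start=1):
        z = lam * x + (1.0 - lam) * z
        # time-varying variance of the EWMA statistic
        var_z = sigma0**2 * (lam / (2 - lam)) * (1 - (1 - lam)**(2 * t))
        ucl, lcl = mu0 + L * np.sqrt(var_z), mu0 - L * np.sqrt(var_z)
        out.append((z, lcl, ucl, z < lcl or z > ucl))  # last field: signal
    return out
```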

Keywords: Distribution-free control chart, EWMA control charts, GWMA control charts

Procedia PDF Downloads 254
34055 The Role of Data Protection Officer in Managing Individual Data: Issues and Challenges

Authors: Nazura Abdul Manap, Siti Nur Farah Atiqah Salleh

Abstract:

For decades, the misuse of personal data has been a critical issue. Malaysia addressed this by enacting the Personal Data Protection Act 2010 (PDPA 2010) to secure personal data. After more than a decade, this legislation is set to be revised by the PDPA Amendment Bill 2023 to align with the world's key personal data protection regulations, such as the European Union General Data Protection Regulation (GDPR). Among the suggested adjustments is the appointment by the data user of a Data Protection Officer (DPO) to ensure the commercial entity's compliance with the PDPA 2010 requirements. The change is expected to be enacted in parliament fairly soon; nevertheless, based on the experience of the Personal Data Protection Department (PDPD) in implementing the Act, it is projected that the DPO mandate will bring a slew of additional concerns. Consequently, the goal of this article is to highlight the issues that DPOs will encounter and how the Personal Data Protection Department should respond to them. The findings were produced using a qualitative technique based on an examination of the current literature. This research reveals probable obstacles for DPOs and argues that a definite, clear guideline should be in place to aid DPOs in executing their tasks. It is argued that appointing a DPO is a wise measure for ensuring that legal data security requirements are met.

Keywords: guideline, law, data protection officer, personal data

Procedia PDF Downloads 63
34054 Investigating Elements That Influence Higher Education Institutions’ Digital Maturity

Authors: Zarah M. Bello, Nathan Baddoo, Mariana Lilley, Paul Wernick

Abstract:

In this paper, we present findings from a multi-part study to evaluate candidate elements reflecting the level of digital capability maturity (DCM) in higher education and the relationships between these elements. We use these findings to propose a model of DCM for educational institutions. We suggest that the success of learning in higher education depends in part on the maturity of institutions' digital capabilities as well as on the abilities of learners and of those who support the learning process. It is therefore important to have a good understanding of the elements that underpin this maturity, as well as their impact and interactions, in order to better exploit the benefits that technology presents to the modern learning environment and support its continued improvement. Having identified ten candidate elements of digital capability that we believe support the level of a university's maturity in this area, as well as a number of relevant stakeholder roles, we conducted two studies utilizing both quantitative and qualitative research methods. In the first of these studies, 85 electronic questionnaires were completed by various stakeholders in a UK university, with a 100% response rate. We also undertook five in-depth interviews with management stakeholders at the same university. We then utilized statistical analysis to process the survey data and conducted a textual analysis of the interview transcripts. Our findings support our initial identification of candidate elements and our contention that these elements interact in a multidimensional manner. This multidimensional dynamic suggests that any proposal for improvement in digital capability must reflect the interdependency and cross-sectional relationships of the elements that contribute to DCM. Our results also indicate that the notion of DCM is strongly data-centric and that any proposed model must reflect the role of data in driving maturity and improvement. We present these findings as a key step towards the design of an operationalisable DCM model for universities.

Keywords: digital capability, elements, maturity, maturity framework, university

Procedia PDF Downloads 130
34053 A System for Visual Management of Research Resources Focusing on Accumulation of Polish Processes

Authors: H. Anzai, H. Nakayama, H. Kaminaga, Y. Morimoto, Y. Miyadera, S. Nakamura

Abstract:

Various research resources, such as papers and presentation slides, are handled in the course of research activities. Skillfully managing those research resources and utilizing them for further investigation is extremely important for the smooth progress of research. However, the number of research resources keeps increasing, and the different kinds of resources differ in how they are used and accumulated, so it is genuinely difficult to manage and use the accumulated research resources satisfactorily. A lack of tidiness of the resources therefore causes problems such as overlooking issues that need to be polished. Although research projects on supporting the management of research resources and the sharing of know-how have existed, most existing systems have not been effective enough, since they have not sufficiently considered the polish process. This paper mainly describes a system that enables the strategic management of research resources together with their polish processes and their practical use.

Keywords: research resource, polish process, information sharing, knowledge management, information visualization

Procedia PDF Downloads 372
34052 Model-Based Development of a Processing Map for Friction Stir Welding of AA7075

Authors: Elizabeth Hoyos, Hernán Alvarez, Diana Lopez, Yesid Montoya

Abstract:

The main goal of this research is to model friction stir welding (FSW) from a different and unusual perspective rooted in mechanical engineering, particularly looking for a way to establish process windows by assessing the soundness of the joints as a priority, with the added advantage of lower computational time. This paper presents the use of a previously developed model applied to specific aspects of soundness evaluation of AA7075 FSW welds. The EMSO software (Environment for Modeling, Simulation, and Optimization) was used for simulation, and an adapted CNC machine was used for the actual welding. This model-based approach showed good agreement with the experimental data, from which it is possible to set a window of operation for the commercial aluminum alloy AA7075, all at low computational cost and employing simple quality indicators that can be used by users not specialized in process modeling.

Keywords: aluminum AA7075, friction stir welding, phenomenological based semiphysical model, processing map

Procedia PDF Downloads 244
34051 Influence of Ligature Tightening on Bone Fracture Risk in Interspinous Process Surgery

Authors: Dae Kyung Choi, Won Man Park, Kyungsoo Kim, Yoon Hyuk Kim

Abstract:

Interspinous process devices have recently been used due to advantages such as minimal invasiveness and less subsidence of the implant into osteoporotic bone. In this paper, we analyze the influence of ligature tightening for several interspinous process devices using finite element analysis. Four types of interspinous process implants were inserted into the L3-4 spinal motion segment according to their surgical protocols. The inferior plane of the L4 vertebra was fixed, and an extension moment of 7.5 Nm was applied to the superior plane of the L3 vertebra, together with a 400 N compressive load along the follower load direction and a pretension in the ligature. The stability of the spinal unit was higher than that of the intact model. Higher pretension in the ligature decreased the dynamic stabilization effect of the WallisTM, DiamTM, Viking, and Spear® devices. The results of the present study could be used to evaluate surgical options and validate the biomechanical characteristics of spinal implants.

Keywords: interspinous process device, bone fracture risk, lumbar spine, finite element analysis

Procedia PDF Downloads 390
34050 Communication Policies of Turkey Related to European Union

Authors: Muhammet Erbay

Abstract:

The phenomenon of communication, which has been studied by different disciplines, has social, political, and economic aspects. The scope of communication has extended from traditional content to a modern world under the control of mass media. Nowadays, thanks to globalization and technological facilities, many companies and public or international institutions take advantage of new communication technologies and overhaul their policies. The European Union (EU) is one of the effective institutions in this sphere. It aims to harmonize the communication infrastructure and policies of member countries that have gone through the process of political unification. Legal restrictions or critical differences in communication facilities among countries are a significant problem for the unification of the EU while technology stands at the center of economic and social life. Therefore, EU institutions attach particular importance to their communication policies. Besides, communication processes are of vital importance in creating a European public opinion in the process of political integration. Based on the evaluation above, the aim of this paper is to analyze the cohesion process of Turkey, which tries to take an active role in EU communication policies and has ongoing negotiations. This article does not confine itself to the technical details of communication policies but also aims to evaluate the socio-political dimension of the process. Therefore, a corporate review is featured in the study, and Turkey's compliance process regarding the communication policies of the European Union is evaluated by means of the deductive method. Some problematic areas in the compliance process on communication policies are identified, such as human rights and minority rights, whereas the compliance process on communication infrastructure and technology proceeds effectively.

Keywords: communication policies, European Union, integration, Turkey

Procedia PDF Downloads 391
34049 Examining the Teaching and Learning Needs of Science and Mathematics Educators in South Africa

Authors: M. Shaheed Hartley

Abstract:

There has been increasing pressure on education researchers and practitioners at higher education institutions to focus on the development of South Africa's rural and peri-urban communities and on improving their quality of life. Many tertiary institutions are obliged to review their outreach interventions in schools. To ensure that the support provided to schools remains relevant, a systemic evaluation of science educators' needs is central to this process. These prioritised needs will serve as a guide not only for the outreach projects of tertiary institutions but also for service providers in general, so that the process of addressing educators' needs becomes coordinated, organised, and delivered in a systemic manner. This paper describes one area of a broader needs assessment exercise that collected data regarding the needs of educators in a district of 45 secondary schools in the Western Cape Province of South Africa. The research focuses on the needs and challenges faced by science educators at these schools, as articulated by the relevant stakeholders. The objectives of this investigation are two-fold: (1) to create a database that captures the needs and challenges identified by science educators of the selected secondary schools; and (2) to develop a needs profile for each of the participating secondary schools that will serve as a strategic asset to be shared with the various service providers as part of a community of practice whose core business is to support science educators and science education at large. The data were collected by means of a needs assessment questionnaire (NAQ), which was developed in both actual and preferred versions. An open-ended questionnaire was also administered, which allowed teachers to express their views. The categories of the questionnaire were predetermined by participating researchers, educators, and education department officials. Group interviews were also held with the science teachers at each of the schools. An analysis of the data revealed important trends in science educator needs and identified schools that can be clustered around priority needs, logistical reasoning, and educator profiles. The needs database also provides an opportunity for the community of practice to strategise and coordinate their interventions.

Keywords: needs assessment, science and mathematics education, evaluation, teaching and learning, South Africa

Procedia PDF Downloads 162
34048 Optimization of End Milling Process Parameters for Minimization of Surface Roughness of AISI D2 Steel

Authors: Pankaj Chandna, Dinesh Kumar

Abstract:

The present work analyses different end milling parameters to minimize the surface roughness of AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices that determine the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality by optimization of machining parameters is a challenging job. In the case of mating components, surface roughness becomes even more essential, because these quality characteristics are highly correlated and are expected to be influenced directly or indirectly by the process parameters or by their interactive effects (i.e., the process environment). In this work, the effects of selected process parameters on surface roughness, and the subsequent setting of the parameters and their levels, have been accomplished by Taguchi's parameter design approach. The experiments have been performed as per the combinations of levels of different process parameters suggested by the L9 orthogonal array. Experimental investigation of the end milling of AISI D2 steel with a carbide tool has been carried out by varying feed, speed, and depth of cut, and the surface roughness has been measured using a surface roughness tester. Analyses of variance have been performed on the mean and the signal-to-noise ratio to estimate the contribution of the different process parameters to the process.
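
A minimal sketch of the Taguchi analysis step described above: the smaller-the-better signal-to-noise ratio computed over an L9 orthogonal array, with the mean S/N per factor level used to pick the best setting. The roughness values and factor names are illustrative placeholders, not the paper's measurements:

```python
import numpy as np

# Standard L9 array for three factors (speed, feed, depth) at three levels.
l9_levels = np.array([
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
])
# Two illustrative Ra replicates (um) per run -- not the paper's data.
ra = np.array([[0.82, 0.85], [0.74, 0.76], [0.91, 0.88],
               [0.65, 0.67], [0.79, 0.80], [0.70, 0.72],
               [0.88, 0.90], [0.61, 0.63], [0.77, 0.75]])

# Smaller-the-better signal-to-noise ratio for each of the 9 runs
sn = -10.0 * np.log10((ra**2).mean(axis=1))

# Mean S/N per factor level; the best level maximizes S/N
for f, name in enumerate(["speed", "feed", "depth"]):
    means = [sn[l9_levels[:, f] == lvl].mean() for lvl in (1, 2, 3)]
    print(name, [round(m, 2) for m in means],
          "-> best level", int(np.argmax(means)) + 1)
```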

Keywords: D2 steel, orthogonal array, optimization, surface roughness, Taguchi methodology

Procedia PDF Downloads 527
34047 Data Collection Based on a Questionnaire Survey in Hospital Emergencies

Authors: Nouha Mhimdi, Wahiba Ben Abdessalem Karaa, Henda Ben Ghezala

Abstract:

The methods identified for data collection are diverse: electronic media, focus group interviews, and short-answer questionnaires [1]. The collection of poor-quality data, resulting, for example, from poorly designed questionnaires, the absence of good translators or interpreters, or the incorrect recording of data, allows conclusions to be drawn that are not supported by the data, or focuses attention only on the average effect of the program or policy. There are several solutions to avoid or minimize the most frequent errors, including obtaining expert advice on the design or adaptation of data collection instruments, or using technologies allowing better "anonymity" in the responses [2]. In this context, we opted to collect good-quality data by conducting a sizeable questionnaire-based survey on hospital emergencies, in order to improve emergency services and alleviate the problems encountered. In this paper, we present our study and detail the steps followed to achieve the collection of relevant, consistent, and practical data.

Keywords: data collection, survey, questionnaire, database, data analysis, hospital emergencies

Procedia PDF Downloads 90
34046 Process Assessment Model for Process Capability Determination Based on ISO/IEC 20000-1:2011

Authors: Harvard Najoan, Sarwono Sutikno, Yusep Rosmansyah

Abstract:

Most enterprises now use information technology services as assets to support their business objectives. These services are provided by an internal service provider (inside the enterprise) or an external service provider (outside the enterprise). To deliver quality information technology services, the service provider (from now on called the 'organization'), whether internal or external, must have a standard for its service management system. At present, the standard recognized as best practice for an organization's service management system is the international standard ISO/IEC 20000:2011. The most important part of this international standard is the first part, ISO/IEC 20000-1:2011, Service Management System Requirements, because it contains 22 organizational processes as requirements to be implemented in an organizational environment in order to build, manage, and deliver quality service to the customer. Assessing an organization's management processes is the first step towards implementing ISO/IEC 20000:2011 in those processes. This assessment needs a Process Assessment Model (PAM) as an assessment instrument. A PAM comprises two parts: a Process Reference Model (PRM) and a Measurement Framework (MF). The PRM is built by transforming the 22 processes of ISO/IEC 20000-1:2011, and the MF is based on ISO/IEC 33020. This assessment instrument was designed to assess the capability of the service management process in Divisi Teknologi dan Sistem Informasi (Information Systems and Technology Division), an internal organization of PT Pos Indonesia. The result of this assessment model can be proposed to improve the capability of the service management system.
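
A minimal sketch of an ISO/IEC 33020-style capability rating, assuming the standard's published N-P-L-F achievement thresholds; the aggregation rule shown (a level counts when its attributes are at least largely achieved and all lower levels are fully achieved) is a simplified reading of the measurement framework, not the full assessment procedure:

```python
def rate(achievement_pct: float) -> str:
    # ISO/IEC 33020 ordinal rating scale for a process attribute
    if achievement_pct <= 15: return "N"   # Not achieved
    if achievement_pct <= 50: return "P"   # Partially achieved
    if achievement_pct <= 85: return "L"   # Largely achieved
    return "F"                             # Fully achieved

def capability_level(pa_scores: dict[int, list[float]]) -> int:
    """pa_scores maps capability level -> achievement % of its attributes."""
    level = 0
    for lv in sorted(pa_scores):
        ratings = [rate(s) for s in pa_scores[lv]]
        if all(r in ("L", "F") for r in ratings):
            level = lv
            if not all(r == "F" for r in ratings):
                break  # higher levels require lower ones fully achieved
        else:
            break
    return level

# e.g. PA1.1 fully achieved, level-2 attributes largely achieved -> level 2
print(capability_level({1: [92], 2: [70, 60], 3: [40, 30]}))
```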

Keywords: ISO/IEC 20000-1:2011, ISO/IEC 33020:2015, process assessment, process capability, service management system

Procedia PDF Downloads 445
34045 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit: images within a window form a basic unit. With this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily-coherent pixels that are present only in certain units rather than in the whole observation period. The chain supports real-time processing of the continuous data, and the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images from a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporally-averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially-coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring across a wide range of scientific and practical applications.
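
A minimal sketch of the unit-by-unit strategy described above, where only one window of images is held in memory at a time so RAM usage is bounded by the window size; the analysis step is a placeholder, not the package's interferometric processing:

```python
import numpy as np
from collections import deque

def process_stream(images, window=10):
    """Process a continuous image stream unit by unit.

    `images` is any iterable of 2-D arrays; at most `window` of them
    are held in RAM, so an effectively infinite stream can be handled.
    """
    unit = deque(maxlen=window)
    for img in images:
        unit.append(img)
        if len(unit) == window:
            yield analyse_unit(list(unit))  # one displacement map per unit

def analyse_unit(imgs):
    # placeholder: e.g. form small-baseline interferograms within the
    # unit and invert them for the incremental displacement
    return np.mean(np.diff(np.stack(imgs), axis=0), axis=0)
```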

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 143
34044 Disaggregation of the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region

Authors: Mohammad Bakhshi, Firas Al Janabi

Abstract:

High-resolution rain data are very important as input for hydrological models. Among the models for high-resolution rainfall data generation, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly, and 10-minute) from daily data for a record period of around 20 years. The process was carried out with the DiMoN tool, which is based on a random cascade model and the method of fragments. Differences between the observed and simulated rain datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated data are accumulated into shorter time periods, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the p-value should be employed to show that their differences are reasonable. The results are encouraging, considering the tendency of generated high-resolution rainfall data towards overestimation.
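
A minimal sketch of the multiplicative random cascade idea underlying such disaggregation: each interval's depth is split into halves with random weights summing to one, so the daily total is preserved. The beta weight model and branching factor of two are assumptions for illustration (reaching 4-hourly or 10-minute resolution requires mixed branching numbers), not DiMoN's calibrated generator:

```python
import numpy as np

rng = np.random.default_rng(42)

def cascade_split(depth, levels):
    """Split a daily depth into 2**levels sub-intervals, conserving mass."""
    series = np.array([depth])
    for _ in range(levels):
        w = rng.beta(2.0, 2.0, size=series.size)  # assumed weight model
        # each parent interval splits into (w, 1 - w) shares
        series = np.column_stack([series * w, series * (1 - w)]).ravel()
    return series

daily = 24.0                                 # mm on a wet day
sub_daily = cascade_split(daily, levels=3)   # 8 intervals of 3 hours
assert np.isclose(sub_daily.sum(), daily)    # the daily total is preserved
```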

Keywords: DiMoN Tool, disaggregation, exceedance probability, Kolmogorov-Smirnov test, rainfall

Procedia PDF Downloads 189
34043 A Dirty Page Migration Method in the Process of Memory Migration Based on Pre-copy Technology

Authors: Kang Zijian, Zhang Tingyu, Burra Venkata Durga Kumar

Abstract:

This article investigates the challenges of memory migration during the live migration of virtual machines. We identified three challenges that can arise with pre-copy technology, one of the main ones being migration downtime. Decreasing the downtime helps a virtual machine keep working normally. Although pre-copy technology greatly decreases the downtime, the machine still needs to be paused to finish the last round of data transfer. This paper provides an optimization scheme for the problems in pre-copy technology, mainly an optimization of the dirty page migration mechanism. Typical pre-copy technology copies the (n-1)th round's dirty pages in the nth round. Our idea is instead to create a double-iteration method to solve this problem.
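
A minimal sketch of the standard pre-copy loop the paper optimizes: all pages are sent once, each round retransmits the pages dirtied during the previous round, and the VM is paused only for the final remainder (the downtime window). The VM methods and thresholds are hypothetical placeholders, not a real hypervisor API:

```python
def precopy_migrate(vm, send, stop_threshold=64, max_rounds=30):
    """Iterative pre-copy live migration (sketch; `vm` API is hypothetical)."""
    dirty = set(vm.all_pages())          # round 0: copy everything
    for _ in range(max_rounds):
        vm.clear_dirty_log()
        for page in dirty:
            send(page)                   # VM keeps running during this
        dirty = vm.dirty_pages()         # pages re-dirtied this round
        if len(dirty) <= stop_threshold:
            break                        # dirty set small enough to stop
    vm.pause()                           # downtime begins here
    for page in dirty | vm.dirty_pages():
        send(page)                       # final stop-and-copy round
    return vm.resume_on_destination()
```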

Keywords: virtual machine, pre-copy technology, memory migration process, downtime, dirty pages migration method

Procedia PDF Downloads 114
34042 Adjustment and Scale-Up Strategy of Pilot Liquid Fermentation Process of Azotobacter sp.

Authors: G. Quiroga-Cubides, A. Díaz, M. Gómez

Abstract:

The genus Azotobacter has been widely used as a bio-fertilizer due to its significant effects on the stimulation and promotion of plant growth in various agricultural species of commercial interest. In order to obtain significant viable cell concentrations, a scale-up strategy for a liquid fermentation process (SmF) with two strains of A. chroococcum (named Ac1 and Ac10) was validated and adjusted at laboratory and pilot scale. A batch fermentation process under previously defined conditions was carried out in a 3.5 L Infors® bioreactor, model Minifors, which served as a baseline for this research. For the purpose of increasing process efficiency, the effect of reducing the stirring speed was evaluated in combination with a fed-batch-type fermentation at laboratory scale. To reproduce the efficiency parameters obtained, a scale-up strategy with geometric and fluid-dynamic similarity was evaluated. According to the analysis of variance, this scale-up strategy did not have a significant effect on cell concentration in the laboratory and pilot fermentations (Tukey, p > 0.05). Regarding air consumption, the fermentation process at pilot scale showed a reduction of 23% versus the baseline. The reduction in energy consumption under laboratory and pilot scale conditions was 96.9% compared with the baseline.

Keywords: Azotobacter chroococcum, scale-up, liquid fermentation, fed-batch process

Procedia PDF Downloads 423
34041 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties

Authors: G. Martino, F. Silva, E. Marchal

Abstract:

Control over the characteristics of the delivered iron ore blend is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process, so end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made in an uncoordinated way, the beneficiation plant struggles to deliver the best possible blend. Technological improvements in several fields have allowed bridging the gap between departments and boosting integrated decision-making processes. Clustering and classification algorithms over historical production data generate reasonable predictions for the quality and volume of iron ore produced for each pile of run-of-mine (ROM) processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions in the process. This study shows how flexibilization, combined with a planning optimization model spanning the mine and the ore beneficiation processes, can reduce the risk of out-of-specification deliveries. The model's capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways of reducing cost or increasing profit by optimizing process indicators across the production chain and integrating the different plannings with the sales decisions.
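
A minimal sketch of the blend-selection step as a linear program: choose tonnages from candidate ROM piles to maximize iron content subject to a chemical specification and the delivery volume. All grades, limits, and tonnages are invented for illustration; the paper's model spans the full mine-to-client chain:

```python
import numpy as np
from scipy.optimize import linprog

fe    = np.array([0.62, 0.58, 0.65])    # Fe fraction per pile (assumed)
sio2  = np.array([0.06, 0.09, 0.04])    # silica fraction per pile (assumed)
avail = np.array([40e3, 60e3, 30e3])    # tonnes available per pile
order = 80e3                            # tonnes to deliver

# maximize total Fe  <=>  minimize -fe @ x
res = linprog(
    c=-fe,
    A_ub=[sio2 - 0.07],                 # blend silica <= 7% of tonnage
    b_ub=[0.0],
    A_eq=[np.ones(3)],                  # total tonnage equals the order
    b_eq=[order],
    bounds=list(zip(np.zeros(3), avail)),
)
x = res.x
print(x, "blend Fe:", fe @ x / order, "blend SiO2:", sio2 @ x / order)
```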

Keywords: clustering and classification algorithms, integrated planning, mathematical modeling, optimization, penalty minimization

Procedia PDF Downloads 111
34040 Federated Learning in Healthcare

Authors: Ananya Gangavarapu

Abstract:

Convolutional Neural Network (CNN) based models are providing diagnostic capabilities on par with medical specialists in many specialty areas. However, collecting medical data for training purposes is very challenging because of increased regulation around data collection and privacy concerns around personal health data. Gathering the data becomes even more difficult if the capture devices are edge-based mobile devices (like smartphones) with feeble wireless connectivity in rural/remote areas. In this paper, I would like to highlight the Federated Learning approach to mitigate data privacy and security issues.
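
A minimal sketch of federated averaging (FedAvg), the canonical federated learning scheme, in which each site trains locally and only model weights travel, never patient records; the client API and weighting are illustrative assumptions:

```python
import numpy as np

def fedavg_round(global_w, clients):
    """One FedAvg round. `clients` is a list of (n_samples, local_train)
    pairs, where local_train takes weights and returns locally trained
    weights -- a stand-in for on-device training."""
    total = sum(n for n, _ in clients)
    new_w = [np.zeros_like(w) for w in global_w]
    for n, local_train in clients:
        local_w = local_train([w.copy() for w in global_w])  # on-device
        for acc, w in zip(new_w, local_w):
            acc += (n / total) * w       # weight by local sample count
    return new_w

# usage: repeat for several rounds; raw data never leaves each hospital
# global_w = fedavg_round(global_w, [(n1, hospital1), (n2, hospital2)])
```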

Keywords: deep learning in healthcare, data privacy, federated learning, training in distributed environment

Procedia PDF Downloads 122
34039 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter

Authors: Jisun Lee, Jay Hyoun Kwon

Abstract:

As an alternative way to compensate for INS (inertial navigation system) errors in non-GNSS (Global Navigation Satellite System) environments, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weaknesses of a single type of geophysical data, as well as to improve the stability of the positioning. The main process for compensating the INS error using a geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two types of combination methods, centralized and decentralized filters, were applied to check the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated in simulation by supposing that the aircraft flies with a precise geophysical DB and sensors along nine different trajectories. In particular, the results were compared to those from navigation referenced to a single geophysical database to check the improvement due to the combination of heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories yielded better navigation results from the combination of gravity gradient and terrain data. It was also found that the centralized filter generally showed more stable results, because the weights for the decentralized filter could not be optimized due to the local inconsistency of the geophysical data. In the future, switching between geophysical data sources or combining different navigation algorithms will be necessary to obtain more robust navigation results.
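
A minimal sketch of the centralized-fusion measurement update in an EKF, where gravity-gradient and terrain observations are stacked into a single measurement vector; the matrices are generic placeholders, not the paper's state or sensor models:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """x: state, P: covariance, z: stacked measurements,
    h: predicted measurements h(x), H: stacked Jacobian, R: noise cov."""
    y = z - h                                  # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# centralized fusion: z = [gravity_gradient_obs, terrain_height_obs],
# H = np.vstack([H_grav, H_terrain]), R = block_diag(R_grav, R_terrain)
```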

Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain

Procedia PDF Downloads 328
34038 Application of Tocopherol as an Antioxidant to Reduce the Decomposition Process in Palm Oil Biodiesel

Authors: Supriyono, Sumardiyono, Rendy J. Pramono

Abstract:

Biodiesel is one of the promising alternative fuels for substituting petrodiesel as an energy source, with the advantage of being sustainable and eco-friendly. Because its raw material tends to decompose during storage, biodiesel has the same tendency to decompose during storage. Biodiesel decomposition raises the acid value as a result of oxidation of the double bonds of the fatty acid compounds in the biodiesel. Thus, the free fatty acid value can be used to evaluate the degradation of biodiesel due to the oxidation process. A high free fatty acid content in biodiesel can affect engine performance. Decomposition of biodiesel due to the oxidation reaction can be prevented by introducing a small amount of antioxidant. The origin of the raw materials and the process used to produce the biodiesel determine the effectiveness of an antioxidant. Biodiesel made from high free fatty acid (FFA) crude palm oil (CPO) by two-step esterification is vulnerable to the oxidation process, which results in an increase in the FFA value. Tocopherol, also known as vitamin E, is one of the antioxidants that can improve the stability of biodiesel against decomposition by the oxidation process. A tocopherol concentration of 0.5% in palm oil biodiesel reduced the increase in FFA by 13% at a temperature of 80 °C and an exposure time of 180 minutes.

Keywords: antioxidant, palm oil biodiesel, decomposition, oxidation, tocopherol

Procedia PDF Downloads 335
34037 Clinch Process Simulation Using Diffuse Elements

Authors: Benzegaou Ali, Brani Benabderrahmane

Abstract:

This work describes a numerical study of the TOX clinching process using diffuse elements. A computer code named SEMA ("Static Explicit Method Analysis") was developed to simulate the clinch joining process. The FE code is based on an updated Lagrangian scheme, and the resolution method is based on a static explicit approach. The integration of the elasto-plastic behavior law is realized with an algorithm of Simo and Taylor. The tools are represented by plane facets.

Keywords: diffuse elements, numerical simulation, clinching, contact, large deformation

Procedia PDF Downloads 346
34036 Challenges in Employment and Adjustment of Academic Expatriates Based in Higher Education Institutions in the KwaZulu-Natal Province, South Africa

Authors: Thulile Ndou

Abstract:

The purpose of this study was to examine the challenges encountered in attracting and recruiting academic expatriates, who in turn encounter their own obstacles in adjusting to and settling in their host country, host academic institutions, and host communities. The absence of literature on the attraction, placement, and management of academic expatriates in the South African context has been acknowledged. Moreover, Higher Education Institutions in South Africa have voiced concerns relating to the delayed and prolonged recruitment and selection processes experienced in the employment of academic expatriates. Once employed, academic expatriates should be supported and acquainted with their surroundings and the local communities, and assisted in establishing working relations with colleagues, in order to facilitate their adjustment and integration. Hence, an employer should play a critical role in facilitating the adjustment of academic expatriates. This mixed-methods study was located in four Higher Education Institutions in the KwaZulu-Natal province of South Africa. The explanatory sequential design approach was deployed. The chief merit of this approach is that it employs both quantitative and qualitative techniques of inquiry, so the study examined and interrogated its subject from a multiplicity of quantitative and qualitative vantage points, yielding a much richer illumination. A 5-point Likert scale questionnaire was used to collect quantitative data relating to interaction adjustment, general adjustment, and work adjustment from academic expatriates. One hundred and forty-two (142) academic expatriates participated in the quantitative study. Qualitative data relating to the employment process and the support offered to academic expatriates were collected through a structured questionnaire and semi-structured interviews. A total of 48 respondents, including line managers, human resources practitioners, and academic expatriates, participated in the qualitative study. Independent t-tests, ANOVA, and descriptive statistics were performed to analyse and interpret the quantitative data, and thematic analysis was used to analyse the qualitative data. The qualitative results revealed that academic talent is sourced from outside the borders of the country because of the academic skills shortage in almost all academic disciplines, especially those associated with Science, Engineering, and Accounting. However, delays in the work permit application process made it difficult to finalise recruitment and selection on time. Furthermore, the quantitative results revealed that academic expatriates experience general and interaction adjustment challenges associated with the use of the local language and understanding of the local culture. However, female academic expatriates were found to be better adjusted in these two areas than male academic expatriates. Moreover, significant mean differences were found between institutions, suggesting that academic expatriates based in rural areas experienced adjustment challenges differently from those based in urban areas. The study pointed to the need for policy revisions in the areas of immigration, human resources, and academic administration.

Keywords: academic expatriates, recruitment and selection, interaction and general adjustment, work adjustment

Procedia PDF Downloads 282
34035 IoT Continuous Monitoring of Biochemical Oxygen Demand for Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges: laboratory BOD5 results take 7 to 8 days of analysis, which hinders a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals. Reducing the BOD turnaround time from days to hours is our quest. This work presents such a solution, based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform, to monitor and control a WWTP and support decision-making. A DT is a virtual and dynamic replica of a production process. A DT requires the ability to collect and store real-time sensor data related to the operating environment; furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process so as to catch anomalies sooner. In our system for continuous-time monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm for analyzing the data uses ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors. Each bioreactor contains input/output access for wastewater samples (influent and effluent), hydraulic conduction tubes, pumps and valves for the batch sample and dilution water, an air supply for dissolved oxygen (DO) saturation, a cooler/heater for sample thermal stability, an optical DO sensor based on fluorescence quenching, pH, ORP, temperature and atmospheric pressure sensors, and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals, aiming to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, the biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically from its initial conditions: DO (saturated) and initial products of the kinetic oxidation process, CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimization of the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil.
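
A minimal sketch of the kind of coupled first-order kinetic system the abstract describes (DO, organic substrate, biomass, products), solved numerically; the Monod-style rate law and all constants are assumed placeholders, not the calibrated model from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Y, kla, do_sat = 0.3, 30.0, 0.5, 2.0, 8.0  # assumed values

def rhs(t, y):
    do, s, x, p = y          # oxygen, substrate, biomass, products
    growth = mu_max * s / (Ks + s) * x * do / (1.0 + do)
    return [
        kla * (do_sat - do) - growth,  # DO: re-aeration minus uptake
        -growth / Y,                   # substrate consumed
        growth,                        # biomass produced
        growth / Y - growth,           # oxidation products (mass balance)
    ]

sol = solve_ivp(rhs, (0.0, 48.0), [do_sat, 150.0, 5.0, 0.0])
# BOD-style estimate: substrate oxidized so far, in O2-equivalent units
print("substrate remaining after 48 h:", sol.y[1, -1])
```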

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 59
34034 Building Transparent Supply Chains through Digital Tracing

Authors: Penina Orenstein

Abstract:

In today's world, particularly with COVID-19 as a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before, in order to find areas for improvement and greater efficiency, reduce the chances of disruption, and stay competitive. The concept of supply chain mapping is one where every process and route is mapped in detail between each vendor and supplier. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also does not allow for much transparency beyond the first supplier tier and may generate irrelevant data (noise) that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question of whether the supply chain can be re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers superimposed on a geographical map. The driving force behind this idea is to be able to trace individual parts to the exact site where they are manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through to the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
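
A minimal sketch of the layered mapping idea: suppliers as nodes carrying tier and location attributes, linked by directed supply edges so any part can be traced from raw material to finished good; the names and coordinates are invented for illustration:

```python
import networkx as nx

g = nx.DiGraph()
g.add_node("OEM", tier=0, latlon=(40.7, -74.0))
g.add_node("Assembler A", tier=1, latlon=(35.7, 139.7))
g.add_node("Component B", tier=2, latlon=(31.2, 121.5))
g.add_node("Raw material C", tier=3, latlon=(-23.6, -46.6))
g.add_edges_from([("Assembler A", "OEM"),
                  ("Component B", "Assembler A"),
                  ("Raw material C", "Component B")])

# trace a finished part back to its raw-material origin
print(list(nx.ancestors(g, "OEM")))                 # all upstream suppliers
print(nx.shortest_path(g, "Raw material C", "OEM")) # one supply route
```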

Keywords: data mining, supply chain, empirical research, data mapping

Procedia PDF Downloads 156
34033 The Utilization of Big Data in Knowledge Management Creation

Authors: Daniel Brian Thompson, Subarmaniam Kannan

Abstract:

The weight of knowledge in this world and within the repositories of organizations has already reached immense capacity and is constantly increasing as time goes by. To accommodate these constraints, Big Data implementations and algorithms are utilized to obtain new or enhanced knowledge for decision-making. The transition from data to knowledge provides the transformational changes that deliver tangible benefits to those implementing these practices. Today, organizations derive knowledge from observations and intuitions, and this information or data is translated into best practices for knowledge acquisition, generation, and sharing. Through the widespread usage of Big Data, the main intention is to provide information that has been cleaned and analyzed in order to nurture tangible insights that an organization can apply to its knowledge-creation practices based on facts and figures. The translation of data into knowledge generates value for an organization, enabling it to make decisive decisions and proceed with the transition to best practices. Without a strong foundation of knowledge and Big Data, businesses are not able to grow and be enhanced within the competitive environment.

Keywords: big data, knowledge management, data driven, knowledge creation

Procedia PDF Downloads 93
34032 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques for determining the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacture of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data, yet they offer the potential to extend tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. A database suitable for developing machine learning models is then created. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this purpose, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress; however, some areas of the car body parts behave more sensitively than the overall part, indicating that intelligent tolerancing is useful there in order to design and control preceding and succeeding processes more efficiently.
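
A minimal sketch of the learning step suggested above, assuming historical per-point deviation spreads as training data: a regressor predicts the local spread from a point's position, and a tolerance band is derived as a multiple of the prediction. The data, features, and the three-sigma rule are illustrative assumptions, not the authors' final method:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
xyz = rng.uniform(0, 1000, size=(500, 3))        # measurement points (mm)
# observed deviation std dev per point -- synthetic stand-in data
spread = 0.1 + 0.002 * xyz[:, 0] + rng.normal(0, 0.02, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(xyz, spread)

new_points = rng.uniform(0, 1000, size=(5, 3))   # unknown part geometry
tol = 3.0 * model.predict(new_points)            # assumed +/- 3-sigma band
for p, t in zip(new_points, tol):
    print(np.round(p, 1), f"-> tolerance +/- {t:.3f} mm")
```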

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 97
34031 Utilization of Cloud-Based Learning Platform for the Enhancement of IT Onboarding System

Authors: Christian Luarca

Abstract:

The study aims to assess the efficiency of e-trainings that use a cloud platform as part of the onboarding process for IT support engineers. Traditional lecture-based training requires human resources to guide and assist new hires as part of onboarding, which takes time and effort. The use of an electronic medium as a training platform provides basic two-way communication that can be repeated as needed. The study focuses on determining the most efficient manner of learning the basics of IT support in the shortest time possible. This was determined by conducting the same set of knowledge transfer categories in two different approaches, one being e-training and the other the traditional method. Performance assessment will be done with the Service Tracker Assessment (STA) tool and Service Manager. Data gathered from this ongoing study will promote the utilization of e-trainings in the IT onboarding process.

Keywords: cloud platform, e-Training, efficiency, onboarding

Procedia PDF Downloads 138
34030 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenge of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the Support Vector Machine (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and their performance is evaluated to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
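
A minimal sketch of the clustering step discussed above: K-means applied to a normalized gene-expression matrix (genes by samples) to group co-expressed genes; the random matrix and cluster count stand in for a real microarray dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
expr = rng.normal(size=(1000, 40))         # 1000 genes, 40 samples

X = StandardScaler().fit_transform(expr)   # normalize each sample column
km = KMeans(n_clusters=8, n_init=10, random_state=1).fit(X)

# genes sharing a label are candidate co-expression clusters
for c in range(8):
    print("cluster", c, ":", int((km.labels_ == c).sum()), "genes")
```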

Keywords: microarray technology, gene expression data, clustering, gene Selection

Procedia PDF Downloads 307
34029 Survey on Data Security Issues Through Cloud Computing Amongst SMEs in Nairobi County, Kenya

Authors: Masese Chuma Benard, Martin Onsiro Ronald

Abstract:

Businesses have recently been using cloud computing more frequently because they wish to take advantage of its benefits. However, employing cloud computing also introduces new security concerns, particularly with regard to data security, the potential risks and weaknesses that could be exploited by attackers, and the various tactics and strategies that could be used to lessen these risks. This study examines data security issues in cloud computing among SMEs in Nairobi County, Kenya. The study used a sample size of 48 and a mixed-methods research approach. The findings show that the data owner has no control over the cloud vendor's data management procedures, and there is no way to ensure that data is handled legally. This implies a loss of control over the data stored in the cloud. Data and information stored in the cloud may face a range of availability issues due to internet outages, which can represent a significant risk to data kept in shared clouds. Integrity, availability, and secrecy are all affected.

Keywords: data security, cloud computing, information, information security, small and medium-sized firms (SMEs)

Procedia PDF Downloads 65
34028 Cloud Design for Storing Large Amount of Data

Authors: M. Strémy, P. Závacký, P. Cuninka, M. Juhás

Abstract:

The main goal of this paper is to introduce our design of a private cloud for storing large amounts of data, especially pictures, and to provide a good technological backend for data analysis based on parallel processing and business intelligence. We have tested hypervisors, cloud management tools, storage for all the data, and Hadoop for analysis of unstructured data. Providing high availability, virtual network management, logical separation of projects, and rapid deployment of physical servers to our environment was also needed.

Keywords: cloud, glusterfs, hadoop, juju, kvm, maas, openstack, virtualization

Procedia PDF Downloads 339
34027 Exergy Analysis of Reverse Osmosis for Potable Water and Land Irrigation

Authors: M. Sarai Atab, A. Smallbone, A. P. Roskilly

Abstract:

A thermodynamic study is performed on the reverse osmosis (RO) desalination process for brackish water. A detailed RO model of the thermodynamic properties, with and without an energy recovery device, was built in Simulink/MATLAB and validated against reported measurement data. The efficiency of desalination plants can be estimated by both the first and second laws of thermodynamics. While the first law focuses on the quantity of energy, the second-law analysis (i.e., exergy analysis) introduces quality. This paper uses the Main Outfall Drain in Iraq as a case study to conduct an energy and exergy analysis of the RO process. The results show that it is feasible to use an energy recovery method for reverse osmosis at salinities below 15,000 ppm, as the exergy efficiency is doubled. Moreover, the analysis shows that the highest exergy destruction occurs in the rejected water and the lowest in the permeate flow, accounting for 37% and 4.3%, respectively.
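
A minimal sketch of a second-law efficiency estimate for an RO unit, approximating the least work of separation by the feed osmotic pressure (its zero-recovery lower bound) and comparing it with the ideal pump work; the van 't Hoff estimate and all numbers are illustrative assumptions, not the paper's Simulink model:

```python
R, T = 8.314, 298.15                        # J/(mol K), K

def osmotic_pressure_pa(tds_mg_per_l):
    # van 't Hoff: pi = i c R T, treating the salts as NaCl (i = 2)
    mol_per_m3 = tds_mg_per_l / 58.44       # mg/L == g/m^3; M = 58.44 g/mol
    return 2.0 * mol_per_m3 * R * T

feed_tds = 10_000                           # brackish feed, mg/L (ppm)
p_applied = 20e5                            # applied pressure, Pa (= J/m^3)

w_min = osmotic_pressure_pa(feed_tds)       # least separation work, J/m^3
eta_ii = w_min / p_applied                  # second-law efficiency, ideal pump
print(f"pi_feed = {w_min/1e5:.1f} bar, second-law efficiency ~ {eta_ii:.2f}")
```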

Keywords: brackish water, exergy, irrigation, reverse osmosis (RO)

Procedia PDF Downloads 160