Search results for: deactivation process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15231

11961 Case Scenario Simulation concerning Eventual Ship Sourced Oil Spill, Expansion and Response Process in Istanbul Strait

Authors: Cihat Aşan

Abstract:

The Istanbul Strait is a crucial and narrow waterway that not only links two continents but also serves as a crossing point for petroleum, the largest energy resource, between its supply and demand regions. Beyond these strategic features, sensitivities such as a surrounding population of roughly 18 million, military facilities, ports and oil storage areas add considerable risk to the use of the Strait. According to statistics of the Turkish Ministry of Transport, Maritime Affairs and Communications, although the number of vessel passages through the Istanbul Strait is declining, the tonnage of hazardous and flammable cargo such as oil and chemicals is increasing, and the risk of oil pollution and of loss of life and property is therefore rising. For these reasons, it is crucial to be prepared for the initial and subsequent response to a potential ship-sourced oil spill, which could block the Strait for an unacceptably long period. In this study, earlier work on the sensitive areas of the Istanbul Strait was taken into account, and a possible oil spill scenario for a selected sea area was loaded into the PISCES 2 (Potential Incident Simulation Control and Evaluation System) decision support system. The consequences of the simulation, such as the oil spreading process and the number and types of response assets required, were obtained and evaluated.

Keywords: Istanbul strait, oil spill, PISCES simulator, initial response

Procedia PDF Downloads 343
11960 Innovation in PhD Training in the Interdisciplinary Research Institute

Authors: B. Shaw, K. Doherty

Abstract:

The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines it can seem like there are enormous differences of research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within, often unacknowledged, histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding of research practice across disciplines difficult. To explore this, a one day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multi-disciplinary context. Instead of presenting results at a conference, research students were tasked to articulate their method of inquiry. A working party of students from across disciplines had to design a conference call, visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings, or only described method briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. Each session involved presentations by visual artists, communications students and computing researchers with inter-disciplinary dialogue, facilitated by alumni Chairs. The apparently simple focus on method illuminated research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self – and public- critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reportage of research, faults and all, is good valid research. The process also revealed the degree that disciplines can learn from each other- the computing students gained insights from the sensitive social contextualizing generated by communications and art and design students, and art and design students gained understanding from the greater ‘distance’ and emphasis on application that computing students applied to their subjects. 
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.

Keywords: interdisciplinary, method, research student, training

Procedia PDF Downloads 206
11959 Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus

Authors: Rita Magdalena, N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Sofia Saidah, Bima Sakti

Abstract:

Blood vessel segmentation of the retinal fundus is important in biomedical science for diagnosing eye-related ailments, because it simplifies the assessment of the state of a fundus image by medical experts. In this study, we therefore designed MATLAB software that segments the retinal blood vessels in fundus images. The segmentation process has two main steps. The first is image preprocessing, which aims to improve image quality so that segmentation is optimal. The second is the segmentation itself, which extracts the blood vessels from the fundus image. The segmentation methods analyzed in this study are morphology operations, the discrete wavelet transform, and their combination. The dataset consists of 40 retinal images and 40 corresponding manually segmented images. After several test scenarios, the average accuracy of the morphology operation method is 88.46%, while that of the discrete wavelet transform is 89.28%. By combining the two methods, the average accuracy increases to 89.53%. The result of this study is an image processing system that can segment the blood vessels of the retinal fundus with high accuracy and low computation time.
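
The abstract does not give the exact MATLAB pipeline, so the following is only a minimal Python sketch of the morphology-operation step; the green-channel input, the disk structuring element of radius 8, Otsu thresholding and the file names are all assumptions, not the authors' implementation:

```python
# Sketch of a morphology-based retinal vessel segmentation step.
# Structuring-element size, thresholding method and file names are assumptions,
# not the authors' exact MATLAB implementation.
import numpy as np
from skimage import io, img_as_float
from skimage.filters import threshold_otsu
from skimage.morphology import black_tophat, disk

fundus = img_as_float(io.imread("fundus_01.png"))   # hypothetical fundus image
green = fundus[..., 1]                               # vessels contrast best in the green channel

# Black top-hat (closing minus original) highlights dark, thin vessel structures.
tophat = black_tophat(green, disk(8))

# Threshold the vessel-enhanced image to a binary vessel map.
vessels = tophat > threshold_otsu(tophat)

# Pixel-wise accuracy against a manual segmentation, as in the study's evaluation.
manual = io.imread("manual_01.png") > 0              # hypothetical ground-truth mask
accuracy = np.mean(vessels == manual)
print(f"accuracy = {accuracy:.4f}")
```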

Keywords: discrete wavelet transform, fundus retina, morphology operation, segmentation, vessel

Procedia PDF Downloads 195
11958 Development of Elementary Literacy in the Czech Republic

Authors: Iva Košek Bartošová

Abstract:

Great attention is paid in the Czech Republic to the development of first reading, that is, early literacy skills. Yet inconclusive results from PISA and PIRLS force us to reconsider the teacher's work, his or her roles in the education process, and the methods and forms used in lessons. It is also important to monitor the family environment and the pupils themselves. The aim of this contribution is to examine methods of practicing reading technique and their results in the process of comprehension. The first part presents the goals of reading literacy development and the methods used in reading practice in some EU countries, followed by a comparison of research carried out in 2015 with the help of an eye tracker device and research conducted at the Institute of Education and Psychological Counselling of the Czech Republic in 2011/12. These are the results of a diagnostic reading test in the first classes of primary schools taught by the genetic method and by the analytic-synthetic method. The results show that in the first stage of practice there are no statistically significant differences between subjects taught by the different methods of reading practice (using several diagnostic texts focused on reading technique and its comprehension). Different results appear at the end of Grade One and during Grade Two of primary school.

Keywords: elementary literacy, eye tracker device, diagnostic reading tests, reading teaching method

Procedia PDF Downloads 187
11957 Single Pole-To-Earth Fault Detection and Location on the Tehran Railway System Using ICA and PSO Trained Neural Network

Authors: Masoud Safarishaal

Abstract:

Detecting the location of pole-to-earth faults is essential for the safe operation of the electrical system of a railroad. This paper uses a combination of evolutionary algorithms and neural networks to increase the accuracy of single pole-to-earth fault detection and location on the Tehran railroad power supply system. The Imperialist Competitive Algorithm (ICA) and Particle Swarm Optimization (PSO) are used to train the neural network in order to improve the accuracy and convergence of the learning process. Because of the system's nonlinearity, fault detection is an ideal application for the proposed method; the 600 Hz harmonic ripple method is used here for fault detection. The substations were simulated by considering various feeding configurations of the circuit, the transformer, and typical Tehran metro parameters for the silicon rectifier. The data required for the network learning process were gathered from the simulation results. The value of the 600 Hz component changes with the location of a single pole-to-earth fault; therefore, the 600 Hz components are used as the inputs of the neural network, and the fault location is the output of the network. The simulation results show that the proposed methods can accurately predict the fault location.
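
The abstract does not include the network architecture or the PSO settings, so the following is only a minimal sketch of the idea of PSO-trained neural network regression, assuming a single hidden layer, 600 Hz harmonic features as inputs, fault distance as output, and synthetic data standing in for the simulated substation data:

```python
# Minimal sketch of a PSO-trained neural network regressor (assumed architecture and data).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for simulated data: 600 Hz harmonic features -> fault distance.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 10.0 * X[:, 0] + 4.0 * X[:, 1] - 2.0 * X[:, 2]    # hypothetical relationship

n_in, n_hid = X.shape[1], 8
n_w = n_in * n_hid + n_hid + n_hid + 1                # total number of weights and biases

def predict(w, X):
    """One hidden layer with tanh activation, linear output."""
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid:-1]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

# Plain global-best PSO over the flattened weight vector.
n_particles, iters = 30, 300
pos = rng.normal(scale=0.5, size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([mse(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(f"final training MSE: {mse(gbest):.4f}")
```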

Keywords: single pole-to-earth fault, Tehran railway, ICA, PSO, artificial neural network

Procedia PDF Downloads 123
11956 TQM Framework Using Notable Authors Comparative

Authors: Redha M. Elhuni

Abstract:

This paper presents an analysis of the essential characteristics of the TQM philosophy by comparing the work of five notable authors in the field. A framework is produced which gathers the identified TQM enablers under the well-known operations management dimensions of process, business and people. These enablers are linked with sustainable development via balanced-scorecard-type economic and non-economic measures. In order to capture a picture of Libyan companies' efforts to implement TQM, a questionnaire survey was designed and carried out. The survey results are presented, showing the main factors that differentiate the sample companies and providing a way of assessing the gap between the theoretical underpinning and practitioners' undertakings. The results indicate that companies experience considerable difficulty in translating TQM theory into practice. Only a few companies have successfully adopted a holistic approach to the TQM philosophy, and most of these place relatively high emphasis on hard elements compared with the soft issues of TQM. Moreover, even where companies realize the economic outputs, non-economic benefits such as workflow management, skills development and team learning are not realized, and overall the non-economic measures secured low weightings compared with the economic measures. We believe that the framework presented in this paper can help a company to focus its TQM implementation efforts on the process, system and people management dimensions.

Keywords: TQM, balanced scorecard, EFQM excellence model, oil sector, Libya

Procedia PDF Downloads 405
11955 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications

Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong

Abstract:

This research studies a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. Strong convergence is guaranteed in the framework of Banach spaces with additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. In analogy with the projection technique for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the duality mapping and its inverse to overcome the unavailability of the duality representation that Hilbert space theorists exploit. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to the ideal efficiency of vector optimization problems composed of finitely many objective functions, and we show that the solution obtained from our process is the one closest to the origin. An illustrative numerical example is also given to support the results.
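
The abstract does not state the exact scheme, but a two-step iteration of this kind for a nonself-mapping T with the generalized projection and the normalized duality mapping is commonly written in the literature in a form such as the following; it is given here only as an assumed illustration, not as the authors' scheme:

```latex
% One common two-step (Ishikawa-type) scheme with the generalized projection \Pi_C
% and normalized duality mapping J; the coefficients \alpha_n, \beta_n \in [0,1] are assumed.
\begin{aligned}
  y_n     &= \Pi_C\, J^{-1}\bigl(\beta_n J x_n + (1-\beta_n)\, J T x_n\bigr),\\
  x_{n+1} &= \Pi_C\, J^{-1}\bigl(\alpha_n J x_n + (1-\alpha_n)\, J T y_n\bigr),
  \qquad n \ge 0 .
\end{aligned}
```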

Keywords: asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space

Procedia PDF Downloads 260
11954 A Tuning Method for Microwave Filter via Complex Neural Network and Improved Space Mapping

Authors: Shengbiao Wu, Weihua Cao, Min Wu, Can Liu

Abstract:

This paper presents an intelligent tuning method for microwave filters based on a complex neural network and improved space mapping. The tuning process consists of two stages: initial tuning and fine tuning. At the beginning of the tuning, the return loss of the filter is brought into the passband via the phase error. During the fine tuning, the phase shift caused by the transmission line and higher-order modes is removed by curve fitting. Then, a Cauchy method based on the admittance parameters (Y-parameters) is used to extract the coupling matrix, and the influence of resonant cavity loss is eliminated during the parameter extraction. Using processed data pairs (the amount of screw variation and the corresponding variation of the coupling matrix), a tuning model is established with the complex neural network. Through the improved space mapping algorithm, the mapping relationship between the actual model and the ideal model is established, and the amplitude and direction of the tuning are continually updated. Finally, a tuning experiment on an eighth-order coaxial cavity filter shows that the proposed method performs well in terms of both tuning time and tuning precision.

Keywords: microwave filter, scattering parameter, coupling matrix, intelligent tuning

Procedia PDF Downloads 311
11953 Interrogating Democracy and Development in Africa: A Case Study of Nigeria

Authors: Yusuf Bala

Abstract:

The last decades of the 20th century witnessed renewed hope about the birth of democracy and development in Africa, and the interface between the two has long engaged the sustained interest of scholars and researchers across the continent. The democratization process was actively supported by all segments of society: labour, students, market women and rural dwellers, who saw in it the prospect of reversing the trend of political despair and disillusionment that had hitherto characterized political life in Africa. Political tyranny and dictatorship, while having their own clientele and beneficiaries, had a negative and suffocating effect on the majority of the people. The democratic aspiration of the African people is not confined to the arena of political democracy, elections, and the granting of civil and political rights; it also involves the demand for economic empowerment, better living standards and adequate social welfare. Indeed, for the majority of the people, democracy is meaningful only when it delivers socio-economic goods. However, although democracy and development have generated enormous interest, no conclusive evidence seems to be shared in Africa. This research therefore emphasizes certain issues: corruption in African democracies; ethnic conflict and democracy; the contribution of women to democratic practice and the still very low participation of women in the political arena; and the democratization process and industrial relations as factors that hinder the development of democracy in Africa, with Nigeria as a case study.

Keywords: democracy, development, dictatorship, conflict, ethnicity

Procedia PDF Downloads 318
11952 Storage Tank Overfill Protection in Compliance with Functional Safety Standard: IEC 61511

Authors: Hassan Alsada

Abstract:

Tank overfill accidents are a major concern for industries handling large volumes of hydrocarbons; Buncefield, Jaipur, Puerto Rico and West Virginia are just a few of the accidents with catastrophic consequences. It is therefore very important for any such industry to take the right safety measures for overfill prevention. Moreover, one of the main causative factors in these overfill accidents was inadequate risk analysis and, subsequently, inadequate design. This study provides a full assessment of the tank overfill scenario in accordance with the functional safety standard IEC 61511, "Safety instrumented systems for the process industry sector", following the standard's Safety Life Cycle (SLC), which includes the analysis phase, the implementation phase and the operation phase. The paper discusses the Independent Protection Layers (IPLs) for tank overfill in depth, with a systematic analysis that avoids the safety risk of under-design and the financial risk of facility over-design. The result is a clear and systematic assessment, in compliance with the standard, that can help to assess an existing tank overfill protection setup or serve as a guide for designing overfill protection for new storage facilities.
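
The abstract does not give its LOPA figures, but the layer-of-protection arithmetic that underpins such an assessment can be sketched as follows, with purely illustrative initiating-event frequencies and PFD values; all numbers are assumptions, not taken from the study:

```python
# Illustrative LOPA-style calculation for a tank overfill scenario.
# All frequencies and PFDs below are hypothetical example values.

initiating_event_freq = 0.1      # overfill demands per year (e.g. level-control failure)
ipl_pfds = {
    "operator response to high-level alarm": 0.1,
    "independent high-high level trip": 0.01,
}
tolerable_freq = 1e-5            # tolerable frequency of the consequence, per year

# Mitigated event frequency = initiating frequency x product of IPL PFDs.
mitigated_freq = initiating_event_freq
for pfd in ipl_pfds.values():
    mitigated_freq *= pfd

# Risk reduction still required from an additional SIF, expressed as a target PFD.
required_pfd = tolerable_freq / mitigated_freq if mitigated_freq > tolerable_freq else 1.0

# Map the target PFD onto a SIL band per IEC 61511 (low-demand mode).
if required_pfd >= 1.0:
    sil = "no additional SIF required"
elif required_pfd >= 1e-2:
    sil = "SIL 1"
elif required_pfd >= 1e-3:
    sil = "SIL 2"
elif required_pfd >= 1e-4:
    sil = "SIL 3"
else:
    sil = "SIL 4"

print(f"mitigated frequency: {mitigated_freq:.2e}/yr, target PFD: {required_pfd:.2e}, {sil}")
```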

Keywords: IEC 61511, PHA, LOPA, process safety, safety, health, environment, safety instrumented systems, safety instrumented function, functional safety, safety life cycle

Procedia PDF Downloads 90
11951 Students’ Perception of Guided Imagery Improving Anxiety before Examination: A Qualitative Study

Authors: Wong Ka Fai

Abstract:

Introduction: Many students worry before an examination; this is a common picture worldwide. Health problems caused by pre-examination stress include insomnia, tiredness, isolation, stomach upset and anxiety, and nursing students experience high levels of examination stress. Guided imagery is a healing process that applies the imagination to help the body heal, survive or live well; it can bring about significant physiological and biochemical changes that trigger the recovery process. A study of nursing students using guided imagery to improve their anxiety before examinations was therefore proposed. Aim: The aim of this study was to explore the outcome of guided imagery on nursing students' anxiety before examinations in Hong Kong. Method: A qualitative study design was used. Sixteen first-year students in a nursing programme were invited to practice guided imagery to improve their anxiety during the pre-examination period. One week before the examination, semi-structured interviews with these students were carried out by the researcher. Result: Content analysis of the interview data showed considerable similarities in the students' perceptions of anxiety. The students' perceived improvement in anxiety was evidenced by a reduction in stressful feelings, improved physical health, satisfaction with daily activities, and enhanced skills for solving problems and handling upcoming situations. Conclusion: This study indicates that guided imagery can be used as an alternative measure to improve students' anxiety and psychological problems.

Keywords: nursing students, perception, anxiety, guided imagery

Procedia PDF Downloads 76
11950 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key project processes in the construction industry. Although architects have the responsibility to produce complete, accurate and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors, so the identification of risks is necessary. First, a literature review on the design process was conducted, and a questionnaire was then designed to identify the risks and risk factors. The questions in the questionnaire were based on the "similar service description of study and supervision of architectural works" published by the Vice Presidency of Strategic Planning and Supervision of I.R. Iran, taken as the basis of architects' tasks. Second, the top ten risks of architectural activities were identified. To determine the positions of the possible causes of these risks with respect to architectural activities, the activities were located in a design process modeled with the IDEF0 technique. The research was then carried out on a case study by checking the design drawings, interviewing the architect and the client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as "defining the current and future requirements of the project", "studies and space planning" and "time and cost estimation of the suggested solution" carry a higher error risk than others. Moreover, the most important causes include "unclear goals of the client", "time pressure from the client" and "lack of knowledge of architects about the requirements of end-users". For error detection in the case study, the lack of criteria, standards and design criteria, and the lack of coordination among them, was a barrier; nevertheless, "lack of coordination between the architectural design and the electrical and mechanical services", "violation of standard dimensions and sizes in space design" and "design omissions" were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 130
11949 Robotics and Embedded Systems Applied to the Buried Pipeline Inspection

Authors: Robson C. Santos, Julio C. P. Ribeiro, Iorran M. de Castro, Luan C. F. Rodrigues, Sandro R. L. Silva, Diego M. Quesada

Abstract:

This work aims to develop a robot, in the form of an autonomous vehicle, for the detection, inspection and mapping of underground pipelines, based on the ATmega328 Arduino platform. This open-source hardware prototyping platform is programmed in a language very similar to C/C++, which facilitates its use in robotics, and it resembles the PLCs used in large industrial processes. The robot traverses the surface without direct human action in order to automate the detection of buried pipes, guided by electromagnetic induction. The induction signal comes from coils that send their output to the Arduino microcontroller, which evaluates the differences in signal intensity, processes the information, and then drives electrical components such as relays and motors, allowing the prototype to move across the surface while gathering the necessary information. The robot was built from electrical and electronic assemblies that allowed its application to be tested. The assembly consists of metal detector coils, circuit boards and the microprocessor; these previously developed, interconnected circuits provide the sensing, process control and mechanical actions needed for an autonomous vehicle that detects and maps buried pipeline plates.

Keywords: robotic, metal detector, embedded system, pipeline inspection

Procedia PDF Downloads 614
11948 A Coevolutionary Framework of Business-IT Alignment through the Lens of Enterprise Architecture

Authors: Mengmeng Zhang, Honghui Chen, Kalle Lyytinen

Abstract:

The major challenges to sustainable business-IT alignment (BITA) in a company are rooted in its volatile external competitive environment, increasingly complex internal relationships, and shifting IT roles. Failure to adequately address BITA results in wasted organizational resources, lost competitive advantage, and inadequate returns on investment. Coevolution is better suited to describing the dynamic relationship between business and IT and has received increasing attention in recent years, and multiple mechanisms for achieving business-IT coevolution (BITC), such as sharing domain knowledge and modular design, have been identified. However, in the absence of a complete management process, BITC remains hard to put into practice. This study addresses what the BITC management process looks like and how to execute this coevolution step by step. A practical coevolutionary framework that combines the enterprise architecture (EA) method with misalignment analysis is proposed. It contains the steps of EA design, misalignment detection, misalignment correction, and EA management / misalignment prevention, with the misalignment correction step discussed at particular length. The study also evaluates the proposed framework by comparing the characteristics, principles and approaches of coevolution reported in the literature.

Keywords: business-IT alignment, business-IT coevolution, enterprise architecture, misalignment analysis, misalignment correction

Procedia PDF Downloads 150
11947 Improved Non-Ideal Effects in AlGaN/GaN-Based Ion-Sensitive Field-Effect Transistors

Authors: Wei-Chou Hsu, Ching-Sung Lee, Han-Yin Liu

Abstract:

This work uses an H2O2 oxidation technique to improve the pH sensitivity of AlGaN/GaN-based ion-sensitive field-effect transistors (ISFETs). A 10-nm-thick Al2O3 layer was grown on the surface of the AlGaN, and the pH sensitivity improved from 41.6 mV/pH to 55.2 mV/pH, since the H2O2-grown Al2O3 serves as a passivation layer and suppresses the Fermi-level pinning problem in the ISFET treated with the H2O2 oxidation process. The hysteresis effect in the H2O2-treated ISFET also became insignificant. The hysteresis effect was observed by dipping the ISFETs into solutions of different pH values and comparing the voltage difference between the initial and final conditions; the hysteresis voltage (Vhys) of the ISFET with the H2O2 oxidation process improved from 8.7 mV to 4.8 mV. The hysteresis effect is related to buried binding sites, which in turn are related to material defects such as the threading dislocations in the AlGaN/GaN heterostructure grown by the hetero-epitaxy technique; the H2O2-grown Al2O3 passivates these defects and itself has fewer material defects. The long-term stability of the ISFET is estimated from a drift measurement, conducted by dipping the ISFETs into a solution of a specific pH value for 12 hours while the ISFETs operate at a fixed quiescent point. The drift rate is estimated as the drift voltage divided by the total measuring time. The drift rate of the ISFET improved from 10.1 mV/hour to 1.91 mV/hour in the pH 7 solution, from 14.06 mV/hour to 6.38 mV/hour in the pH 2 solution, and from 12.8 mV/hour to 5.48 mV/hour in the pH 12 solution. The drift effect results from the capacitance variation in the electric double layer; the H2O2-grown Al2O3 provides an additional capacitance connected in series with the electric double layer, so the capacitance variation of the double layer becomes insignificant. Overall, the H2O2 oxidation process is a simple, fast and cost-effective method for the AlGaN/GaN-based ISFET; the performance of the AlGaN/GaN ISFET was improved effectively and the non-ideal effects were suppressed.
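
The sensitivity and drift figures above follow from simple slope and rate calculations; a small sketch with hypothetical measurement values (not the study's raw data) illustrates how such numbers would be obtained:

```python
# Sensitivity (mV/pH) as the slope of response voltage vs. pH, and drift rate
# (mV/hour) as drift voltage over measuring time. All numbers are hypothetical.
import numpy as np

ph = np.array([2.0, 4.0, 7.0, 10.0, 12.0])
v_mv = np.array([612.0, 502.0, 336.0, 171.0, 61.0])     # assumed ISFET output in mV

sensitivity = -np.polyfit(ph, v_mv, 1)[0]                # linear-fit slope, sign by convention
print(f"pH sensitivity ~ {sensitivity:.1f} mV/pH")

drift_voltage_mv = 22.9                                  # assumed total drift over the test
measuring_time_h = 12.0
print(f"drift rate ~ {drift_voltage_mv / measuring_time_h:.2f} mV/hour")
```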

Keywords: AlGaN/GaN, Al2O3, hysteresis effect, drift effect, reliability, passivation, pH sensors

Procedia PDF Downloads 325
11946 Urban Sustainable Development Based on Habitat Quality Evolution: A Case Study in Chongqing, China

Authors: Jing Ren, Kun Wu

Abstract:

Over the last decade or so, China's urbanization has advanced rapidly, and at the same time it has had a strongly negative impact on habitat quality. It is therefore of great significance for sustainable urban development to study the impact of land use change on habitat quality in mountain cities. This paper analyzes the spatial and temporal land use changes in Chongqing from 2010 to 2020 using ArcGIS 10.6, as well as the evolution of habitat quality during this period based on InVEST 3.13.0, in order to determine the impact of land use change on habitat quality. The results show that habitat quality in the western part of Chongqing decreased significantly between 2010 and 2020, while the northeastern and southeastern parts remained stable. The main reason is the continuous expansion of urban construction land in the western area, which leads to serious habitat fragmentation and a continuous decline in habitat quality. In the northeastern and southeastern areas, by contrast, greater emphasis on ecological priority and urban-rural coordination in the development process meant that land use change was characterized by a benign transfer, maintaining the urbanization process while keeping habitat quality in coordinated development. This study can provide theoretical support for the sustainable development of mountain cities.
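
The habitat-quality comparison described above is essentially a cell-by-cell difference between two model outputs; a minimal sketch of that step, with hypothetical arrays standing in for the exported 2010 and 2020 InVEST habitat quality grids, might look like this:

```python
# Cell-wise change in habitat quality between two model outputs (values in [0, 1]).
# The file names and arrays below are hypothetical stand-ins for exported InVEST rasters.
import numpy as np

hq_2010 = np.load("habitat_quality_2010.npy")
hq_2020 = np.load("habitat_quality_2020.npy")

change = hq_2020 - hq_2010
valid = ~np.isnan(change)                        # NaN cells lie outside the study area

mean_change = change[valid].mean()
degraded_share = np.mean(change[valid] < -0.05)  # share of cells with a clear decline

print(f"mean habitat-quality change: {mean_change:+.3f}")
print(f"share of clearly degraded cells: {degraded_share:.1%}")
```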

Keywords: mountain cities, ecological environment, habitat quality, sustainable development

Procedia PDF Downloads 84
11945 Decreased Non-Communicable Disease by Surveillance, Control, Prevention Systems, and Community Engagement Process in Phayao, Thailand

Authors: Vichai Tienthavorn

Abstract:

Background: The number of patients with non-communicable diseases (NCDs), especially hypertension and diabetes, is increasing in Thailand; 3.7 million hypertension and diabetes patients were recorded in 2008. Health-related human behaviors have also changed extensively, and the Thai Government therefore has a policy to reduce NCDs. Primary care generally plays an important role in treatment through the medical process; nevertheless, the number of NCD patients has not decreased. Objectives: This study aims not only to reduce patient numbers and mortality rates but also to increase quality of life, to be applicable in different areas, and to be proposed as a national policy that is effective for long-term operation. Methods: Primary health care (PHC) was used as the first-line screening process to rapidly identify each person's risk. The screening tool of the study was Vichai's seven-color-balls model, a medical education tool for transferring knowledge from the student health team to the community through health volunteers, creating community engagement in terms of social participation. People in the community came to understand their own health and could evaluate their level of risk using this model. Results: In the project implemented in 2015 at Nong Lom Health Center in Phayao (target group aged 15-65 years; 2,529 people), hypertension screening coverage reached 99.01%; the risk group (light green) decreased in favor of the normal group (white), which grew from 1,806 to 1,893, and the number of severe patients (red) decreased significantly, from 10 to 5, moving to the moderate group (orange). A health program for behavior change based on the best practice of the 3Es (Eating, Exercise, Emotion) and the 3Rs (Reducing tobacco, alcohol, obesity) was applied to the risk group, while strict medication and investigation were encouraged for the severe patients (red). Conclusion: This is the first demonstration of knowledge transfer to community engagement by students, representing sustainable education in PHC.

Keywords: non-communicable disease, surveillance control and prevention systems, community engagement, primary health care

Procedia PDF Downloads 250
11944 Reduction in Hot Metal Silicon through Statistical Analysis at G-Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, Santanu Mallick, Abhiram Jha, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

The quality of hot metal at any blast furnace is judged by its silicon content. Lower hot metal silicon not only enhances process efficiency at the steel melting shops but also reduces hot metal costs. The hot metal produced at G Blast Furnace, Tata Steel Jamshedpur, has a significantly higher Si content than that of benchmark blast furnaces, mainly because the raw material quality is inferior to that used in the benchmark furnaces. With minimal control over raw material quality, the only option left for controlling hot metal Si is to optimize the furnace parameters. Therefore, in order to identify levers for reducing hot metal Si, data mining was carried out and multiple regression models were developed. The statistical analysis revealed that slag B3 {(CaO+MgO)/SiO2}, slag alumina and hot metal temperature are the key controllable parameters affecting hot metal silicon. Contour plots were used to determine the optimum ranges of the levers identified through the statistical analysis. A trial plan was formulated to operate the relevant parameters at G Blast Furnace within the identified ranges in order to reduce hot metal silicon. This paper details the process followed and the subsequent 15% reduction in hot metal silicon at G Blast Furnace.
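
The multiple regression step described above can be sketched as follows; the data frame here is synthetic and the fitted coefficients are purely illustrative, since the plant data are not given in the abstract:

```python
# Multiple linear regression of hot metal Si on slag B3, slag alumina and hot metal
# temperature, mirroring the statistical analysis described above. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "slag_b3": rng.uniform(1.0, 1.3, n),         # (CaO + MgO) / SiO2
    "slag_al2o3": rng.uniform(18.0, 24.0, n),    # %
    "hm_temp": rng.uniform(1480.0, 1520.0, n),   # deg C
})
# Hypothetical relationship: Si falls with B3, rises with alumina and temperature.
df["hm_si"] = (2.0 - 1.2 * df["slag_b3"] + 0.01 * df["slag_al2o3"]
               + 0.002 * (df["hm_temp"] - 1500.0) + rng.normal(0, 0.03, n))

X = sm.add_constant(df[["slag_b3", "slag_al2o3", "hm_temp"]])
model = sm.OLS(df["hm_si"], X).fit()
print(model.summary())   # coefficients and p-values identify the significant levers
```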

Keywords: blast furnace, optimization, silicon, statistical tools

Procedia PDF Downloads 223
11943 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known as a control tool for overcoming periodic disturbances in repetitive systems. The technique drives the error signal towards zero as the number of operations increases. The learning process within this context depends strongly on the initial input: if it is selected properly, learning is more effective than when the system starts blind. ILC uses previously recorded execution data to update the input of the following execution/trial so that a reference trajectory is followed to high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant at trial 1, so a good choice of initial input signal makes learning faster and, as a consequence, drives the error to zero sooner. In the work presented here, an upper limit based on singular values (SV) is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, for example a robot arm, is required to move. Simulation results are presented to illustrate the theory introduced in this paper.
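
The abstract does not reproduce the derivation, so the sketch below shows only one plausible way a singular-value-based ceiling on the trial-1 input could be computed for a lifted (matrix) plant description; the plant model and the specific bound used here are assumptions for illustration, not the paper's result:

```python
# Lifted ILC description y = G u: G is a lower-triangular Toeplitz matrix built from
# the plant's impulse response. As an assumption, a conservative ceiling on the
# trial-1 input norm is taken to be ||r|| / sigma_min(G), the largest input norm
# that exact tracking of the reference r could ever require.
import numpy as np

# Assumed discrete-time plant: first-order lag, impulse response over N samples.
N, a, b = 50, 0.9, 0.5
h = b * a ** np.arange(N)                 # impulse response g[k] = b * a^k

# Lifted system matrix (lower-triangular Toeplitz).
G = np.zeros((N, N))
for k in range(N):
    G += np.diag(np.full(N - k, h[k]), -k)

r = np.ones(N)                            # reference trajectory for the trial

sigma = np.linalg.svd(G, compute_uv=False)
u_norm_ceiling = np.linalg.norm(r) / sigma.min()

# The exact-tracking input and its norm, for comparison with the ceiling.
u_exact = np.linalg.solve(G, r)
print(f"||u_exact|| = {np.linalg.norm(u_exact):.2f} <= ceiling {u_norm_ceiling:.2f}")
```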

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 241
11942 Teacher’s Role in the Process of Identity Construction in Language Learners

Authors: Gaston Bacquet

Abstract:

The purpose of this research is to explore how language and culture shape a learner’s identity as they immerse themselves in the world of second language learning and how teachers can assist in the process of identity construction within a classroom setting. The study will be conducted as an in-classroom ethnography, using a qualitative methods approach and analyzing students’ experiences as language learners, their degree of investment, inclusion/exclusion, and attitudes, both towards themselves and their social context; the research question the study will attempt to answer is: What kind of pedagogical interventions are needed to help language learners in the process of identity construction so they can offset unequal conditions of power and gain further social inclusion? The following methods will be used for data collection: i) Questionnaires to investigate learners’ attitudes and feelings in different areas divided into four strands: themselves, their classroom, learning English and their social context. ii) Participant observations, conducted in a naturalistic manner. iii) Journals, which will be used in two different ways: on the one hand, learners will keep semi-structured, solicited diaries to record specific events as requested by the researcher (event-contingent). On the other, the researcher will keep his journal to maintain a record of events and situations as they happen to reduce the risk of inaccuracies. iv) Person-centered interviews, which will be conducted at the end of the study to unearth data that might have been occluded or be unclear from the methods above. The interviews will aim at gaining further data on experiences, behaviors, values, opinions, feelings, knowledge and sensory, background and demographic information. This research seeks to understand issues of socio-cultural identities and thus make a significant contribution to knowledge in this area by investigating the type of pedagogical interventions needed to assist language learners in the process of identity construction to achieve further social inclusion. It will also have applied relevance for those working with diverse student groups, especially taking our present social context into consideration: we live in a highly mobile world, with migrants relocating to wealthier, more developed countries that pose their own particular set of challenges for these communities. This point is relevant because an individual’s insight and understanding of their own identity shape their relationship with the world and their ability to continue constructing this relationship. At the same time, because a relationship is influenced by power, the goal of this study is to help learners feel and become more empowered by increasing their linguistic capital, which we hope might result in a greater ability to integrate themselves socially. Exactly how this help will be provided will vary as data is unearthed through questionnaires, focus groups and the actual participant observations being carried out.

Keywords: identity construction, second-language learning, investment, second-language culture, social inclusion

Procedia PDF Downloads 103
11941 Extractive Bioconversion of Polyhydroxyalkanoates (PHAs) from Ralstonia Eutropha Via Aqueous Two-Phase System-An Integrated Approach

Authors: Y. K. Leong, J. C. W. Lan, H. S. Loh, P. L. Show

Abstract:

Being biodegradable, non-toxic and renewable, and having properties similar to or better than those of commercial plastics, polyhydroxyalkanoates (PHAs) can be a potential game changer in the polymer industry. PHAs are biodegradable polymers produced by bacteria and are of interest as a sustainable alternative to petrochemical-derived plastics; however, their commercial uptake has been significantly limited by the high cost of PHA production and recovery. An aqueous two-phase system (ATPS) offers two different chemical and physical environments and, containing about 80-90% water, provides an excellent medium for partitioning cells, cell organelles and biologically active substances. Extractive bioconversion via ATPS allows the integration of the PHA upstream fermentation and downstream purification processes, which reduces the number of production steps and the processing time and thus leads to cost reduction. The ability of Ralstonia eutropha to grow under different ATPS conditions was investigated to assess its potential for use in such a bioconversion system. Changes in tie-line length (TLL) and volume ratio (Vr) were shown to affect the PHA partition coefficient. A high PHA recovery yield of 65%, with a relatively high purity of 73%, was obtained in a PEG 6000/sodium sulphate system at 42.6% (wt/wt) TLL and a Vr of 1.25. Extractive bioconversion via ATPS is an attractive approach for combining PHA production and recovery.
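
The partition coefficient, recovery yield and purity quoted above are simple ratios of phase concentrations and masses; a small sketch with hypothetical phase-composition numbers (not the study's measurements) shows how such figures are computed:

```python
# Partition coefficient, recovery yield and purity for an ATPS extraction.
# All concentrations, volumes and masses below are hypothetical example values.

c_top, v_top = 5.2, 18.0        # PHA in top (PEG-rich) phase: g/L, mL
c_bottom, v_bottom = 1.4, 20.0  # PHA in bottom (salt-rich) phase: g/L, mL
total_solids_top = 0.128        # total dry mass recovered from the top phase, g

k_partition = c_top / c_bottom

pha_top = c_top * v_top / 1000.0            # g of PHA in the top phase
pha_bottom = c_bottom * v_bottom / 1000.0
recovery_yield = pha_top / (pha_top + pha_bottom)
purity = pha_top / total_solids_top

print(f"K = {k_partition:.2f}, yield = {recovery_yield:.1%}, purity = {purity:.1%}")
```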

Keywords: aqueous two-phase system, extractive bioconversion, polyhydroxy alkanoates, purification

Procedia PDF Downloads 310
11940 Towards Computational Fluid Dynamics Based Methodology to Accelerate Bioprocess Scale Up and Scale Down

Authors: Vishal Kumar Singh

Abstract:

Bioprocess development is a time-constrained activity aimed at harnessing the full potential of culture performance in an environment that is not natural to the cells. Even with chemically defined media and feeds, a significant amount of time is devoted to identifying suitable operating parameters. In addition, the scale-up of these processes is often accompanied by a loss of antibody titer and product quality, which further delays commercialization of the drug product. In such a scenario, the disparity in culture performance is usually investigated through further experimentation at a smaller scale that is representative of the at-scale production bioreactors, and developing these scale-down models is also time-intensive. In this study, a computational fluid dynamics based, multi-objective scaling approach is illustrated to speed up process transfer. To implement this approach, a transient multiphase water-air system was studied in Ansys CFX to visualize the air bubble distribution and the volumetric mass transfer coefficient (kLa) profiles, followed by a design-of-experiments based parametric optimization to define the operational space. The proposed approach is entirely in silico and requires minimal experimentation, thereby giving high throughput to overall process development.
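
The design-of-experiments step that follows the CFD runs can be sketched as below; the factors (agitation speed, gas flow rate), their levels and the quadratic response-surface form are assumptions for illustration, with a synthetic kLa response standing in for the CFD results:

```python
# Full-factorial DoE over two scale-up factors with a quadratic response-surface fit
# for kLa. Factor choices, levels and the synthetic response are assumptions.
import itertools
import numpy as np

rpm_levels = [100.0, 200.0, 300.0]        # agitation speed
vvm_levels = [0.2, 0.5, 1.0]              # gas flow rate (vessel volumes per minute)
design = np.array(list(itertools.product(rpm_levels, vvm_levels)))

# Stand-in for per-run CFD results: kLa rises with both factors, plus noise.
rng = np.random.default_rng(2)
kla = 0.002 * design[:, 0] + 15.0 * design[:, 1] + rng.normal(0, 0.5, len(design))

# Quadratic response surface: kLa ~ 1 + x1 + x2 + x1*x2 + x1^2 + x2^2.
x1, x2 = design[:, 0], design[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coeffs, *_ = np.linalg.lstsq(A, kla, rcond=None)

# Predict kLa at an untested operating point inside the explored space.
p1, p2 = 250.0, 0.8
pred = coeffs @ np.array([1.0, p1, p2, p1 * p2, p1**2, p2**2])
print(f"predicted kLa at {p1:.0f} rpm, {p2:.1f} vvm: {pred:.2f} 1/h")
```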

Keywords: bioprocess development, scale up, scale down, computation fluid dynamics, multi-objective, Ansys CFX, design of experiment

Procedia PDF Downloads 82
11939 Analytical Formulae for the Approach Velocity Head Coefficient

Authors: Abdulrahman Abdulrahman

Abstract:

Critical-depth meters, such as a broad-crested weir, a Venturi flume, or a combined control flume, are standard devices for measuring flow in open channels. The discharge relation for these devices cannot be solved directly; it requires an iterative process to account for the approach velocity head. In this paper, an analytical solution is developed for calculating the discharge through a combined critical-depth meter, namely a hump combined with a lateral contraction in a rectangular channel with subcritical approach flow, including energy losses. Analytical formulae are also derived for the approach velocity head coefficient for different types of critical-depth meters. The solution is obtained by solving a standard cubic equation, accounting for energy loss, on the basis of a trigonometric identity. The advantage of this technique is that it avoids the iterative process normally adopted when measuring flow with these devices. Numerical examples are given to demonstrate the proposed solution.
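
The abstract does not state the specific cubic that arises from the energy equation, but the trigonometric identity referred to is commonly the standard Viete solution of a depressed cubic, which for reference reads as follows (three-real-roots case; the hydraulic variables themselves are not given here):

```latex
% Trigonometric (Viete) solution of t^3 + p t + q = 0 when 4p^3 + 27q^2 < 0,
% which avoids iteration; the physically admissible root is selected afterwards.
t_k \;=\; 2\sqrt{-\tfrac{p}{3}}\;
      \cos\!\left[\tfrac{1}{3}\arccos\!\left(\tfrac{3q}{2p}\sqrt{-\tfrac{3}{p}}\right)
      \;-\;\tfrac{2\pi k}{3}\right],
      \qquad k = 0,\,1,\,2 .
```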

Keywords: broad crested weir, combined control meter, control structures, critical flow, discharge measurement, flow control, hydraulic engineering, hydraulic structures, open channel flow

Procedia PDF Downloads 274
11938 Investigating the Application of Composting for Phosphorous Recovery from Alum Precipitated and Ferric Precipitated Sludge

Authors: Saba Vahedi, Qiuyan Yuan

Abstract:

The vast majority of small municipalities and First Nations communities in Manitoba operate facultative or aerated lagoons for wastewater treatment, and most of them use ferric chloride (FeCl3) or alum (usually in the form of Al2(SO4)3·18H2O) as the coagulant for phosphorus removal. The insoluble particles that form during coagulation result in a massive volume of sludge, which is typically left in the lagoons; phosphorus, a valuable nutrient, is therefore lost in the process. In this project, the recovery of phosphorus from the sludge produced during phosphorus removal in wastewater lagoons by means of a controlled composting process is investigated. Objective: The main objective of this project is to compost the alum-precipitated sludge produced during phosphorus removal in Manitoba wastewater treatment lagoons, with the ultimate goal of obtaining a product that meets the characteristics of Class A biosolids in Canada. A number of parameters are evaluated, including the bioavailability of nutrients in the composted sludge and the toxicity of the sludge, and the bioavailability of phosphorus in the final compost is compared with that of a commercial fertilizer (monoammonium phosphate, MAP) when the compost is used as a phosphorus source. Experimental setup: Three batches of compost piles were run using the alum sludge and the ferric sludge. The alum phosphate sludge was collected from an innovative phosphorus removal system at the RM of Taché, and the collected sludge was sent to the ALS laboratory for analysis of the C/N ratio, TP, TN, TC, total Al, moisture content, pH and metal concentrations. Wood chips used as the bulking agent were collected at the RM of Taché landfill. The sludge in the three piles was mixed with 3× dry wood chips, the mixture was turned manually every week, and the temperature, moisture content and pH were monitored twice a week. The temperature of the mixtures remained above 55 °C for two weeks, and each pile was kept for ten weeks to mature. The final products were applied to two different plants to investigate both the bioavailability of P in the compost and the toxicity of the product. The two plant species were selected for their sensitivity, growth time and compatibility with the Manitoba climate: canola and switchgrass. The pots are weighed and watered every day to replenish moisture lost by evapotranspiration, and a control experiment is conducted using topsoil and a chemical fertilizer (MAP). The experiment is carried out in a growth room maintained at a day/night temperature regime of 25/15 °C, a relative humidity of 60% and a corresponding photoperiod of 16 h. A total of three cropping (seeding-to-harvest) cycles are to be completed, each lasting 50 days, and the harvested biomass is weighed and oven-dried for 72 h at 60 °C. In the first growth cycle, the canola and switchgrass grown in the alum sludge compost were harvested at day 50, oven-dried, chopped and finely ground in a mill grinder (< 0.2 mm), and digested using the wet oxidation method, in which plant tissue samples are digested with H2SO4 (99.7%) and H2O2 (30%) in an acid block digester. The digested plant samples will be analyzed to measure the total phosphorus content.

Keywords: wastewater treatment, phosphorus removal, composting alum sludge, bioavailability of phosphorus

Procedia PDF Downloads 71
11937 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance

Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty

Abstract:

One of the most important tasks in operating desalination plants based on reverse osmosis (RO) is preventing RO membrane fouling caused by the foulants found in seawater. Optimal design of the pre-treatment process in RO plants enables the reduction of foulants; therefore, a quantitative evaluation of the fouling risk of the pre-treated water that is fed to the RO stage is required for optimal design. Water quality measures such as the silt density index (SDI) and total organic carbon (TOC) have conventionally been applied for such evaluations, but these methods are not always effective for evaluating the fouling risk of RO feed water. Furthermore, if a suitable method could be applied in an inline monitoring system for the fouling risk of RO feed water, stable plant management would become possible through alerts and appropriate control of the pre-treatment process. The purpose of this study is to develop a method for evaluating the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) to measure the amount of foulants in seawater, using a sensor whose surface is coated with a polyamide thin film, the main material of an RO membrane. The increase in the mass of the sensor after the sample water has passed over it for a given length of time directly indicates the fouling risk of the sample; we call these values the fouling potential (FP). The method measures very small amounts of substances in seawater in a short time (< 2 h) and from a small volume of sample water (< 50 mL). In laboratory-scale tests using RO cell filtration units, the FP obtained with this method showed a higher correlation with the pressure increase caused by RO fouling than either SDI or TOC. Then, to establish this correlation on an actual bench-scale RO membrane module and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we started a long-term test at an experimental desalination site on the Red Sea in Jeddah, Kingdom of Saudi Arabia. Inline equipment implementing the method made it possible to measure FP intermittently (four times per day) and automatically. Moreover, over two 3-month operations, the RO operating pressure was compared among feed water samples of different qualities. A pressure increase through the RO membrane module was observed in the high-FP RO unit, in which the feed water was treated only by a cartridge filter, whereas no pressure increase was observed in the low-FP RO unit, in which the feed water was treated by an ultrafilter. The correlation was thus established on an actual-scale RO membrane for two runs with two types of feed water, and the results suggest that the FP method enables the evaluation of the fouling risk of RO feed water.
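
The abstract defines FP through the mass gained by the coated crystal over a fixed sampling period; in QCM practice that mass is usually obtained from the resonance frequency shift via the Sauerbrey relation, so a sketch of the calculation, with an assumed 5 MHz crystal sensitivity and hypothetical readings rather than the study's calibration, could look like this:

```python
# Fouling-potential-style calculation from a QCM frequency shift.
# Crystal sensitivity and readings are assumed example values.

C_F = 17.7          # ng / (cm^2 * Hz), typical Sauerbrey constant for a 5 MHz AT-cut crystal
sensor_area = 1.0   # cm^2 of coated electrode (assumed)

f_start_hz = 4_999_850.0     # resonance frequency before the sample passes
f_end_hz = 4_999_710.0       # resonance frequency after the sampling period
sample_time_h = 2.0
sample_volume_ml = 50.0

delta_f = f_end_hz - f_start_hz                 # negative shift = mass uptake
mass_ng = -C_F * delta_f * sensor_area          # Sauerbrey: delta_m = -C_F * delta_f

# One possible FP definition: foulant mass accumulated per unit sample volume.
fp = mass_ng / sample_volume_ml
print(f"mass uptake: {mass_ng:.0f} ng over {sample_time_h:.0f} h, FP ~ {fp:.1f} ng/mL")
```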

Keywords: fouling, monitoring, QCM, water quality

Procedia PDF Downloads 212
11936 Developing Integrated Model for Building Design and Evacuation Planning

Authors: Hao-Hsi Tseng, Hsin-Yun Lee

Abstract:

In the building design process, designers have to complete the spatial design and consider evacuation performance at the same time. It is usually difficult to combine these two planning processes, which creates a gap between spatial design and evacuation performance, so designers cannot arrive at an integrated, optimal design solution. In addition, the evacuation routing models proposed by previous researchers differ from the practical evacuation decisions made in the field. On the other hand, more and more building design projects are executed with Building Information Modeling (BIM), in which the design content is organized in an object-oriented framework, so integrating BIM with evacuation simulation can make a significant contribution for designers. This research therefore establishes a model that integrates spatial design and evacuation planning. The proposed model supports spatial design modifications and optimizes evacuation planning, allowing designers to complete the integrated design solution within BIM. The research also improves the evacuation routing method to make the simulation results more practical. The proposed model will be applied to a building design project for evaluation and validation, where it will provide near-optimal design suggestions. By applying the proposed model, the integration and efficiency of the design process are improved, the evacuation plan becomes more useful, and the quality of the building spatial design is better.

Keywords: building information modeling, evacuation, design, floor plan

Procedia PDF Downloads 456
11935 The Effects of Drying Technology on Rehydration Time and Quality of Mung Bean Vermicelli

Authors: N. P. Tien, S. Songsermpong, T. H. Quan

Abstract:

Mung bean vermicelli is a popular food in Asian countries and is made from mung bean starch. The preparation process involves several steps, including drying, which affects the structure and quality of the vermicelli. This study aims to examine the effects of different drying technologies on the rehydration time and quality of mung bean vermicelli. Three drying technologies, namely hot air drying, microwave continuous drying, and microwave vacuum drying, were used for the drying process. The vermicelli strands were dried at 45°C for 12h in a hot air dryer, at 70 Hz of conveyor belt speed inverter in a microwave continuous dryer, and at 30 W.g⁻¹ of microwave power density in a microwave vacuum dryer. The results showed that mung bean vermicelli dried using hot air drying had the longest rehydration time of 12.69 minutes. On the other hand, vermicelli dried through microwave continuous drying and microwave vacuum drying had shorter rehydration times of 2.79 minutes and 2.14 minutes, respectively. Microwave vacuum drying also resulted in larger porosity, higher water absorption, and cooking loss. The tensile strength and elasticity of vermicelli dried using hot air drying were higher compared to microwave drying technologies. The sensory evaluation did not reveal significant differences in most attributes among the vermicelli treatments. Overall, microwave drying technology proved to be effective in reducing rehydration time and producing good-quality mung bean vermicelli.

Keywords: mung bean vermicelli, drying, hot air, microwave continuous, microwave vacuum

Procedia PDF Downloads 79
11934 From Theory to Practice: An Iterative Design Process in Implementing English Medium Instruction in Higher Education

Authors: Linda Weinberg, Miriam Symon

Abstract:

While few institutions of higher education in Israel offer international programs taught entirely in English, many Israeli students today can study at least one content course taught in English during their degree program. In particular, with the growth of international partnerships and opportunities for student mobility, English medium instruction is a growing phenomenon. There are however no official guidelines in Israel for how to develop and implement content courses in English and no training to help lecturers prepare for teaching their materials in a foreign language. Furthermore, the implications for the students and the nature of the courses themselves have not been sufficiently considered. In addition, the institution must have lecturers who are able to teach these courses effectively in English. An international project funded by the European Union addresses these issues and a set of guidelines which provide guidance for lecturers in adapting their courses for delivery in English have been developed. A train-the-trainer approach is adopted in order to cascade knowledge and experience in English medium instruction from experts to language teachers and on to content teachers thus maximizing the scope of professional development. To accompany training, a model English medium course has been created which serves the dual purpose of highlighting alternatives to the frontal lecture while integrating language learning objectives with content goals. This course can also be used as a standalone content course. The development of the guidelines and of the course utilized backwards, forwards and central design in an iterative process. The goals for combined language and content outcomes were identified first after which a suitable framework for achieving these goals was constructed. The assessment procedures evolved through collaboration between content and language specialists and subsequently were put into action during a piloting phase. Feedback from the piloting teachers and from the students highlight the need for clear channels of communication to encourage frank and honest discussion of expectations versus reality. While much of what goes on in the English medium classroom requires no better teaching skills than are required in any classroom, the understanding of students' abilities in achieving reasonable learning outcomes in a foreign language must be rationalized and accommodated within the course design. Concomitantly, preparatory language classes for students must be able to adapt to prepare students for specific language and cognitive skills and activities that courses conducted in English require. This paper presents findings from the implementation of a purpose-designed English medium instruction course arrived at through an iterative backwards, forwards and central design process utilizing feedback from students and lecturers alike leading to suggested guidelines for English medium instruction in higher education.

Keywords: English medium instruction, higher education, iterative design process, train-the-trainer

Procedia PDF Downloads 300
11933 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data

Authors: Ruchika Malhotra, Megha Khanna

Abstract:

The development of change prediction models can help software practitioners plan testing and inspection resources at the early phases of software development. However, a major challenge during the training of any classification model is the imbalanced nature of software quality data. A dataset with very few instances of the minority outcome categories leads to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of classes may be change-prone whereas the majority may be non-change-prone. This study explores various alternatives for handling imbalanced software quality data adeptly, using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics when dealing with imbalanced data. In order to validate the different alternatives empirically, the study uses change data from three application packages of an open-source Android dataset and evaluates the performance of six different machine learning techniques. The results indicate extensive improvement in the performance of the classification models when a resampling method and robust performance measures are used.
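
The abstract names neither the exact sampling method nor the learners, so the following is only a generic sketch of the resampling-plus-robust-metrics idea, assuming SMOTE oversampling, a random forest learner and a synthetic imbalanced dataset in place of the Android change data:

```python
# Resampling an imbalanced change-proneness dataset and evaluating with metrics
# that are robust to imbalance. SMOTE, the learner and the data are assumptions.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for OO-metrics features with ~10% change-prone classes.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split so the test distribution stays realistic.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
prob = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)

# AUC and per-class recall say more than plain accuracy on imbalanced data.
print(f"ROC AUC: {roc_auc_score(y_te, prob):.3f}")
print(f"recall (change-prone): {recall_score(y_te, pred):.3f}")
print(f"recall (not change-prone): {recall_score(y_te, pred, pos_label=0):.3f}")
```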

Keywords: change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics

Procedia PDF Downloads 418
11932 Hydrogen Embrittlement Properties of the Hot Stamped Carbon Steels

Authors: Mitsuhiro Okayasu, Lele Yang, Koji Shimotsu

Abstract:

The effects of microstructural characteristics on the mechanical and hydrogen embrittlement properties of a 1,800 MPa grade hot stamping carbon steel were investigated experimentally. The tensile strength increased with increasing hot stamping temperature up to about 921°C, but decreased at temperatures above 921°C owing to the growth of the lath martensite and the prior austenite grains. The hot stamping process introduced a slight internal strain in the sample, which led to a slight increase in hardness, although no clear change in the microstructure was detected. The severity of hydrogen embrittlement was investigated using the hot stamped carbon steels after immersion in hydrogen gas, and it was directly attributed to the infiltration of hydrogen into the grain boundaries. The high-strength carbon steel with a fine lath martensite microstructure suffered severe hydrogen embrittlement, as hydrogen penetrated strongly into its grain boundaries during a month in the hydrogen gas. Because the embrittlement of the as-received carbon steel (ferrite and pearlite) was weak, the hydrogen embrittlement is attributed to the high internal strain and high dislocation density; that is, the hydrogen embrittlement of the carbon steel is governed by the amount of hydrogen absorbed at the grain boundaries, which in turn is controlled by the dislocation density and internal strain.

Keywords: hydrogen embrittlement, hot stamping process, carbon steel, mechanical property

Procedia PDF Downloads 201