Search results for: operating costs
770 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids
Authors: S. Gariani, I. Shyha
Abstract:
Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation caused by the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants due to a combination of biodegradability, good lubricity, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design involving 24 tests was employed to evaluate the influence of process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid presented performance comparable with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds. All tool tips were subjected to uniform flank wear during the whole of the cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when higher cutting speeds were used.
Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions
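The study above builds its conclusions on a 24-run full factorial design. Below is a minimal sketch of how such a test matrix and a first-pass summary of measured Ra values can be generated; the factor names and levels are hypothetical placeholders, since the actual factors and levels of the 24-run design are not given in the abstract.

```python
# Minimal sketch of a full factorial test matrix for a turning study such as the one
# above. The factor names and levels below are hypothetical placeholders -- the
# paper's actual 24-run design is not specified in the abstract.
from itertools import product

factors = {
    "cutting_fluid": ["Vasco1000", "VO-A", "VO-B"],        # assumed levels
    "tool": ["uncoated coarse-grain WC", "coated WC"],      # assumed levels
    "cutting_speed_m_min": [60, 90, 120, 150],              # assumed levels
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(f"Full factorial design: {len(runs)} runs")  # 3 x 2 x 4 = 24

# After measuring Ra (um) for each run, a per-factor average is a first-pass summary.
def mean_ra_by(runs, ra_results, key):
    """Group measured Ra values by one factor level and average them."""
    groups = {}
    for run, ra in zip(runs, ra_results):
        groups.setdefault(run[key], []).append(ra)
    return {level: sum(v) / len(v) for level, v in groups.items()}
```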
Procedia PDF Downloads 279
769 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator
Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya
Abstract:
Bhabha Atomic Research Centre, Trombay is developing a low energy high intensity proton accelerator (LEHIPA) as the pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as the major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides a longitudinal E-field for the acceleration of charged particles. The RF design of the drift tubes of the DTL was carried out to maximize the shunt impedance; this demands that the diameter of the drift tubes (DTs) be as low as possible. The width of the DT is, however, determined by the particle β and a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets due to the relatively high focusing strength of the former over the latter. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles rather than electromagnetic quadrupoles (EMQs). This provides another advantage, as joule heating is avoided, which would otherwise have added thermal load in the continuous-cycle accelerator. The beam dynamics requires the uniformity of the integral magnetic gradient to be better than ±0.5% with a nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare earth permanent magnets. The paper discusses the results of five pre-series prototype fabrications, the qualification of these prototype permanent magnet quadrupoles, and a full-scale DT developed with embedded PMQs. The paper discusses the magnetic pole design for optimizing integral Gdl uniformity and the value of higher order multipoles. A novel but simple method of tuning the integral Gdl is discussed.
Keywords: DTL, focusing, PMQ, proton, rare earth magnets
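The acceptance criterion quoted above (integral gradient uniform to within ±0.5% of a 2.05 T nominal value) reduces to a simple percentage-deviation check per quadrupole. A minimal sketch follows; the measured values are invented for illustration, and only the nominal value and the tolerance come from the abstract.

```python
# Quick check of the +/-0.5% integral-gradient uniformity criterion quoted above.
# The sample integrated-gradient values (tesla) are hypothetical; only the nominal
# value of 2.05 T and the tolerance come from the abstract.
NOMINAL_GDL_T = 2.05
TOLERANCE = 0.005  # +/-0.5%

measured_gdl = [2.052, 2.047, 2.058, 2.044, 2.051]  # five prototype PMQs (assumed)

for i, gdl in enumerate(measured_gdl, start=1):
    deviation = (gdl - NOMINAL_GDL_T) / NOMINAL_GDL_T
    status = "OK" if abs(deviation) <= TOLERANCE else "OUT OF TOLERANCE"
    print(f"PMQ {i}: Gdl = {gdl:.3f} T, deviation = {deviation:+.3%} -> {status}")
```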
Procedia PDF Downloads 472
768 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints
Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno
Abstract:
Around the world, many countries have committed to the decarbonization of their electricity systems. Under this global drive, converter-interfaced generators (CIG) such as wind and photovoltaic generation appear as cornerstones to achieve these energy targets. Despite its benefits, the increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are the limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Along with the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is battery energy storage systems (BESS). Because of the flexibility that BESS provides in power system operation, its integration allows for mitigating the variability and uncertainty of renewable energies, thus optimizing the use of existing assets and reducing operational costs. Another characteristic of BESS is that they can also support power system stability by injecting reactive power during faults, providing short-circuit currents, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of battery energy storage systems (BESS) in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints to allocate BESS, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration. The methodology is validated on the reduced Chilean electrical system. The results show that integrating BESS into a power system with high levels of CIG using stability criteria contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper potentially lays the foundation for understanding the benefits of integrating BESS in electrical power systems and coordinating their placement in future converter-dominated power systems.
Keywords: battery energy storage, power system stability, system strength, weak power system
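As an illustration of what an allocation model with stability (strength) constraints can look like in its simplest continuous form, the sketch below minimizes BESS cost subject to each weak bus receiving a minimum strength contribution. It is not the paper's model; every coefficient is a hypothetical placeholder.

```python
# Illustrative (not the paper's) allocation sketch: size BESS at candidate buses to
# minimize cost while each weak bus meets a minimum "strength" contribution.
# All coefficients are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

cost_per_mw = np.array([50.0, 45.0, 60.0])       # annualized cost per MW at each candidate bus (assumed)
# strength_gain[i, j]: strength contribution at weak bus i per MW of BESS at candidate bus j (assumed)
strength_gain = np.array([[0.8, 0.3, 0.1],
                          [0.2, 0.9, 0.4]])
required_strength = np.array([40.0, 55.0])       # minimum required at each weak bus (assumed)

# linprog minimizes c @ x subject to A_ub @ x <= b_ub; flip signs for >= constraints.
res = linprog(c=cost_per_mw,
              A_ub=-strength_gain, b_ub=-required_strength,
              bounds=[(0, 100)] * 3, method="highs")

print("BESS MW per candidate bus:", np.round(res.x, 1))
print("Total cost:", round(res.fun, 1))
```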
Procedia PDF Downloads 61
767 Batch and Dynamic Investigations on Magnesium Separation by Ion Exchange Adsorption: Performance and Cost Evaluation
Authors: Mohamed H. Sorour, Hayam F. Shaalan, Heba A. Hani, Eman S. Sayed
Abstract:
Ion exchange adsorption has a long-standing history of success for seawater softening and selective ion removal from saline sources. Strong, weak and mixed-type ion exchange systems can be designed and optimized for a target separation. In this paper, different types of adsorbents comprising zeolite 13X and kaolin, in addition to polyacrylate/zeolite (AZ), polyacrylate/kaolin (AK) and stand-alone polyacrylate (A) hydrogel types, were prepared via microwave (M) and ultrasonic (U) irradiation techniques. They were characterized using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). The developed adsorbents were evaluated at bench scale and, based on the assessment results, a composite bed was formulated for performance evaluation in pilot-scale column investigations. Owing to the hydrogel nature of the partially crosslinked polyacrylate, the developed adsorbents manifested a swelling capacity of about 50 g/g. The pilot trials were carried out using magnesium-enriched Red Sea water to simulate Red Sea desalination brine. Batch studies indicated varying uptake efficiencies, where Mg adsorption decreases across the prepared hydrogel types in the order AU>AM>AKM>AKU>AZM>AZU, being 108, 107, 78, 69, 66 and 63 mg/g, respectively. The composite bed adsorbent tested in up-flow mode column studies indicated good performance for Mg uptake. For an operating cycle of 12 h, the maximum uptake during the loading cycle approached 92.5-100 mg/g, which is comparable to the performance of some commercial resins. Different regenerants, including 15% NaCl, 0.1 M HCl and sodium carbonate, were explored to maximize regeneration while minimizing the quantity of regenerant. The best results were obtained with an acidified sodium chloride solution. In conclusion, the developed cation exchange adsorbents comprising clay or zeolite supports indicated adequate performance for Mg recovery in a saline environment. A column design operated in the up-flow mode (approaching an expanded bed) is appropriate for this type of separation. Preliminary cost indicators for Mg recovery via ion exchange have been developed and analyzed.
Keywords: batch and dynamic magnesium separation, seawater, polyacrylate hydrogel, cost evaluation
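The mg/g uptake figures quoted above come from the standard batch mass balance q = (C0 − Ce)·V/m. A minimal sketch of that calculation is shown below; the concentration, volume and mass values in the example are assumed, not taken from the study.

```python
# Minimal sketch of the standard batch uptake calculation behind the mg/g figures
# quoted above: q = (C0 - Ce) * V / m. The example values are hypothetical.
def uptake_mg_per_g(c0_mg_l: float, ce_mg_l: float, volume_l: float, mass_g: float) -> float:
    """Adsorption capacity q (mg adsorbate per g adsorbent) from a batch test."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# Example: 1 g of hydrogel in 0.1 L of Mg-enriched water, 1500 -> 420 mg/L (assumed values)
q = uptake_mg_per_g(c0_mg_l=1500, ce_mg_l=420, volume_l=0.1, mass_g=1.0)
print(f"q = {q:.0f} mg/g")  # 108 mg/g, of the same order as the AU hydrogel reported above
```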
Procedia PDF Downloads 135
766 Positivity Rate of Person under Surveillance among Institut Jantung Negara’s Patients with Various Vaccination Statuses in the First Quarter of 2022, Malaysia
Authors: Mohd Izzat Md. Nor, Norfazlina Jaffar, Noor Zaitulakma Md. Zain, Nur Izyanti Mohd Suppian, Subhashini Balakrishnan, Geetha Kandavello
Abstract:
During the Coronavirus (COVID-19) pandemic, Malaysia has been focusing on building herd immunity by introducing vaccination programs into the community. Hospital Standard Operating Procedures (SOP) were developed to prevent inpatient transmission. Objective: In this study, we focus on the positivity rate of inpatient Persons Under Surveillance (PUS) becoming COVID-19 positive, compare this to the national rate, and examine the outcomes of patients who become COVID-19 positive in relation to their vaccination status. Methodology: This is a retrospective observational study carried out from 1 January until 30 March 2022 in Institut Jantung Negara (IJN). There were 5,255 patients admitted during the time of this study. A pre-admission Polymerase Chain Reaction (PCR) swab was done for all patients. Patients with a positive PCR on pre-admission screening were excluded. Patients who had exposure to COVID-19-positive staff or patients during hospitalization were defined as PUS and were quarantined and monitored for potential COVID-19 infection. Their frequency and risk of exposure (WHO definition) were recorded. A repeat PCR swab was done for PUS patients who showed clinical deterioration, with or without COVID symptoms, and on their last day of quarantine. The severity of COVID-19 infection was defined as category 1-5A. All patients' vaccination status was recorded, and they were divided into three groups: fully immunised, partially immunised, and unvaccinated. We analyzed the positivity rate of PUS patients becoming COVID-19 positive, their outcomes, and the correlation with vaccination status. Result: The total number of inpatient PUS exposed to positive patients and staff was 492; only 13 became positive, giving a positivity rate of 2.6%. Eight (62%) had multiple exposures. The majority, 8/13 (72.7%), had a high-risk exposure, and the remaining 5 had medium-risk exposure. Four (30.8%) were boostered, 7 (53.8%) were fully vaccinated, and 2 (15.4%) were partially vaccinated or unvaccinated. Eight patients were in categories 1-2, whilst 38% were in categories 3-5. Vaccination status did not correlate with COVID-19 category (P=0.641). One (7.7%) patient died due to COVID-19 complications and sepsis. Conclusion: Within the first quarter of 2022, our institution's positivity rate (2.6%) was significantly lower than the country's (14.4%). High-risk exposure and multiple exposures to positive COVID-19 cases increased the risk of PUS becoming COVID-19 positive regardless of their underlying vaccination status.
Keywords: COVID-19, boostered, high risk, Malaysia, quarantine, vaccination status
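The headline comparison above is simple arithmetic (13 positives among 492 exposed PUS versus a 14.4% national rate). The sketch below reproduces that calculation and adds a binomial test as one plausible way to formalize the comparison; the choice of test is ours, not stated in the abstract.

```python
# Arithmetic behind the headline figures: 13 positive cases among 492 exposed PUS,
# compared with the national rate quoted in the abstract. The binomial test is a
# plausible formalization added here, not necessarily the authors' method.
from scipy.stats import binomtest

positive, exposed = 13, 492
institution_rate = positive / exposed
national_rate = 0.144

print(f"Institutional positivity rate: {institution_rate:.1%}")   # ~2.6%
print(f"National positivity rate:      {national_rate:.1%}")

result = binomtest(positive, exposed, p=national_rate, alternative="two-sided")
print(f"Binomial test p-value: {result.pvalue:.2e}")
```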
Procedia PDF Downloads 88
765 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the breeding abilities of livestock based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To efficiently use haplotypes in genomic prediction, optimal ways to define haplotypes need to be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2,506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average of 5, 10, 20 or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were not many differences in reliability between the different haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction. When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the smallest number of alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables was decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
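For readers unfamiliar with GBLUP, the sketch below shows the core of the method on simulated marker (or haplotype-allele) dosages: a genomic relationship matrix is built from the dosages and breeding values are predicted as g_hat = G (G + lambda*I)^-1 (y - y_mean). The data, variance ratio and scaling are invented; this is not the Hanwoo analysis.

```python
# Sketch of the GBLUP idea referred to above, using simulated marker (or haplotype-
# allele) dosages rather than the Hanwoo data. Scaling of G is simplified and the
# variance ratio is assumed, so this is an illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_markers = 200, 500
M = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)  # 0/1/2 allele dosages

# Genomic relationship matrix (VanRaden-style centering; scaling simplified here).
W = M - M.mean(axis=0)
G = W @ W.T / n_markers

true_effects = rng.normal(0.0, 0.05, n_markers)
y = M @ true_effects + rng.normal(0.0, 1.0, n_animals)   # simulated carcass weight

lam = 1.0                                                # residual-to-genetic variance ratio (assumed)
g_hat = G @ np.linalg.solve(G + lam * np.eye(n_animals), y - y.mean())

reliability_proxy = np.corrcoef(g_hat, M @ true_effects)[0, 1] ** 2
print(f"Squared correlation with simulated true genetic values: {reliability_proxy:.2f}")
```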
Procedia PDF Downloads 141
764 Results of Three-Year Operation of 220kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid
Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov
Abstract:
Modern city electrical grids are forced to increase their density due to the increasing number of customers and the requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of equipment: the breaking capacity of circuit breakers and the thermal and dynamic current withstand capabilities of disconnectors, cables, and transformers. A superconducting fault current limiter (SFCL) is a modern solution designed to deal with the increasing fault current levels in power grids. The key feature of this device is its instant (less than 2 ms) limitation of the current level due to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. It became the first SFCL in the Russian energy system and is currently the most powerful SFCL in the world. Modern SFCLs use second-generation high-temperature superconductor (2G HTS). Despite its name, HTS still requires the low temperature of liquid nitrogen for operation. As a result, the Moscow SFCL is built with a cryogenic system to provide cooling to the superconductor. The cryogenic system consists of three cryostats that contain the superconductor parts and are filled with liquid nitrogen (three phases), three cryocoolers, one water chiller, three cryopumps, and pressure builders. All these components are controlled by an automatic control system. The SFCL has been operating continuously on the city grid for over three years. During that period of operation, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others (such as a cryogenic system power outage). All these faults were eliminated without shutting down the SFCL, thanks to the specially designed cryogenic system backups and the quick responses of the grid operator utilities and the SuperOx crew. The paper will describe in detail the results of SFCL operation and cryogenic system maintenance and the measures taken to solve and prevent similar faults in the future.
Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics
Procedia PDF Downloads 80
763 Sustainability Assessment Tool for the Selection of Optimal Site Remediation Technologies for Contaminated Gasoline Sites
Authors: Connor Dunlop, Bassim Abbassi, Richard G. Zytner
Abstract:
Life cycle assessment (LCA) is a powerful tool established by the International Organization for Standardization (ISO) that can be used to assess the environmental impacts of a product or process from cradle to grave. Many studies utilize the LCA methodology within the site remediation field to compare various decontamination methods, including bioremediation, soil vapor extraction or excavation, and off-site disposal. However, to the authors' best knowledge, limited information is available in the literature on a sustainability tool that could be used to help with the selection of the optimal remediation technology. This tool, based on the LCA methodology, would consider site conditions such as environmental, economic, and social impacts. Accordingly, this project was undertaken to develop a tool to assist with the selection of the optimal sustainable technology. Developing a proper tool requires a large amount of data. As such, data were collected from previous LCA studies looking at site remediation technologies. This step identified knowledge gaps and limitations within the project data. Next, utilizing the data obtained from the literature review and other organizations, an extensive LCA study is being completed following the ISO 14040 requirements. The initial technologies being compared include bioremediation, excavation with off-site disposal, and a no-remediation option for a generic gasoline-contaminated site. To complete the LCA study, the modelling software SimaPro is being utilized. A sensitivity analysis of the LCA results will also be incorporated to evaluate its impact on the overall results. Finally, the economic and social impacts associated with each option will then be reviewed to understand how they fluctuate at different sites. All the results will then be summarized, and an interactive tool using Excel will be developed to help select the best sustainable site remediation technology. Preliminary LCA results show improved sustainability for the decontamination of a gasoline-contaminated site for each technology compared to the no-remediation option. Sensitivity analyses are now being completed on site parameters to determine how the environmental impacts fluctuate at other contaminated gasoline locations as the parameters vary, including soil type and transportation distances. Additionally, the social improvements and overall economic costs associated with each technology are being reviewed. Utilizing these results, the sustainability tool created to assist in the selection of the overall best option will be refined.
Keywords: life cycle assessment, site remediation, sustainability tool, contaminated sites
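As a toy illustration of the kind of one-at-a-time sensitivity analysis described above, the sketch below perturbs each parameter of a simple impact model (impact = product of quantity and factor terms) by 10% and reports the change. The parameter names and values are hypothetical; the actual study performs this analysis in SimaPro on the full ISO 14040 model.

```python
# Illustrative one-at-a-time sensitivity sketch for an LCA-style impact contribution.
# Parameter names and values are hypothetical; they do not come from the study.
baseline = {
    "excavated_soil_t": 500.0,        # tonnes (assumed)
    "transport_km": 80.0,             # haul distance to disposal (assumed)
    "diesel_l_per_t_km": 0.03,        # assumed
    "kg_co2e_per_l_diesel": 2.7,      # assumed emission factor
}

def impact_kg_co2e(p: dict) -> float:
    """Transport-only GWP contribution for the excavation/off-site disposal option."""
    return (p["excavated_soil_t"] * p["transport_km"]
            * p["diesel_l_per_t_km"] * p["kg_co2e_per_l_diesel"])

base = impact_kg_co2e(baseline)
for name in baseline:
    perturbed = dict(baseline, **{name: baseline[name] * 1.10})   # +10%, one parameter at a time
    change = (impact_kg_co2e(perturbed) - base) / base
    print(f"+10% {name:<22} -> impact {change:+.1%}")
```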
Procedia PDF Downloads 58
762 Development of Mechanisms of Value Creation and Risk Management Organization in the Conditions of Transformation of the Economy of Russia
Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Eugenia V. Klicheva
Abstract:
In modern conditions, the scientific examination of problems in developing mechanisms of value creation and risk management acquires special relevance. The formation of economic knowledge has resulted in the constant analysis of consumer behavior by all players in national and world markets. Developing effective mechanisms for demand analysis, which is crucial for determining the consumer characteristics of future production, and for managing the risks connected with the development of this production are the main objectives of control systems in modern conditions. The modern period of economic development is characterized by a high level of globalization of business and rigidity of competition. At the same time, a considerable share of the cost of new products and services has a non-material, intellectual nature. In Russia, small innovative firms are currently developing most successfully. Such firms, through their unique technologies and new approaches to process management, which form the basis of their intellectual capital, can show flexibility and succeed in the market. As a rule, such enterprises should have a highly flexible structure, excluding rigid schemes of subordination and demanding essentially new incentives for involving personnel in innovative activity. Such structures, as well as a new approach to management, can be built on value-oriented management, which is directed at gradually changing the consciousness of personnel and forming groups of adherents involved in solving common innovative tasks. At the same time, value changes can gradually encompass not only the innovative firm's staff but also the structure of its corporate partners. The introduction of new technologies is a significant factor contributing to the development of new value imperatives and the acceleration of change in the organization's value system. This is because new technologies change the internal environment of the organization in such a way that the old system of values becomes inefficient in the new conditions. The introduction of new technologies often demands changes in the structure of employee interaction and training in new principles of work. During the introduction of new technologies and the accompanying change in the value system, the structure of the management of the organization's values changes. This is due to the need to attract more staff to justify and consolidate the new value system and to bring their views into the motivational potential of the new value system of the organization.
Keywords: value, risk, creation, problems, organization
Procedia PDF Downloads 284
761 Evaluation of NoSQL in the Energy Marketplace with GraphQL Optimization
Authors: Michael Howard
Abstract:
The growing popularity of electric vehicles in the United States requires an ever-expanding infrastructure of commercial DC fast charging stations. The U.S. Department of Energy estimates 33,355 publicly available DC fast charging stations as of September 2023. In 2017, 115,370 gasoline stations were operating in the United States, making them far more ubiquitous than DC fast chargers. Range anxiety is an important impediment to the adoption of electric vehicles and is even more relevant in underserved regions of the country. The peer-to-peer energy marketplace helps fill the demand by allowing private home and small business owners to rent out their 240-volt, level-2 charging facilities. The existing, publicly accessible outlets are wrapped with a Cloud-connected microcontroller managing security and charging sessions. These microcontrollers act as edge devices communicating with a Cloud message broker, while both buyer and seller users interact with the framework via a web-based user interface. The database storage used by the marketplace framework is a key component in both the cost of development and the performance that contributes to the user experience. A traditional storage solution is the SQL database. The architecture and query language have been in existence since the 1970s and are well understood and documented. The Structured Query Language supported by the query engine provides fine granularity in user query conditions. However, difficulty in scaling across multiple nodes and the cost of its server-based compute have resulted in a trend over the last 20 years towards other NoSQL, serverless approaches. In this study, we evaluate NoSQL vs. SQL solutions through a comparison of the Google Cloud Firestore and Cloud SQL MySQL offerings. The comparison pits Google's serverless, document-model, non-relational NoSQL service against the server-based, table-model, relational SQL service. The evaluation is based on query latency, flexibility/scalability, and cost criteria. Through benchmarking and analysis of the architecture, we determine whether Firestore can support the energy marketplace storage needs and whether the introduction of a GraphQL middleware layer can overcome its deficiencies.
Keywords: non-relational, relational, MySQL, mitigate, Firestore, SQL, NoSQL, serverless, database, GraphQL
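Query latency is one of the three criteria above, and measuring it reduces to timing repeated calls against each backend. Below is a minimal, backend-agnostic benchmark harness; the two query callables are placeholders, and the wiring to the Firestore and Cloud SQL clients, as well as the workload itself, is not taken from the study.

```python
# Generic latency-benchmark harness of the kind a Firestore vs. Cloud SQL comparison
# needs. The two query callables are placeholders; connecting them to the real
# google-cloud-firestore and MySQL clients is left out.
import statistics
import time

def benchmark(query_fn, runs: int = 100) -> dict:
    """Time repeated calls of query_fn and report latency percentiles in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        query_fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "max_ms": samples[-1],
    }

def firestore_query():   # placeholder: e.g. fetch one seller's charging sessions
    time.sleep(0.005)

def cloudsql_query():    # placeholder: equivalent SQL SELECT over the same data
    time.sleep(0.004)

for name, fn in [("Firestore", firestore_query), ("Cloud SQL", cloudsql_query)]:
    print(name, benchmark(fn))
```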
Procedia PDF Downloads 62
760 Being an English Language Teaching Assistant in China: Understanding the Identity Evolution of Early-Career English Teacher in Private Tutoring Schools
Authors: Zhou Congling
Abstract:
The integration of private tutoring has emerged as an indispensable facet in the acquisition of language proficiency beyond formal educational settings. Notably, there has been a discernible surge in the demand for private English tutoring, specifically geared towards the preparation for internationally recognized gatekeeping examinations, such as IELTS, TOEFL, GMAT, and GRE. This trajectory has engendered an escalating need for English Language Teaching Assistants (ELTAs) operating within the realm of Private Tutoring Schools (PTSs). The objective of this study is to unravel the intricate process by which these ELTAs formulate their professional identities in the nascent stages of their careers as English educators, as well as to delineate their perceptions regarding their professional trajectories. The construct of language teacher identity is inherently multifaceted, shaped by an amalgamation of individual, societal, and cultural determinants, exerting a profound influence on how language educators navigate their professional responsibilities. This investigation seeks to scrutinize the experiential and influential factors that mold the identities of ELTAs in PTSs, particularly post the culmination of their language-oriented academic programs. Employing a qualitative narrative inquiry approach, this study aims to delve into the nuanced understanding of how ELTAs conceptualize their professional identities and envision their future roles. The research methodology involves purposeful sampling and the conduct of in-depth, semi-structured interviews with ten participants. Data analysis will be conducted utilizing Barkhuizen’s Short Story Analysis, a method designed to explore a three-dimensional narrative space, elucidating the intricate interplay of personal experiences and societal contexts in shaping the identities of ELTAs. The anticipated outcomes of this study are poised to contribute substantively to a holistic comprehension of ELTA identity formation, holding practical implications for diverse stakeholders within the private tutoring sector. This research endeavors to furnish insights into strategies for the retention of ELTAs and the enhancement of overall service quality within PTSs.
Keywords: China, English language teacher, narrative inquiry, private tutoring school, teacher identity
Procedia PDF Downloads 56
759 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant
Authors: John K. Avor, Choong-Koo Chang
Abstract:
The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application in which the transfer process depends on various parameters; thus, transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to achieve safety system requirements. However, the main problem is that there are instances where transfer schemes have scrambled due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The performance of the scheme is implemented on the APR1400 nuclear power plant auxiliary system.
Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability
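To make the neural half of such a scheme concrete, the sketch below trains a small back-propagation network to map first-cycle measurements (residual-voltage magnitude, decay time, phase angle) to a transfer-algorithm choice. The training data and the labeling rule are simulated and the fuzzy-inference layer is omitted, so this is only an illustration of the idea, not the paper's model.

```python
# Sketch of the neural half of the scheme described above: a back-propagation network
# mapping first-cycle measurements to a transfer-algorithm choice. Data, ranges and
# labeling rule are invented; the fuzzy layer is omitted.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 2000
residual_v_pu = rng.uniform(0.0, 1.0, n)     # residual voltage magnitude (p.u.)
decay_time_ms = rng.uniform(10, 300, n)      # assumed range
phase_angle_deg = rng.uniform(-180, 180, n)

# Hypothetical labeling rule: 0 = sequential fast transfer, 1 = residual bus transfer.
labels = ((residual_v_pu < 0.3) | (np.abs(phase_angle_deg) > 60)).astype(int)

X = np.column_stack([residual_v_pu, decay_time_ms, phase_angle_deg])
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
clf.fit(X, labels)

sample = np.array([[0.25, 120.0, 75.0]])     # first-cycle snapshot (assumed)
print("Recommended scheme:",
      "residual bus transfer" if clf.predict(sample)[0] else "sequential fast transfer")
```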
Procedia PDF Downloads 171
758 Ethiopian Textile and Apparel Industry: Study of the Information Technology Effects in the Sector to Improve Their Integrity Performance
Authors: Merertu Wakuma Rundassa
Abstract:
Global competition and rapidly changing customer requirements are forcing major changes in the production styles and configuration of manufacturing organizations. Increasingly, traditional centralized and sequential manufacturing planning, scheduling, and control mechanisms are being found insufficiently flexible to respond to changing production styles and highly dynamic variations in product requirements. The traditional approaches limit the expandability and reconfiguration capabilities of manufacturing systems. Thus, many business houses face increasing pressure to lower production costs, improve production quality and increase responsiveness to customers. In textile and apparel manufacturing, globalization has led to increased competition and quality awareness, and these industries have changed tremendously in the last few years. So, to sustain competitive advantage, companies must re-examine and fine-tune their business processes to deliver high-quality goods at very low costs, and it has become very important for the textile and apparel industries to integrate themselves with information technology to survive. IT can create competitive advantages for companies by improving coordination and communication among trading partners, increasing the availability of information for intermediaries and customers, and providing added value at various stages along the entire chain. Ethiopia is in the process of realizing its potential as a future sourcing location for the global textile and garment industry. With a population of over 90 million people and the fastest growing non-oil economy in Africa, Ethiopia today represents limitless opportunities for international investors. For the textile and garment industry, Ethiopia promises a low-cost production location with natural resources such as cotton that enable the setup of vertically integrated textile and garment operations. However, due to the lack of integration of their business activities, the textile and apparel industry of Ethiopia faces the problem that it cannot be competitive in the global market. On the other hand, the textile and apparel industries of other countries have changed tremendously in the last few years, and globalization has led to increased competition and quality awareness. So the aim of this paper is to study the trend of the Ethiopian textile and apparel industry in the application of different IT systems to integrate it into the global market.
Keywords: information technology, business integrity, textile and apparel industries, Ethiopia
Procedia PDF Downloads 363
757 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding
Authors: Wenya Shu, Ilinca Stanciulescu
Abstract:
Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is not often reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from van der Waals forces. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but it brings difficulties in mesh generation and also leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with the result of CZM with an artificially very strong interface. A mechanical model that can take into account interface debonding and achieve accuracy comparable to CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential and polynomial, are considered. These studies indicate that the shapes of the chosen CZM constitutive laws do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behaviors of CNT/polymer composites.
Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding
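The bilinear traction-separation relationship assumed above has a simple closed form: traction rises linearly to a peak at the damage-onset separation and then softens linearly to zero at the failure separation. A minimal sketch follows; the peak traction and separation values in the example are placeholders, not fitted CNT-polymer parameters.

```python
# Minimal sketch of a bilinear traction-separation law of the kind assumed in the
# analytical part of the study. Parameter values are placeholders.
def bilinear_traction(delta: float, delta_0: float, delta_f: float, t_max: float) -> float:
    """Traction for separation delta: linear elastic rise to (delta_0, t_max),
    linear softening to zero at delta_f, fully debonded beyond delta_f."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:                       # elastic branch
        return t_max * delta / delta_0
    if delta <= delta_f:                       # softening (damage) branch
        return t_max * (delta_f - delta) / (delta_f - delta_0)
    return 0.0                                 # complete interface debonding

# Example with assumed values: peak traction 50 MPa, damage onset 5 nm, failure 50 nm.
for d_nm in (2, 5, 20, 50, 60):
    print(d_nm, "nm ->", round(bilinear_traction(d_nm, 5, 50, 50.0), 1), "MPa")
```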
Procedia PDF Downloads 129
756 Advancing Microstructure Evolution in Tungsten Through Rolling in Laser Powder Bed Fusion
Authors: Narges Shayesteh Moghaddam
Abstract:
Tungsten (W), a refractory metal known for its remarkably high melting temperature, offers tremendous potential for use in challenging environments prevalent in sectors such as space exploration, defense, and nuclear industries. Additive manufacturing, especially the Laser Powder-Bed Fusion (LPBF) technique, emerges as a beneficial method for fabricating tungsten parts. This technique enables the production of intricate components while simultaneously reducing production lead times and associated costs. However, the inherent brittleness of tungsten and its tendency to crack under high-temperature conditions pose significant challenges to the manufacturing process. Our research primarily focuses on the process of rolling tungsten parts in a layer-by-layer manner in LPBF and the subsequent changes in microstructure. Our objective is not only to identify the alterations in the microstructure but also to assess their implications on the physical properties and performance of the fabricated tungsten parts. To examine these aspects, we conducted an extensive series of experiments that included the fabrication of tungsten samples through LPBF and subsequent characterization using advanced materials analysis techniques. These investigations allowed us to scrutinize shifts in various microstructural features, including, but not limited to, grain size and grain boundaries occurring during the rolling process. The results of our study provide crucial insights into how specific factors, such as plastic deformation occurring during the rolling process, influence the microstructural characteristics of the fabricated parts. This information is vital as it provides a foundation for understanding how the parameters of the layer-by-layer rolling process affect the final tungsten parts. Our research significantly broadens the current understanding of microstructural evolution in tungsten parts produced via the layer-by-layer rolling process in LPBF. The insights obtained will play a pivotal role in refining and optimizing manufacturing parameters, thus improving the mechanical properties of tungsten parts and, therefore, enhancing their performance. Furthermore, these findings will contribute to the advancement of manufacturing techniques, facilitating the wider application of tungsten parts in various high-demand sectors. Through these advancements, this research represents a significant step towards harnessing the full potential of tungsten in high-temperature and high-stress applications.
Keywords: additive manufacturing, rolling, tungsten, refractory materials
Procedia PDF Downloads 98
755 Investigating the Effects of Cylinder Disablement on Diesel Engine Fuel Economy and Exhaust Temperature Management
Authors: Hasan Ustun Basaran
Abstract:
Diesel engines are widely used in the transportation sector due to their high thermal efficiency. However, they also release high rates of NOₓ and PM (particulate matter) emissions into the environment, which have hazardous effects on human health. Therefore, environmental protection agencies have issued strict emission regulations on automotive diesel engines. Recently, these regulations have been strengthened even further. Engine producers are investigating novel on-engine methods such as advanced combustion techniques, utilization of renewable fuels, exhaust gas recirculation and advanced fuel injection methods, or they use exhaust after-treatment (EAT) systems, in order to reduce emission rates of diesel engines. Although the aforementioned on-engine methods are effective in curbing emission rates, they result in inefficiency or cannot decrease emission rates satisfactorily at all operating conditions. Therefore, engine manufacturers apply both on-engine techniques and EAT systems to meet the stringent emission norms. EAT systems are highly effective at diminishing emission rates; however, they perform inefficiently at low loads due to low exhaust gas temperatures (below 250°C). Therefore, the objective of this study is to demonstrate that engine-out temperatures can be elevated above 250°C in low-load cases via cylinder disablement. The engine studied and modeled via the Lotus Engine Simulation (LES) software is a six-cylinder turbocharged and intercooled diesel engine. Exhaust temperatures and mass flow rates are predicted at 1200 rpm engine speed and several low-load conditions using the LES program. It is seen that cylinder deactivation results in a considerable exhaust temperature rise (up to 100°C) at low loads, which ensures effective EAT management. The method also improves fuel efficiency through reduced total pumping loss. The decreased total air induction due to the inactive cylinders is thought to be responsible for the reduced engine pumping loss. The technique reduces the exhaust gas flow rate as air flow is cut off in the disabled cylinders. Still, heat transfer rates to the after-treatment catalyst bed do not decrease that much, since exhaust temperatures are increased sufficiently. Simulation results are promising; however, further experimental studies are needed to identify the true potential of the method for fuel consumption and EAT improvement.
Keywords: cylinder disablement, diesel engines, exhaust after-treatment, exhaust temperature, fuel efficiency
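The direction of the exhaust-temperature effect can be illustrated with a first-law back-of-envelope estimate: if roughly the same fuel energy reaches the exhaust while the air (and hence exhaust) mass flow is cut, the exhaust temperature rises roughly as Q/(m_dot*cp). The numbers below are assumed for illustration only; they are not LES outputs from the study.

```python
# Back-of-envelope first-law illustration (not LES output) of why reducing air flow at
# roughly constant fueling raises exhaust temperature. All numbers are assumed.
CP_EXH = 1100.0          # J/(kg*K), approximate exhaust-gas specific heat
T_IN = 320.0             # K, charge temperature after the intercooler (assumed)
Q_TO_EXHAUST = 40_000.0  # W of fuel energy ending up in the exhaust at low load (assumed)

def exhaust_temp_c(m_dot_exh_kg_s: float) -> float:
    return T_IN + Q_TO_EXHAUST / (m_dot_exh_kg_s * CP_EXH) - 273.15

baseline_flow = 0.30                      # kg/s with all six cylinders active (assumed)
deactivated_flow = 0.5 * baseline_flow    # roughly half the air with three cylinders cut

print(f"6 cylinders active: {exhaust_temp_c(baseline_flow):.0f} C")
print(f"3 cylinders active: {exhaust_temp_c(deactivated_flow):.0f} C")
```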
Procedia PDF Downloads 176
754 N-Glycosylation in the Green Microalgae Chlamydomonas reinhardtii
Authors: Pierre-Louis Lucas, Corinne Loutelier-Bourhis, Narimane Mati-Baouche, Philippe Chan Tchi-Song, Patrice Lerouge, Elodie Mathieu-Rivet, Muriel Bardor
Abstract:
N-glycosylation is a post-translational modification taking place in the Endoplasmic Reticulum and the Golgi apparatus, where defined glycan features are added onto proteins at a very specific sequence, Asn-X-Thr/Ser/Cys, where X can be any amino acid except proline. Because it is well established that those N-glycans play a critical role in protein biological activity and protein half-life, and that a different N-glycan structure may induce an immune response, they are very important for biopharmaceuticals, which are mainly glycoproteins bearing N-glycans. Currently, most biopharmaceuticals are produced in mammalian cells such as Chinese Hamster Ovary (CHO) cells because their N-glycosylation is similar to that of humans, but due to the high production costs, several other species are being investigated as possible alternative systems. For this purpose, the green microalga Chlamydomonas reinhardtii was investigated as a potential production system for biopharmaceuticals. This choice was influenced by the facts that C. reinhardtii is a well-studied microalga which grows fast and has many molecular biology tools available. This organism also produces N-glycans on its endogenous proteins. However, the analysis of the N-glycan structures of this microalga has revealed some differences compared to humans. Unlike in humans, where the glycans are processed by the key enzymes N-acetylglucosaminyltransferase I and II (GnTI and GnTII), which add GlcNAc residues to form a GlcNAc₂Man₃GlcNAc₂ core N-glycan, C. reinhardtii lacks those two enzymes and possesses a GnTI-independent glycosylation pathway. Moreover, some enzymes such as xylosyltransferases and methyltransferases, not present in humans, are supposed to act on the glycans of C. reinhardtii. Furthermore, a recent structural study by mass spectrometry shows that the N-glycosylation precursor, supposed to be conserved in almost all eukaryotic cells, results in a linear Man₅GlcNAc₂ rather than a branched one in C. reinhardtii. In this work, we will discuss the newly released MS information on the C. reinhardtii N-glycan structure and its impact on our attempt to modify the glycans in a human manner. Two strategies will be discussed. The first consists of the study of xylosyltransferase insertional mutants from the CLiP library in order to remove xyloses from the N-glycans. The second goes further in humanization by transforming the microalga with exogenous genes from Toxoplasma gondii having activities similar to GnTI and GnTII, with the aim of synthesizing GlcNAc₂Man₃GlcNAc₂ in C. reinhardtii.
Keywords: Chlamydomonas reinhardtii, N-glycosylation, glycosyltransferase, mass spectrometry, humanization
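Since the sequon above (Asn-X-Thr/Ser/Cys with X not equal to Pro) is a simple pattern, scanning a protein sequence for candidate N-glycosylation sites is a one-liner with a regular expression. A minimal sketch follows; the example sequence fragment is made up.

```python
# Scan a protein sequence for the N-glycosylation sequon described above:
# Asn-X-(Thr/Ser/Cys) with X any residue except proline. The example fragment is invented.
import re

# Lookahead allows overlapping candidate sites to be reported.
SEQUON = re.compile(r"(?=(N[^P][STC]))")

def find_sequons(protein_seq: str):
    """Return (1-based position, triplet) for each potential N-glycosylation site."""
    return [(m.start() + 1, m.group(1)) for m in SEQUON.finditer(protein_seq)]

example = "MKLNGTAVNPSQCNASWNCTR"   # hypothetical fragment; NPS at position 9 is skipped because X = P
print(find_sequons(example))         # [(4, 'NGT'), (14, 'NAS'), (18, 'NCT')]
```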
Procedia PDF Downloads 178
753 Factors in a Sustainability Assessment of New Types of Closed Cavity Facades
Authors: Zoran Veršić, Josip Galić, Marin Binički, Lucija Stepinac
Abstract:
With the current increase in CO₂ emissions and global warming, the sustainability of both existing and new solutions must be assessed on a wide scale. As the implementation of closed cavity facades (CCF) is on the rise, a variety of factors must be included in the analysis of new types of CCF. This paper aims to cover the relevant factors included in the sustainability assessment of new types of CCF. Several mathematical models are used to describe the physical behavior of CCF. Depending on the type of CCF, they cover the main factors which affect the durability of the façade: the thermal behavior of the various elements in the façade, the stress and deflection of the glass panels, the pressure inside the cavity, the exchange rate, and the moisture buildup in the cavity. CCF itself represents a complex system in which all the mentioned factors must be considered together. Still, the façade is only the envelope of a more complex system, the building. The choice of façade dictates the heat loss and heat gain, the thermal comfort of the inner space, natural lighting, and ventilation. The annual consumption of energy for heating, cooling and lighting, together with maintenance costs, will present the operational advantages or disadvantages of the chosen façade system in both economic and environmental aspects. Still, the operational viewpoint alone is not all-inclusive. As building codes constantly demand higher energy efficiency as well as a transfer to renewable energy sources, the ratio of the embodied to the lifetime operational energy footprint of buildings is changing. With the drop in operational energy CO₂ emissions, embodied energy emissions represent a larger and larger share of the lifecycle emissions of a building. Taking all of this into account, the sustainability assessment of a façade, as well as of other major building elements, should include all the mentioned factors over the lifecycle of the element. The challenge of such an approach is the timescale. Depending on the climatic conditions at the building site, the expected lifetime of a CCF can exceed 25 years. In such a time span, some of the factors can be estimated more precisely than others. The ones depending on socio-economic conditions are likely to be harder to predict than natural ones like the climatic load. This work recognizes and summarizes the relevant factors needed for the assessment of new types of CCF, considering the entire lifetime of a façade element as well as economic and environmental aspects.
Keywords: assessment, closed cavity façade, life cycle, sustainability
Procedia PDF Downloads 192
752 The Relationship between Risk and Capital: Evidence from Indian Commercial Banks
Authors: Seba Mohanty, Jitendra Mahakud
Abstract:
The capital ratio is one of the major indicators of the stability of commercial banks. Given its pervasive importance, over the years regulators and policy makers have focused on the maintenance of a particular level of capital ratio to minimize solvency and liquidation risk. In this context, it is very important to identify the relationship between capital and risk and to find out the factors which determine the capital ratios of commercial banks. The study examines the relationship between capital and risk of the commercial banks operating in India. Other bank-specific variables like bank size, deposits, profitability, non-performing assets, bank liquidity, net interest margin, loan loss reserves, deposit variability and regulatory pressure are also considered in the analysis. The period of study is 1997-2015, i.e., the post-liberalization period. To identify the impact of the financial crisis and the implementation of Basel II on the capital ratio, we have divided the whole period into two sub-periods, i.e., 1997-2008 and 2008-2015. This study considers all three types of commercial banks, i.e., public sector, private sector and foreign banks, which have continuous data for the whole period. The main sources of data are the Prowess database maintained by the Centre for Monitoring Indian Economy (CMIE) and Reserve Bank of India publications. We use a simultaneous equation model and, more specifically, the Two-Stage Least Squares method to find out the relationship between capital and risk. From the econometric analysis, we find that capital and risk affect each other simultaneously, and this is consistent across the time periods and across the types of banks. Moreover, regulation has a positive significant impact on the ratio of capital to risk-weighted assets, but no significant impact on the banks' risk-taking behaviour. Our empirical findings also suggest that size has a negative impact on capital and risk, indicating that larger banks increase their capital less than other banks, consistent with the too-big-to-fail hypothesis. This study contributes to the existing body of literature by documenting a strong relationship between capital and risk in an emerging economy, where the banking sector plays a major role in financial development. Further, this study may be considered a primary study for identifying the macroeconomic factors affecting risk and capital in India.
Keywords: capital, commercial bank, risk, simultaneous equation model
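The Two-Stage Least Squares estimator the study relies on can be illustrated in a few lines on simulated data, where capital and risk are generated as a simultaneous system and a regulatory-pressure variable serves as the instrument. The data-generating process and coefficients below are invented; this is not the study's specification or estimate.

```python
# Bare-bones two-stage least squares (2SLS) on simulated data, to illustrate the
# estimator used for the capital-risk system. Coefficients and DGP are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
size = rng.normal(size=n)             # exogenous bank size
reg_pressure = rng.normal(size=n)     # instrument: regulatory pressure (assumed valid exclusion)
u_risk, u_cap = rng.normal(size=n), rng.normal(size=n)

# Simultaneous system with invented coefficients:
#   risk    = a*capital + b*size + u_risk
#   capital = c*risk + d*reg_pressure + e*size + u_cap
a, b, c, d, e = -0.6, 0.3, 0.4, 0.5, -0.2
capital = ((c * b + e) * size + d * reg_pressure + c * u_risk + u_cap) / (1 - c * a)
risk = a * capital + b * size + u_risk

def two_sls(y, endog, exog, instrument):
    """2SLS: project the endogenous regressor on instrument + exogenous controls, then regress y."""
    Z = np.column_stack([np.ones(len(y)), exog, instrument])       # first-stage design
    endog_hat = Z @ np.linalg.lstsq(Z, endog, rcond=None)[0]
    X = np.column_stack([np.ones(len(y)), endog_hat, exog])        # second-stage design
    return np.linalg.lstsq(X, y, rcond=None)[0]                    # [const, capital effect, size effect]

coefs = two_sls(y=risk, endog=capital, exog=size, instrument=reg_pressure)
print("2SLS estimate of capital's effect on risk:", round(coefs[1], 2))   # should be near -0.6
```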
Procedia PDF Downloads 327
751 Enhanced CNN for Rice Leaf Disease Classification in Mobile Applications
Authors: Kayne Uriel K. Rodrigo, Jerriane Hillary Heart S. Marcial, Samuel C. Brillo
Abstract:
Rice leaf diseases significantly impact yield in rice-dependent countries, affecting their agricultural sectors. As part of precision agriculture, early and accurate detection of these diseases is crucial for effective mitigation practices and minimizing crop losses. Hence, this study proposes an enhancement to the Convolutional Neural Network (CNN), a widely used method for rice leaf disease image classification, by incorporating MobileViTV2, a recently advanced architecture that combines CNN and Vision Transformer models while maintaining fewer parameters, making it suitable for broader deployment on edge devices. Our methodology utilizes a publicly available rice disease image dataset from Kaggle, which was validated by a university structural biologist following the guidelines provided by the Philippine Rice Institute (PhilRice). Modifications to the dataset include renaming certain disease categories and augmenting the rice leaf image data through rotation, scaling, and flipping. The enhanced dataset was then used to train the MobileViTV2 model using the timm library. The results of our approach are as follows: the model achieved notable performance, with 98% accuracy in both training and validation, 6% training and validation loss, and a Receiver Operating Characteristic (ROC) curve ranging from 95% to 100% for each label. Additionally, the F1 score was 97%. These metrics demonstrate a significant improvement compared to a conventional CNN-based approach, which, in a previous 2022 study, achieved only 78% accuracy after using 5 convolutional layers and 2 dense layers. Thus, it can be concluded that MobileViTV2, with its fewer parameters, outperforms traditional CNN models, particularly when applied to rice leaf disease identification. For future work, we recommend extending this model to include datasets validated by international rice experts and broadening the scope to accommodate biotic factors such as rice pest classification, as well as abiotic stressors such as climate, soil quality, and geographic information, which could improve the accuracy of disease prediction.
Keywords: convolutional neural network, MobileViTV2, rice leaf disease, precision agriculture, image classification, vision transformer
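A minimal fine-tuning sketch with the timm library is shown below to make the training setup concrete. The model-name string ('mobilevitv2_050'), the number of disease classes, the input resolution and the optimizer settings are all assumptions rather than the study's configuration; available variants can be listed with timm.list_models('mobilevitv2*').

```python
# Hedged sketch of fine-tuning a MobileViTV2 backbone with timm for a rice-leaf
# disease classifier. Model name, class count, resolution and hyperparameters are
# assumptions, not the study's configuration.
import timm
import torch
from torch import nn, optim

num_classes = 4   # assumed number of rice leaf disease categories
model = timm.create_model("mobilevitv2_050", pretrained=True, num_classes=num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of augmented leaf images (N, 3, 256, 256)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for a real DataLoader batch.
print(train_step(torch.randn(8, 3, 256, 256), torch.randint(0, num_classes, (8,))))
```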
Procedia PDF Downloads 25
750 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context
Authors: Rit M., Girard R., Villot J., Thorel M.
Abstract:
In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a definition of a strategy to simplify decision trees from theoretical combinations, (2) input to decision makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermal deficiencies (Energy Performance Certificates F&G)). This differs from traditional approaches that focus only on a few buildings or archetypes. This model can also be used to analyze the evolution of a building stock as a whole, as it can take into account both the construction of new buildings and their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and achievable climate targets or the financial support currently available to households and the remaining costs. In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology
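To give a flavor of what a MILP-based renovation model looks like at its smallest, the sketch below chooses which buildings to renovate so as to minimize cost while meeting an energy-saving target, an annual renovation-capacity limit, and a priority rule for thermally deficient (EPC F/G) dwellings. All buildings, costs and savings are invented; the real model operates on entire territorial building stocks with many more constraints.

```python
# Toy MILP in the spirit of the territorial renovation model described above, written
# with PuLP. Buildings, costs, savings and the priority rule are invented placeholders.
import pulp

buildings = ["B1", "B2", "B3", "B4"]
cost = {"B1": 30_000, "B2": 55_000, "B3": 20_000, "B4": 45_000}       # EUR (assumed)
saving = {"B1": 9_000, "B2": 16_000, "B3": 4_000, "B4": 14_000}        # kWh/yr (assumed)
is_epc_fg = {"B1": True, "B2": False, "B3": True, "B4": False}         # thermal deficiencies (assumed)
MAX_RENOVATIONS_PER_YEAR = 3
ENERGY_TARGET_KWH = 20_000

prob = pulp.LpProblem("renovation_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("renovate", buildings, cat="Binary")

prob += pulp.lpSum(cost[b] * x[b] for b in buildings)                           # minimize cost
prob += pulp.lpSum(saving[b] * x[b] for b in buildings) >= ENERGY_TARGET_KWH    # savings target
prob += pulp.lpSum(x[b] for b in buildings) <= MAX_RENOVATIONS_PER_YEAR         # annual capacity
for b in buildings:
    if is_epc_fg[b]:
        prob += x[b] == 1    # assumed policy: eliminate thermal deficiencies first

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({b: int(x[b].value()) for b in buildings}, "cost =", pulp.value(prob.objective))
```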
Procedia PDF Downloads 68
749 Alternative Islamic Finance Channels and Instruments: An Evaluation of the Potential and Considerations in Light of Sharia Principles
Authors: Tanvir A. Uddin, Blake Goud
Abstract:
Emerging trends in FinTech-enabled alternative finance, which includes channels and instruments emerging outside the traditional financial system, herald unprecedented opportunities to improve financial intermediation and increase access to finance. With widespread criticism of the mainstream Islamic banking and finance sector as either mimicking the conventional system, failing to achieve inclusive growth, or both, industry stakeholders are turning to technology to show that finance can be done differently. This paper will outline the critical elements for the successful deployment of technology to maximize benefit and minimize the potential for harm from the introduction of Islamic FinTech, and propose recommendations for Islamic financial institutions, FinTech companies, regulators and other stakeholders who are integrating or considering introducing FinTech solutions. The paper will present an overview of the literature, present relevant case studies and summarize the lessons from interviews conducted with Islamic FinTech founders from around the world. With growing central bank concerns about leveraged loans and ballooning private credit markets globally (estimated at $1.4 trillion), current and future Islamic FinTech operators are at risk of contributing to the problems they aim to solve by operating in a 'shadow banking' system. The paper will show that by systematising a robust theory of change linked to positive outcomes, utilising objective impact frameworks (e.g., the Impact Measurement Project) and instilling a risk management culture that is proactive about potential social harm (e.g., irresponsible lending), FinTech can enable the Islamic finance industry to support positive social impact and minimize harm in support of the maqasid. The adoption of FinTech within the Islamic finance context is still at a nascent stage, and the recommendations we provide based on the limited experience to date will help address some of the major cross-cutting issues related to FinTech. Further research will be needed to elucidate in more detail issues relating to individual sectors and countries within the broader global Islamic finance industry.
Keywords: alternative finance, FinTech, Islamic finance, maqasid, theory of change
Procedia PDF Downloads 154748 Measuring the Embodied Energy of Construction Materials and Their Associated Cost Through Building Information Modelling
Authors: Ahmad Odeh, Ahmad Jrade
Abstract:
Energy assessment is an evidently significant factor when evaluating the sustainability of structures, especially at the early design stage. Today's design practices revolve around the selection of materials that reduce operational energy yet meet disciplinary needs. Operational energy represents a substantial part of a building's lifecycle energy usage, but the fact remains that embodied energy is an important aspect often left unaccounted for in the carbon footprint. At the moment, little or no consideration is given to embodied energy, mainly due to the complexity of its calculation and the various factors involved. The equipment used, the fuel needed, and the electricity required for each material vary with location, and thus the embodied energy will differ for each project. Moreover, the methods and techniques used in manufacturing, transporting and placing materials have a significant influence on their embodied energy. This anomaly has made it difficult to calculate or even benchmark the usage of such energies. This paper presents a model aimed at helping designers select construction materials based on their embodied energy. Moreover, it presents a systematic approach that uses an efficient method of calculation and ultimately provides new insight into construction material selection. The model is developed in a BIM environment, targeting the quantification of embodied energy for construction materials through the three main stages of their life: manufacturing, transportation and placement. The model contains three major databases, each of which covers a set of the most commonly used construction materials. The first dataset holds information about the energy required to manufacture any type of material, the second includes information about the energy required for transporting the materials, while the third stores information about the energy required by the tools and cranes needed to place an item in its intended location. The model provides designers with sets of all available construction materials and their associated embodied energies to use for selection during the design process. Through geospatial data and dimensional material analysis, the model is also able to automatically calculate the distance between the factories and the construction site. To remain within the sustainability criteria set by LEED, a final database is created and used to calculate the overall construction cost based on R.S. Means cost data and then automatically recalculate the costs for any modifications. Design criteria covering both operational and embodied energies will cause designers to re-evaluate the current material selection for cost, energy, and most importantly, sustainability.Keywords: building information modelling, energy, life cycle analysis, sustainability
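Illustrative sketch (not from the paper): the three-stage tally that the BIM model automates can be expressed as embodied energy = manufacturing + transportation + placement, with the transport term scaled by the factory-to-site distance obtained from geospatial data. The Python sketch below is a simplified stand-in; the material names, energy coefficients and quantities are invented placeholders, not values from the model's databases.

```python
# Hypothetical three-stage embodied-energy tally (manufacturing, transportation, placement).
from dataclasses import dataclass

@dataclass
class Material:
    name: str
    quantity_t: float             # tonnes required
    manufacture_mj_per_t: float   # MJ per tonne to manufacture
    transport_mj_per_t_km: float  # MJ per tonne-km to transport
    placement_mj_per_t: float     # MJ per tonne for cranes/tools on site
    distance_km: float            # factory-to-site distance (e.g., from geospatial data)

def embodied_energy_mj(m: Material) -> float:
    manufacturing = m.quantity_t * m.manufacture_mj_per_t
    transportation = m.quantity_t * m.distance_km * m.transport_mj_per_t_km
    placement = m.quantity_t * m.placement_mj_per_t
    return manufacturing + transportation + placement

materials = [
    Material("ready-mix concrete", 120.0, 1100.0, 1.2, 35.0, 25.0),
    Material("structural steel", 15.0, 21000.0, 1.0, 60.0, 180.0),
]

for m in materials:
    print(f"{m.name}: {embodied_energy_mj(m):,.0f} MJ")
print(f"Total embodied energy: {sum(embodied_energy_mj(m) for m in materials):,.0f} MJ")
```

A cost field keyed to a cost database (such as the R.S. Means data mentioned above) could be added to the same record so that costs are recomputed alongside energy whenever the design changes.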
Procedia PDF Downloads 269747 Identifying the Determinants of Compliance with Maritime Environmental Legislation in the North and Baltic Sea Area: A Model Developed from Exploratory Qualitative Data Collection
Authors: Thea Freese, Michael Gille, Andrew Hursthouse, John Struthers
Abstract:
Ship operators on the North and Baltic Sea have been experiencing increased political interest in marine environmental protection and cleaner vessel operations. Stricter legislation on SO2 and NOx emissions, ballast water management and other measures of protection is currently being phased in or will come into force in the coming years. These measures benefit the health of the marine environment while increasing companies' operational costs. In times of excess shipping capacity and the linked consolidation in the industry, non-compliance with environmental rules is one way companies might hope to stay competitive in both intra- and inter-modal trade. Around 5-15% of industry participants are believed to neglect laws on vessel-source pollution, willingly or unwillingly. Exploratory in-depth interviews conducted with 12 experts from various stakeholder groups informed the researchers about variables influencing compliance levels, including awareness and apprehension, willingness to comply, ability to comply, and effectiveness of controls. The semi-structured expert interviews were evaluated using qualitative content analysis. A model of determinants of compliance was developed and is presented here. While most vessel operators endeavour to achieve full compliance with environmental rules, a lack of available technical solutions, of expedient implementation and operation, and of economic feasibility might prove a hindrance. Ineffective control systems, on the other hand, foster willing non-compliance. With respect to motivations, a lack of time, a lack of financial resources and the absence of commercial advantages decrease compliance levels. These and other variables were inductively developed from the qualitative data and integrated into a model of environmental compliance. The outcomes presented here form part of a wider research project on the economic effects of maritime environmental legislation. Research on the determinants of compliance might inform policy-makers about the actual behavioural responses of shipping companies and might further the development of a comprehensive legal system for environmental protection.Keywords: compliance, marine environmental protection, exploratory qualitative research study, clean vessel operations, North and Baltic Sea area
Procedia PDF Downloads 383746 Hybrid Fermentation System for Improvement of Ergosterol Biosynthesis
Authors: Alexandra Tucaliuc, Alexandra C. Blaga, Anca I. Galaction, Lenuta Kloetzer, Dan Cascaval
Abstract:
Ergosterol (ergosta-5,7,22-trien-3β-ol), also known as provitamin D2, is the precursor of vitamin D2 (ergocalciferol), because it is converted to this vitamin under UV radiation. The natural sources of ergosterol are mainly yeasts (Saccharomyces sp., Candida sp.), but it can also be found in fungi (Claviceps sp.) and plants (orchids). In yeast cells, ergosterol accumulates in membranes, especially in free form in the plasma membrane, but also as esters with fatty acids in membrane lipids. Chemical synthesis is not an efficient method for producing ergosterol; in these circumstances, the most attractive alternative for producing ergosterol at larger scale remains aerobic fermentation using S. cerevisiae on glucose or on by-products from agriculture or the food industry as substrates, in batch or fed-batch operating systems. The aim of this work is to analyze comparatively the influence of aeration efficiency on ergosterol production by S. cerevisiae in batch and fed-batch fermentations, by considering different levels of mixing intensity, aeration rate, and n-dodecane concentration. The effects of the studied factors are quantitatively described by means of the mathematical correlations proposed for each of the two fermentation systems, valid both in the absence and in the presence of the oxygen-vector inside the broth. The experiments were carried out in a laboratory stirred bioreactor with computer-controlled and recorded parameters. n-Dodecane was used as the oxygen-vector, and the ergosterol content inside the yeast cells was considered at the fermentation time corresponding to the maximum ergosterol concentration: 9 hrs for the batch process and 20 hrs for the fed-batch one. Ergosterol biosynthesis is strongly dependent on the dissolved oxygen concentration. The hydrocarbon concentration exhibits a significant influence on ergosterol production, mainly by accelerating the oxygen transfer rate. Regardless of n-dodecane addition, maintaining the glucose concentration at a constant level in the fed-batch process almost tripled the amount of ergosterol accumulated in the yeast cells. In the presence of the hydrocarbon, the ergosterol concentration increased by over 50%. The oxygen-vector concentration corresponding to the maximum ergosterol level depends mainly on biomass concentration, due to its negative influence on broth viscosity and on the interfacial phenomena of air bubble blockage through the adsorption of hydrocarbon droplet–yeast cell associations. Therefore, for the batch process the maximum ergosterol amount was reached at 5% vol. n-dodecane, while for the fed-batch process it was reached at 10% vol. hydrocarbon.Keywords: bioreactors, ergosterol, fermentation, oxygen-vector
Procedia PDF Downloads 189745 Performance Analysis of the Precise Point Positioning Data Online Processing Service and Using for Monitoring Plate Tectonic of Thailand
Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat
Abstract:
The Precise Point Positioning (PPP) technique improves accuracy by using precise satellite orbit and clock correction data, but it involves complicated processing and high costs. Currently, several online processing service providers offer simplified calculation. In the first part of this research, we compared the efficiency and precision of four software solutions: three popular online processing services, namely the Australian Online GPS Processing Service (AUSPOS), CSRS Precise Point Positioning and Trimble's CenterPoint RTX post-processing, and one offline software package, RTKLIB, using data collected from 10 International GNSS Service (IGS) stations over 10 days. The results indicated that AUSPOS has the lowest distance root mean square (DRMS) value, 0.0029, which is good enough for monitoring the movement of tectonic plates. In the second part, we used AUSPOS to process data from the geodetic network of Thailand. On December 26, 2004, a magnitude 9.3 Mw earthquake occurred north of Sumatra, strongly affecting all nearby countries, including Thailand. The earthquake introduced errors into Thailand's coordinate system. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movement differs across the geodetic network and is relatively large, so surveys must continue every year to improve the GPS coordinate system. Therefore, in this research we chose AUSPOS to calculate the magnitude and direction of movement and to improve the coordinate adjustment of the geodetic network, consisting of 19 pins in Thailand, between October 2013 and November 2017. Finally, the results are displayed on a simulation map using the ArcMap program with the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak) in the northern part of Thailand. This pin moved in the south-western direction by 11.04 cm. Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., in the direction noticed before the earthquake. The magnitude of the movement is in the range of 4-7 cm, implying a small impact of the earthquake. However, the GPS network should be continuously surveyed in order to secure the accuracy of the geodetic network of Thailand.Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting
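Illustrative sketch (not from the study): the Inverse Distance Weighting step used to map the pin movements follows the standard estimator z(x) = sum(w_i * z_i) / sum(w_i) with w_i = 1/d_i^p. The short implementation below shows the calculation; the pin coordinates and displacement values are hypothetical stand-ins, not the surveyed data processed in ArcMap.

```python
# Minimal IDW interpolation sketch (power p = 2) with hypothetical pin data.
import numpy as np

def idw(xy_known: np.ndarray, z_known: np.ndarray, xy_query: np.ndarray, p: float = 2.0) -> np.ndarray:
    """Estimate values at query points as inverse-distance-weighted means of known points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.where(d == 0, 1e-12, d)   # avoid division by zero at exact pin locations
    w = 1.0 / d**p                   # inverse-distance weights
    return (w @ z_known) / w.sum(axis=1)

# Hypothetical pins: easting/northing (km) and horizontal displacement (cm)
pins = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 9.0], [8.0, 8.0]])
disp_cm = np.array([7.1, 5.2, 6.8, 4.5])

grid = np.array([[2.0, 3.0], [7.0, 6.0]])
print(idw(pins, disp_cm, grid))      # interpolated displacement at two query points
```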
Procedia PDF Downloads 189744 The Role and Tasks of a Social Worker in the Care of a Terminally Ill Child with Regard to the Malopolska Hospice for Children
Authors: Ewelina Zdebska
Abstract:
A social worker is an integral part of the interdisciplinary team working with a terminally ill child and his family. Social support is an integral part of the medical procedure in hospice care. It is the basis and prerequisite of full treatment and good care of the child patient, whose illness often strikes at a period of life when personal and legal issues are not yet settled and whose family, burdened with the problem, requires the care and support of specialists. The Hospice for Children in Krakow, a palliative care team operating in Krakow and the Malopolska province, conducts specialized care for terminally ill children at their place of residence from the time when parents and doctors decide to end treatment in hospital; it allows parents to carry out medical care at home, provides parents with social and legal assistance, and offers care, psychological support and friendship to families throughout the child's illness and after the death, for as long as it is needed. The social worker in a hospice does not bear the burden of solving social problems, which is the responsibility of other authorities, but provides the support that is possible and necessary at the moment. The most common form of assistance is providing information on the benefits to which the child and the family may be entitled during treatment and the fight for the child's life and health. The social worker assists in the preparation and completion of documents, such as requests to increase the assessed degree of disability because of progressive disease, or for a care allowance because of the inability to live independently. He or she helps to settle the relevant issues with the Department of Social Security, as well as with the municipal and district disability affairs teams, and seeks multi-faceted help and support in caring for the child. Contacts with the Centres for Social Welfare often concern the organization of additional respite care for the sick child at home, especially when the other members of the family work or when the family cannot cope with the care and needs extra help. The Hospice for Children in Krakow is completing construction of Poland's first Respite Care Centre for chronically and terminally ill children; it will be an open house where children suffering from chronic and incurable diseases and their families can get professional help whenever they need it. The social worker thus plays a very important role in caring for a terminally ill child. His or her presence gives the little patient and the family the opportunity to be together at this difficult time while assistance and support are organized.Keywords: social worker, care, terminal care, hospice
Procedia PDF Downloads 248743 Environmental Accounting: A Conceptual Study of Indian Context
Authors: Pradip Kumar Das
Abstract:
As the entire world continues its rapid move towards industrialization, mankind's ability to maintain an ecological balance is seriously threatened. Geographical and natural forces have a significant influence on the location of industries. Industrialization is the foundation stone of the development of any country, while unplanned industrialization and the discharge of waste by industries are the cause of environmental pollution. There is a growing degree of awareness and concern globally among nations about environmental degradation and pollution. Environmental resources, endowed by nature and not man-made, are invaluable natural resources of a country like India. Any developmental activity is directly related to natural and environmental resources. Economic development without environmental considerations brings about environmental crises and damages the quality of life of the present as well as future generations. As corporate sectors in the global market, especially in India, become anxious about environmental degradation, more and more emphasis will naturally be placed on how environment-friendly their outcomes are. Maintaining accounts of such environmental and natural resources in the country has become more urgent. Moreover, international awareness and acceptance of the importance of environmental issues has motivated the development of a branch of accounting called “Environmental Accounting”. Environmental accounting attempts to detect and measure the resources consumed and the costs imposed on the environment by an industrial unit. For the sustainable development of mankind, a healthy environment is indispensable. Gradually, therefore, in many countries including India, environmental matters are being given top priority. Accounting for and disclosure of environmental matters have increasingly become an important dimension of corporate accounting and reporting practices. But, as conventional accounting deals mainly with non-living things, the formulation of valuation, measurement and accounting techniques for incorporating environment-related matters in corporate financial statements sometimes creates problems for the accountant. In the light of this situation, the conceptual analysis of the study is concerned with the rationale of environmental accounting for the economy and society as a whole, and focuses on the failures of the traditional accounting system. A modest attempt has been made to throw light on environmental awareness in developing nations like India and to discuss the problems associated with the implementation of environmental accounting. The conceptual study also reflects that, despite various anomalies, environmental accounting is becoming an increasingly important aspect of the accounting agenda within the corporate sector in India. Lastly, a conclusion, along with recommendations, is given to overcome the situation.Keywords: environmental accounting, environmental degradation, environmental management, environmental resources
Procedia PDF Downloads 343742 Research on the Environmental Assessment Index of Brownfield Redevelopment in Taiwan: A Case Study on Formosa Chemicals and Fibre Corporation, Changhua Branch
Authors: Min-Chih Yang, Shih-Jen Feng, Bo-Tsang Li
Abstract:
The concept of the “Brownfield” has been developing for nearly 35 years, since it was put forward in the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of the USA in 1980 to solve the problem of soil contamination on old industrial land; many countries have since continuously put forward relevant policies and research. But the related concept in Taiwan, a country that has developed its industry for 60 years, is still in its infancy. This leads to the slow development of Brownfield-related research and policy in Taiwan. When it comes to building the foundation of Brownfield development, we have to depend on the related experience and research of other countries. There are four aspects of Brownfield: 1. Contaminated Land; 2. Derelict Land; 3. Vacant Land; 4. Previously Developed Land. This study focuses on and deeply investigates vacant land and contaminated land. The subject of this study is the Changhua branch of the Formosa Chemicals & Fibre Corporation in Taiwan. It has been operating for nearly 50 years and has contributed greatly to the local economy. However, the toxic waste and sewage drained regularly or occasionally from the factory have seriously damaged the environment. There are three pollution factors: 1. environmental toxicants, such as carbon disulfide, released from the production processes, together with volatile gases that are hard to monitor; 2. waste and exhaust gas leakage caused by outdated equipment; 3. wastewater discharge, which has seriously damaged the ecological environment of the Dadu River estuary. Because of all these impacts, the factory has now been closed and moved elsewhere, giving the contaminated land an opportunity to be redeveloped. We therefore collected information about Brownfield management experience and policies in different countries as background information, investigated current Taiwanese Brownfield redevelopment issues and built an environmental assessment framework for them. We hope to set environmental assessment indexes for the Changhua branch of the Formosa Chemicals & Fibre Corporation according to this framework. By investigating the theory and the environmental pollution factors, we will carry out in-depth analysis and expert questionnaires to set those indexes and provide a sample for future Brownfield redevelopment and remediation in Taiwan.Keywords: brownfield, industrial land, redevelopment, assessment index
Procedia PDF Downloads 400741 The Investigation of Work Stress and Burnout in Nurse Anesthetists: A Cross-Sectional Study
Authors: Yen Ling Liu, Shu-Fen Wu, Chen-Fuh Lam, I-Ling Tsai, Chia-Yu Chen
Abstract:
Purpose: Nurse anesthetists confront extraordinarily high job stress in their daily practice, deriving from fast-track anesthesia care, the risk of perioperative complications, routine rotating shifts, teaching programs and interactions with the surgical team in the operating room. This study investigated the influence of work stress on the burnout and turnover intention of nurse anesthetists in a regional general hospital in Southern Taiwan. Methods: This was a descriptive correlational study carried out with 66 full-time nurse anesthetists. Data were collected from March 2017 to June 2017 by in-person interview, and a self-administered structured questionnaire was completed by each interviewee. Outcome measurements included the Practice Environment Scale of the Nursing Work Index (PES-NWI), the Maslach Burnout Inventory (MBI) and nursing staff turnover intention. Numerical data were analyzed by descriptive statistics, independent t-test, or one-way ANOVA. Categorical data were compared using the chi-square test (χ²). Relationships were examined with Pearson product-moment correlation and linear regression. Data were analyzed using SPSS 20.0 software. Results: The average score for job burnout was 68.79 ± 16.67 (out of 100). The mean scores for the three major components of burnout were 26.32 for emotional exhaustion, 13.65 for depersonalization, and 24.48 for personal accomplishment. These average scores suggested that the nurse anesthetists were at high risk of burnout, which was inversely correlated with turnover intention (t = -4.048, P < 0.05). In a linear regression model, emotional exhaustion and depersonalization were the two independent factors that predicted turnover intention in the nurse anesthetists (explaining 19.1% of the total variance). Conclusion/Implications for Practice: The study identifies that the high risk of job burnout in nurse anesthetists is not simply derived from physical overload, but most likely results from additional emotional and psychological stress. The occurrence of job burnout may affect the quality of nursing work and also influence family harmony, which in turn may increase the turnover rate. A multimodal approach is warranted to reduce work stress and job burnout in nurse anesthetists and to enhance their willingness to contribute to anesthesia care.Keywords: anesthesia nurses, burnout, job, turnover intention
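Illustrative sketch (not the study's data or code): the reported regression corresponds to an ordinary least squares model predicting turnover intention from the MBI emotional exhaustion and depersonalization scores, with R² giving the share of variance explained (19.1% in the study). The sketch below uses statsmodels on randomly simulated responses, so the coefficients it produces are meaningless; only the analysis structure is illustrated.

```python
# Illustrative OLS sketch: predict turnover intention from MBI emotional exhaustion
# and depersonalization scores. The data are simulated, not the study's survey data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 66  # same sample size as the study, but simulated responses
df = pd.DataFrame({
    "emotional_exhaustion": rng.normal(26, 8, n),
    "depersonalization": rng.normal(14, 5, n),
})
# Simulated outcome with arbitrary coefficients plus noise
df["turnover_intention"] = (0.4 * df["emotional_exhaustion"]
                            + 0.3 * df["depersonalization"]
                            + rng.normal(0, 5, n))

X = sm.add_constant(df[["emotional_exhaustion", "depersonalization"]])
model = sm.OLS(df["turnover_intention"], X).fit()
print(model.summary())
print(f"Variance explained (R^2): {model.rsquared:.3f}")
```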
Procedia PDF Downloads 296