Search results for: pressure management
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12950

5540 ICT Applications and Gender Participation on the Sustainability of Tourism and Hospitality Industry

Authors: Ayanfulu Yekini

Abstract:

The hotel and tourism industry remains male-dominated, particularly in the upper echelons of management, and ICT remains underutilized. While this trend is changing rapidly across the globe, little progress appears to have been made in Nigeria. This paper evaluates the relevance of ICT and gender participation to the sustainability of the hospitality and tourism industry in Nigeria. The research study was conducted in tourism organizations, travel agents, hotels, restaurants, and resorts, and among professionals in the tourism, travel, and hospitality industry within Nigeria. Respondents were limited to employees and entrepreneurs in the tourism and hospitality industries.

Keywords: ICT, hotel, gender participation, Nigeria, tourism

Procedia PDF Downloads 443
5539 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that began operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first South Korean nuclear power unit to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed in the following series of steps: first, the plant inventory was investigated from various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of systems, structures, and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Third, waste management strategies for Kori Unit 1, including waste packaging, were established. Fourth, proper decontamination and dismantling (D&D) technologies were selected considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 was estimated using the DeCAT program, developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation showed that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total waste generated (i.e., the sum of clean waste and radwaste) before and after waste processing, respectively, and that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the limits of available information, the results were compared with data from various decommissioning experiences and from international/national decommissioning studies. The comparison showed that the radioactive waste amounts from the Kori Unit 1 decommissioning were much lower than those from plants decommissioned in the U.S. and were comparable to those from plants in Europe. This difference stems from differences in disposal costs and clearance criteria (i.e., free release levels) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will provide useful input for decommissioning planning, covering the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.

Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste

Procedia PDF Downloads 201
5538 Factors Affecting Ethical Leadership and Employee Affective Organizational Commitment: An Empirical Study

Authors: Sharmin Shahid, Zaher Zain

Abstract:

The purpose of this study is to explore and examine the theoretical frameworks of ethical leadership style and affective organizational commitment, and to investigate the extent to which employee orientation and ethical guidance strengthen or weaken the relationship between ethical leadership style and affective commitment. The study also measures whether the leader's integrity mediates this relationship by inspiring and reviving employees' affective commitment. The sample comprised 237 managers, departmental heads, top-level executives, and professors of several financial institutions, banks, and universities in Bangladesh who are directly involved in the decision-making processes of their organizations. A cross-sectional research design was used to examine the direct, moderating, and mediating effects among the key research variables. Data were gathered through a personally administered questionnaire. The findings are significant because they depict the leadership styles that lead to the financial and strategic success of organizations, and because they establish whether ethical leadership style is positively related to affective commitment. Employee orientation and ethical guidance are moderators that may strengthen the link between leadership style and affective commitment, whereas the leader's integrity mediates the relationship between leadership style and affective organizational commitment, doing the right thing in the right way for the betterment of overall organizational success. The study's limitations are that the data were collected by self-administered questionnaire, a method with well-known shortcomings, and that it concentrated on top executives of financial institutions and banks and on university professors in Bangladesh. An important implication of the research is that the findings give insight into leadership style and help management focus on their management and leadership efficacy, which could improve affective organizational commitment. The findings add original and unique value to the existing literature on leadership studies. The study is based on a comprehensive literature review, and the results are based on a sample of financial institutions, banks, and universities in Bangladesh. The research findings are useful to academics and corporate leaders of financial institutions, banks, and universities all over the world.

Keywords: affective organizational commitment, Bangladesh, ethical guidance, ethical leadership style

Procedia PDF Downloads 311
5537 Nanofiltration Membranes with Deposited Polyelectrolytes: Characterisation and Antifouling Potential

Authors: Viktor Kochkodan

Abstract:

The main problem arising in water treatment and desalination using pressure-driven membrane processes such as microfiltration, ultrafiltration, nanofiltration, and reverse osmosis is membrane fouling, which seriously hampers the application of membrane technologies. One of the main approaches to mitigating membrane fouling is to minimize adhesion interactions between a foulant and a membrane, and surface coating of membranes with polyelectrolytes appears to be a simple and flexible technique to improve membrane fouling resistance. In this study, composite polyamide membranes NF-90, NF-270, and BW-30 were modified by electrostatic deposition of polyelectrolyte multilayers made from various polycationic and polyanionic polymers of different molecular weights. Anionic polyelectrolytes such as poly(sodium 4-styrene sulfonate), poly(vinyl sulfonic acid, sodium salt), poly(4-styrene sulfonic acid-co-maleic acid) sodium salt, and poly(acrylic acid) sodium salt (PA), and cationic polyelectrolytes such as poly(diallyldimethylammonium chloride), poly(ethylenimine), and poly(hexamethylene biguanide) were used for membrane modification. The effects of deposition time and the number of polyelectrolyte layers on membrane modification were evaluated. It was found that the degree of membrane modification depends on the chemical nature and molecular weight of the polyelectrolytes used. The surface morphology of the prepared composite membranes was studied using atomic force microscopy. It was shown that the membrane surface roughness decreases significantly as the number of polyelectrolyte layers on the membrane surface increases. This smoothening of the membrane surface might contribute to the reduction of membrane fouling, as lower roughness is most often associated with a decrease in surface fouling. Zeta potentials and water contact angles on the membrane surface before and after modification were also evaluated to provide additional information regarding membrane fouling. It was shown that the surface charge of the membranes modified with polyelectrolytes could be switched between positive and negative by coating with a cationic or an anionic polyelectrolyte. On the other hand, the water contact angle was strongly affected when the outermost polyelectrolyte layer was changed. Finally, a distinct difference in performance between the uncoated membranes and the polyelectrolyte-modified membranes was found during treatment of seawater in the non-continuous regime. A possible mechanism for the higher fouling resistance of the modified membranes is discussed.

Keywords: contact angle, membrane fouling, polyelectrolytes, surface modification

Procedia PDF Downloads 244
5536 Development of a Miniature and Low-Cost IoT-Based Remote Health Monitoring Device

Authors: Sreejith Jayachandran, Mojtaba Ghods, Morteza Mohammadzaheri

Abstract:

The modern busy world runs on new embedded technologies based on computers and software; meanwhile, some people neglect their health and regular medical check-ups. Some postpone check-ups for lack of time and convenience, while others skip these regular evaluations and medical examinations because of large medical bills and hospital expenses. Engineers and medical experts have together developed a new telemonitoring device capable of monitoring, checking, and evaluating the health status of the human body remotely through the internet, for the needs of all kinds of people. The remote health monitoring device is a microcontroller-based embedded unit. Various types of sensors in this device are connected to the human body, and with the help of an Arduino UNO board, the required analogue data are collected from the sensors. The microcontroller on the Arduino board converts the collected analogue data into digital data, transfers the information to the cloud for storage, and instantly displays the processed digital data on the LCD attached to the device. By accessing the cloud storage with a username and password, the person's health care team/doctors and other health staff can retrieve this data for the assessment and follow-up of the patient. Family members/guardians can also use this data for awareness of the patient's current health status. Moreover, the system is connected to a Global Positioning System (GPS) module, so in emergencies, the care team can locate the patient or the person carrying this device. The setup continuously evaluates and transfers the data to the cloud, and the user can prefix a normal value range for the evaluation; for example, normal blood pressure is universally prefixed as 80/120 mmHg. Similarly, the RHMS allows fixing the ranges of values regarded as normal for other coefficients. This IoT-based miniature system, (11×10×10) cm³ and only 500 g in weight, consumes just 10 mW. This smart monitoring system can be manufactured for 100 GBP and can be used not only in health systems but also in numerous other applications, including the aerospace and transportation sectors.
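The prefixed-normal-range evaluation described above can be sketched in a few lines. The following is an illustrative Python sketch, not the device firmware; the parameter names and range limits are assumptions chosen for illustration:

```python
# Illustrative sketch of the threshold-alert logic described in the abstract:
# each reading is compared against a user-prefixed normal range, and any
# out-of-range value produces an alert entry. Names and limits are hypothetical.

NORMAL_RANGES = {
    "systolic_bp_mmHg": (90, 120),   # illustrative limits only
    "diastolic_bp_mmHg": (60, 80),
    "heart_rate_bpm": (60, 100),
    "spo2_percent": (95, 100),
}

def check_vitals(readings):
    """Return (parameter, value, range) tuples for out-of-range vitals."""
    alerts = []
    for name, value in readings.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append((name, value, (low, high)))
    return alerts

sample = {"systolic_bp_mmHg": 135, "diastolic_bp_mmHg": 78,
          "heart_rate_bpm": 88, "spo2_percent": 97}
print(check_vitals(sample))  # flags only the elevated systolic reading
```

In the real device, an alert entry like this would be pushed to the cloud alongside the GPS position.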

Keywords: embedded technology, telemonitoring system, microcontroller, Arduino UNO, cloud storage, global positioning system, remote health monitoring system, alert system

Procedia PDF Downloads 76
5535 Experiments to Study the Vapor Bubble Dynamics in Nucleate Pool Boiling

Authors: Parul Goel, Jyeshtharaj B. Joshi, Arun K. Nayak

Abstract:

Nucleate boiling is characterized by the nucleation, growth, and departure of tiny individual vapor bubbles that originate in cavities or imperfections present in the heating surface. It finds a wide range of applications, e.g., in heat exchangers and steam generators, core cooling in power reactors and rockets, and cooling of electronic circuits, owing to its highly efficient transfer of large heat fluxes over small temperature differences. Hence, it is important to be able to predict the rate of heat transfer and the safety-limit heat flux (the critical heat flux; fluxes higher than this can damage the heating surface) applicable to any given system. A large number of experimental and analytical works exist in the literature, based on the idea that knowledge of the bubble dynamics on the microscopic scale can lead to an understanding of the full picture of boiling heat transfer. However, the existing data in the literature are scattered over various sets of conditions and are often in disagreement with each other. The correlations obtained from such data are also limited to the ranges of conditions for which they were established, and no single correlation is applicable over a wide range of parameters. More recently, a number of researchers have been trying to remove empiricism from heat transfer models to arrive at more phenomenological models using extensive numerical simulations; these models require state-of-the-art experimental data for a wide range of conditions, first as input and later for validation. With this idea in mind, experiments with sub-cooled and saturated demineralized water were carried out under atmospheric pressure to study the bubble dynamics of nucleate pool boiling: growth rates, departure sizes, and frequencies. A number of heating elements were used to study the dependence of vapor bubble dynamics on the heater surface finish and heater geometry, along with experimental conditions such as the degree of sub-cooling, superheat, and heat flux. The data obtained are compared with existing data and correlations in the literature to generate an exhaustive database for pool boiling conditions.
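As an illustrative worked example of the departure-size parameter studied here, the classical Fritz correlation relates the bubble departure diameter to the contact angle and fluid properties. This is a textbook sketch, not the authors' analysis, and the contact angle used below is an assumed value:

```python
import math

# Fritz correlation for bubble departure diameter in nucleate pool boiling:
#   D_d = 0.0208 * theta_deg * sqrt(sigma / (g * (rho_l - rho_v)))
# evaluated for saturated water at 1 atm. The contact angle is assumed.

def fritz_departure_diameter(theta_deg, sigma, rho_l, rho_v, g=9.81):
    """Bubble departure diameter in metres."""
    return 0.0208 * theta_deg * math.sqrt(sigma / (g * (rho_l - rho_v)))

# Properties of saturated water at 100 degC, atmospheric pressure
sigma = 0.0589   # surface tension, N/m
rho_l = 958.0    # liquid density, kg/m^3
rho_v = 0.598    # vapor density, kg/m^3

d = fritz_departure_diameter(theta_deg=45.0, sigma=sigma,
                             rho_l=rho_l, rho_v=rho_v)
print(f"departure diameter ~ {d * 1000:.2f} mm")
```

The result is on the order of a couple of millimetres, the scale at which such departure diameters are typically measured in pool boiling experiments.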

Keywords: experiment, boiling, bubbles, bubble dynamics, pool boiling

Procedia PDF Downloads 294
5534 Estimation of Carbon Losses in Rice: Wheat Cropping System of Punjab, Pakistan

Authors: Saeed Qaisrani

Abstract:

The study was conducted to observe carbon and nutrient losses from the burning of rice residues in the rice-wheat cropping system. After the rice harvest, the experiment was laid out in a randomized complete block design (RCBD) with four replications and a net plot size of 10 m x 20 m. Rice stubbles were managed by two methods, incorporation and burning of the rice residues. Soil samples were taken to a depth of 30 cm before sowing and after harvesting of wheat. Wheat was sown after the rice harvest using three practices, conventional tillage, minimum tillage, and zero tillage, to identify the best tillage practice. Laboratory and field experiments on wheat assessed the best tillage practice and residue management method, along with an estimation of carbon losses. Data on the following parameters were recorded to check wheat quality and ensure food security in the region: establishment count, plant height, spike length, number of grains per spike, biological yield, fat content, carbohydrate content, protein content, and harvest index. Soil physico-chemical analyses, i.e., pH, electrical conductivity, organic matter, nitrogen, phosphorus, potassium, and carbon, were done in a soil fertility laboratory. Substantial effects were found on the growth, yield, and related parameters of the wheat crop. The collected data were examined statistically, with an economic analysis to estimate the cost-benefit ratio of the different tillage techniques and residue management practices. The results showed that zero tillage has positive impacts on the growth, yield, and quality of wheat and, moreover, is a cost-effective methodology. Similarly, incorporation is a suitable and beneficial method for the soil, as it provides more nutrients and reduces the need for fertilizers. Burning of rice stubbles has negative impacts, including air pollution, nutrient loss, loss of soil microbes, and carbon loss. Zero tillage technology is recommended to reduce carbon losses and support food security in Pakistan.

Keywords: agricultural agronomy, food security, carbon sequestration, rice-wheat cropping system

Procedia PDF Downloads 270
5533 Importance of Human Resources Training in an Information Age

Authors: A. Serap Fırat

Abstract:

The aim of this study is to display conceptually the relationship and interaction between human resources training and the information age. A fast transition from an industrial society to an information society has occurred, and organizations have been seeking ways to cope with this change. Human resources policy and human capital with enhanced competence have a direct impact on work performance; therefore, this paper deals with the increased importance of human resource management owing to the fact that it nurtures human capital. Literature research and scanning are used as the method of this study. Both local and foreign literature and expert views are employed, as far as possible, in constructing its theoretical framework.

Keywords: human resources, information age, education, organization, occupation

Procedia PDF Downloads 360
5532 Application of Electronic Nose Systems in Medical and Food Industries

Authors: Khaldon Lweesy, Feryal Alskafi, Rabaa Hammad, Shaker Khanfar, Yara Alsukhni

Abstract:

Electronic noses are devices designed to emulate the human sense of smell by characterizing and differentiating odor profiles. In this study, we built a low-cost e-nose using an array module containing four different types of metal oxide semiconductor gas sensors. We used this system to create a profile for a meat specimen over three days. Then, using pattern recognition software, we correlated the odor of the specimen to its age. It is a simple and fast detection method that is both inexpensive and non-destructive. The results support the use of this technology in food control management.
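The day-wise odor classification step can be illustrated with a minimal nearest-profile classifier over the four-sensor array. The sensor values and day profiles below are hypothetical, not the study's data:

```python
import math

# Illustrative sketch (not the authors' software): classify a 4-sensor MOS
# gas-sensor reading by nearest Euclidean distance to day-wise reference
# profiles built during the 3-day aging experiment. All values are made up.

profiles = {            # hypothetical mean sensor responses per day of aging
    "day1": [0.20, 0.15, 0.10, 0.05],
    "day2": [0.45, 0.40, 0.30, 0.25],
    "day3": [0.80, 0.75, 0.65, 0.60],
}

def classify(reading):
    """Return the day label whose reference profile is closest to the reading."""
    return min(profiles, key=lambda day: math.dist(reading, profiles[day]))

print(classify([0.48, 0.38, 0.33, 0.24]))  # → day2
```

A production system would replace the fixed profiles with ones learned from repeated measurements, but the matching principle is the same.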

Keywords: e-nose, low cost, odor detection, food safety

Procedia PDF Downloads 125
5531 Effect of Water Addition on Catalytic Activity for CO2 Purification from Oxyfuel Combustion

Authors: Joudia Akil, Stephane Siffert, Laurence Pirault-Roy, Renaud Cousin, Christophe Poupin

Abstract:

Oxyfuel combustion is a promising method that yields a CO2-rich stream with water vapor (~10%) and unburned components such as CO and NO, which must be removed before the CO2 is used. Our objective is therefore the final treatment of CO and NO by catalysis. Three-way catalysts are well-developed materials for the simultaneous conversion of NO, CO, and hydrocarbons. Pt and/or Rh ensure a quasi-complete removal of NOx, CO, and HC, and there is also growing interest in partly replacing Pt with less expensive Pd. The use of alumina and ceria as supports ensures, respectively, the stabilization of such species in an active state and the discharge or storage of oxygen to control the oxidation of CO and HC and the reduction of NOx. In this work, we compare different metals (Pd, Rh, and Pt) supported on Al2O3 and CeO2 for CO2 purification from oxyfuel combustion. The catalyst must reduce NO by CO in an oxidizing environment, in the presence of a CO2-rich stream, and must be resistant to water. Al2O3 and CeO2 were used as the support materials. Catalysts of 1 wt% M/Support, where M = Pd, Rh, or Pt, were obtained by wet impregnation of the supports with precursors of palladium [Pd(acac)2], rhodium [Rh(NO3)3], and platinum [Pt(NO2)2(NO3)2]. The materials were characterized by BET surface area, H2 chemisorption, and TEM. Catalytic activity was evaluated in CO2 purification carried out in a fixed-bed flow reactor containing 150 mg of catalyst at atmospheric pressure. The reactant gas flow is composed of 20% CO2, 10% O2, 0.5% CO, 0.02% NO, and 8.2% H2O (He as eluent gas) with a total flow of 200 mL.min−1, at the same GHSV (2.24×10⁴ h⁻¹). The catalytic performances of the samples were investigated with and without water; total oxidation of CO occurred over the different materials. This study evidenced an important effect of the nature of the metals and supports, and of the presence or absence of H2O, during the reduction of NO by CO under oxyfuel combustion conditions. For Rh-based catalysts, the addition of water has a very positive influence, especially for Rh on CeO2. Pt-based catalysts keep good activity despite the addition of water on both supports studied. For NO reduction, the addition of water acts as a poison for Pd catalysts. The interesting results of the Rh-based catalysts with water can be explained by the production of hydrogen through the water-gas shift reaction: the hydrogen produced acts as a more effective reductant than CO for NO removal. Furthermore, in TWCs, Rh is the main component responsible for NOx reduction due to its especially high activity for NO dissociation, and cerium oxide is a promoter of the WGSR.
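As a quick consistency sketch (not from the paper), the quoted total flow and GHSV together imply the catalyst bed volume used, since GHSV is the hourly volumetric gas flow divided by the bed volume:

```python
# Back-of-envelope check: GHSV [h^-1] = volumetric flow per hour / bed volume,
# so the stated 200 mL/min flow and GHSV of 2.24e4 h^-1 imply the bed volume.
# This is an inferred quantity, not a value reported in the abstract.

total_flow_ml_min = 200.0
ghsv_per_h = 2.24e4

flow_ml_h = total_flow_ml_min * 60.0       # 12,000 mL of gas per hour
bed_volume_ml = flow_ml_h / ghsv_per_h     # implied catalyst bed volume

print(f"implied bed volume ~ {bed_volume_ml:.2f} mL")  # ~0.54 mL
```

A bed volume just over half a millilitre is consistent with the stated 150 mg catalyst loading in a lab-scale fixed-bed reactor.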

Keywords: carbon dioxide, environmental chemistry, heterogeneous catalysis

Procedia PDF Downloads 176
5530 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual effort, and difficulty in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and deliver faster without having to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while repetitive work and manual effort are reduced. Implementing scalable CI/CD using cloud services such as ECS (Elastic Container Service, for container management), AWS Fargate, ECR (to store Docker images with all dependencies), serverless computing (serverless virtual machines), Cloud Log (for monitoring errors and logs), security groups (for inside/outside access to the application), Docker containerization (Docker-based images and container techniques), Jenkins (a CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments, accommodate dynamic workloads, and increase efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure, as the pipeline scales on demand. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, alleviating concerns about scalability, maintenance costs, and resource needs. Building scalable automation testing on cloud services (ECR, ECS Fargate, Docker, EFS, serverless computing) helps organizations run more than 500 test cases in parallel, aiding the detection of race conditions and performance issues and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands and allowing teams to scale resources up or down as needed. It optimizes costs by paying only for the resources that are used, and it increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
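The fan-out idea behind running hundreds of test cases in parallel can be sketched as follows. This minimal Python example uses threads in place of the containers the abstract deploys on ECS/Fargate, and the shard contents are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of scalable automation testing: split a 500-case suite
# into shards and run the shards concurrently, then merge the results. In the
# pipeline described above, each shard would run in its own cloud container;
# here a thread pool stands in for that infrastructure.

test_cases = [f"test_{i}" for i in range(500)]   # placeholder test names

def run_shard(shard):
    """Run one shard; a real runner would invoke the test framework here."""
    return {name: "passed" for name in shard}

def make_shards(cases, n):
    """Round-robin split of the case list into n shards."""
    return [cases[i::n] for i in range(n)]

results = {}
with ThreadPoolExecutor(max_workers=20) as pool:
    for outcome in pool.map(run_shard, make_shards(test_cases, 20)):
        results.update(outcome)

print(len(results))  # 500
```

Because shards are independent, adding workers (or containers) shortens wall-clock time roughly in proportion, which is what makes the 500-case parallel runs in the abstract feasible.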

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 28
5529 Assessing Social Sustainability for Biofuels Supply Chains: The Case of Jet Biofuel in Brazil

Authors: Z. Wang, F. Pashaei Kamali, J. A. Posada Duque, P. Osseweijer

Abstract:

Globally, the aviation sector is seeking sustainable solutions to comply with the pressure to reduce greenhouse gas emissions. Jet fuels derived from biomass are generally perceived as a sustainable alternative to their fossil counterparts. However, the establishment of jet biofuel supply chains will have impacts on the environment, the economy, and society. While existing studies have predominantly evaluated the environmental impacts and techno-economic feasibility of jet biofuels, very few have taken the social or socioeconomic aspects into consideration. Therefore, this study aims to provide a focused evaluation of social sustainability for aviation biofuels from a supply chain perspective. Three potential jet biofuel supply chains based on different feedstocks, i.e., sugarcane, eucalyptus, and macauba, were analyzed in the context of Brazil. The assessment of social sustainability was performed with a process-based approach combined with input-output analysis. Across the supply chains, a set of social sustainability issues including employment, working conditions (occupational accidents and wage level), labour rights, education, equity, social development (GDP and trade balance), and food security was evaluated in a (semi)quantitative manner. The selection of these social issues is based on two criteria: (1) the issues are highly relevant and important to jet biofuel production; and (2) methodologies are available for assessing them. The results show that the three jet biofuel supply chains lead to differentiated levels of social effects. The sugarcane-based supply chain creates the highest number of jobs, whereas the biggest contributor to GDP turns out to be the macauba-based supply chain. In comparison, the eucalyptus-based supply chain stands out regarding working conditions. It is also worth noting that a jet biofuel supply chain with a high level of social benefits can also entail a high level of social concerns (such as occupational accidents, violation of labour rights, and trade imbalance). Further research is suggested to investigate the possible interactions between different social issues. In addition, the exploration of a wider range of social effects is needed to expand the comprehension of social sustainability for biofuel supply chains.

Keywords: biobased supply chain, jet biofuel, social assessment, social sustainability, socio-economic impacts

Procedia PDF Downloads 259
5528 Review of Health Disparities in Migrants Attending the Emergency Department with Acute Mental Health Presentations

Authors: Jacqueline Eleonora Ek, Michael Spiteri, Chris Giordimaina, Pierre Agius

Abstract:

Background: Malta is known as a key frontline country with regard to irregular immigration from Africa to Europe. Every year the island experiences an influx of migrants as boat movement across the Mediterranean continues to be a humanitarian challenge. Irregular immigration and applying for asylum are both lengthy and mentally demanding processes, and those undergoing them are often faced with multiple challenges, which can adversely affect their mental health. Between January and August 2020, Malta disembarked 2,162 people rescued at sea, 463 of them between July and August. Given the small size of the Maltese islands, this influx places a disproportionately large burden on the country, creating a backlog in the processing of asylum applications and resulting in increased periods of detention. These delays reverberate through multiple management pathways, resulting in prolonged periods of detention and challenging access to health services. Objectives: To better understand the spatial dimensions of this humanitarian crisis, this study aims to assess disparities in the acute medical management of migrants presenting to the emergency department (ED) with acute mental health presentations, as compared with local and non-local residents. Method: In this retrospective study, 17,795 consecutive ED attendances were reviewed to identify acute mental health presentations. These were further evaluated to assess discrepancies in transportation routes to hospital, the nature of the presenting complaint, effects of language barriers, use of CT brain, treatment given at the ED, availability of psychiatric reviews, and final admission/discharge plans. Results: Of the ED attendances, 92.3% were local residents and 7.7% were non-locals. Of the non-locals, 13.8% were migrants and 86.2% were other non-locals. Acute mental health presentations were seen in 1% of local residents; this increased to 20.6% in migrants. 56.4% of migrants attended with deliberate self-harm; this figure was lower in local residents, at 28.9%. Contrastingly, in local residents the most common presenting complaint was suicidal thoughts/low mood (37.3%); the incidence was similar in migrants, at 33.3%. The main differences included 12.8% of migrants presenting with refusal of oral intake, versus only 0.6% of local residents, and 7.7% of migrants presenting with a reduced level of consciousness, a presentation seen in no local residents. Physicians documented a language barrier in 74.4% of migrants, and 25.6% were noted to be completely uncommunicative. Further investigations included a CT scan in 12% of local residents and 35.9% of migrants. The most common treatment administered to migrants was supportive fluids (15.4%); the most common in local residents was benzodiazepines (15.1%). Voluntary psychiatric admissions were seen in 33.3% of migrants and 24.7% of locals; involuntary admissions in 23% of migrants and 13.3% of locals. Conclusion: The results showed multiple disparities in health management. A meeting was held between the entities responsible for migrant health in Malta, including the emergency department, primary health care, migrant detention services, and the Malta Red Cross. National quality-improvement initiatives are currently underway to form new pathways to improve patient-centred care, including an interpreter unit, centralized handover sheets, and a dedicated migrant health service.

Keywords: emergency department, communication, health, migration

Procedia PDF Downloads 101
5527 Results of Three-Year Operation of 220kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid

Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov

Abstract:

Modern city electrical grids are forced to increase their density owing to the growing number of customers and requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of equipment: the breaking capacity of circuit breakers and the thermal and dynamic current withstand capabilities of disconnectors, cables, and transformers. A superconducting fault current limiter (SFCL) is a modern solution designed to deal with increasing fault current levels in power grids. The key feature of this device is its near-instant (less than 2 ms) limitation of the current level, owing to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. It became the first SFCL in the Russian energy system and is currently the most powerful SFCL in the world. The modern SFCL uses second-generation high-temperature superconductor (2G HTS). Despite its name, HTS still requires the low temperature of liquid nitrogen for operation; as a result, the Moscow SFCL is built with a cryogenic system to cool the superconductor. The cryogenic system consists of three cryostats that contain the superconductor parts and are filled with liquid nitrogen (three phases), three cryocoolers, one water chiller, three cryopumps, and pressure builders, all managed by an automatic control system. The SFCL has been operating continuously on the city grid for over three years. During that period, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others (such as a cryogenic system power outage). All these faults were eliminated without an SFCL shutdown, thanks to the specially designed cryogenic system backups and the quick responses of the grid operator utilities and the SuperOx crew. The paper describes in detail the results of SFCL operation and cryogenic system maintenance, and the measures taken to solve and prevent similar faults in the future.

Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics

Procedia PDF Downloads 75
5526 Model Driven Architecture Methodologies: A Review

Authors: Arslan Murtaza

Abstract:

Model Driven Architecture (MDA) is a software development technique presented by the OMG (Object Management Group) in which different models are proposed and then converted into code. The main idea is to specify the system using a PIM (Platform Independent Model), transform it into a PSM (Platform Specific Model), and then convert that into code. This review paper describes some challenges and issues faced in MDA, the types and transformations of models (e.g., CIM, PIM and PSM), and an evaluation of MDA-based methodologies.

Keywords: OMG, model driven architecture (MDA), computation independent model (CIM), platform independent model (PIM), platform specific model (PSM), MDA-based methodologies

Procedia PDF Downloads 446
5525 Flood Early Warning and Management System

Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare

Abstract:

The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an Early Warning System for Flood Prediction (EWS-FP) and an efficient Flood Management System for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools and methods, viz. High-Performance Computing (HPC), Remote Sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, it is always a tug of war between better forecast lead time and the optimal resolution at which the simulations are to be run. High-performance computing provides a good computational means to overcome this issue for the construction of national- or basin-level flash flood warning systems that offer high-resolution local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimal resolutions. In this study, a free and open-source, HPC-based 2-D hydrodynamic model, with the capability to simulate rainfall runoff, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and lead time suitable for near-real-time flood forecasting. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with different access rights depending on the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
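The scaling figures above (simulation time dropping from 8 hrs to 3 hrs when CPU nodes increase from 45 to 135) can be turned into the standard speedup and parallel-efficiency measures. A minimal sketch follows; the timings are those quoted above, while the function names are illustrative:

```python
def speedup(t_base, t_scaled):
    """Speedup: baseline run time divided by scaled run time."""
    return t_base / t_scaled

def parallel_efficiency(t_base, n_base, t_scaled, n_scaled):
    """Speedup achieved relative to the factor by which nodes increased."""
    return speedup(t_base, t_scaled) / (n_scaled / n_base)

# Timings reported for the Mahanadi Delta test case:
s = speedup(8.0, 3.0)                        # ~2.67x faster
e = parallel_efficiency(8.0, 45, 3.0, 135)   # ~0.89 efficiency on 3x nodes
print(round(s, 2), round(e, 2))
```

An efficiency near 0.89 on triple the nodes is consistent with the "good scalability" the authors report, since perfect (linear) scaling would give exactly 1.0.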

Keywords: flood, modeling, HPC, FOSS

Procedia PDF Downloads 81
5524 Placement of Inflow Control Valve for Horizontal Oil Well

Authors: S. Thanabanjerdsin, F. Srisuriyachai, J. Chewaroungroj

Abstract:

Drilling a horizontal well is one of the most cost-effective methods to exploit a reservoir, as it increases the exposure area between the well and the formation. Together with horizontal well technology, intelligent completion is often co-utilized to increase petroleum production by monitoring and controlling downhole production. Combining both technologies creates an opportunity to mitigate the water cresting phenomenon, a detrimental problem that not only lowers oil recovery but also causes environmental problems due to water disposal. Flow of reservoir fluid results from the difference between reservoir and wellbore pressure. In a horizontal well, reservoir fluid around the heel location enters the wellbore at a higher rate than at the toe location. As a consequence, the Oil-Water Contact (OWC) at the heel side moves upward relatively faster than at the toe side. This causes the well to encounter an early water encroachment problem. Installation of Inflow Control Valves (ICVs) in particular sections of a horizontal well involves several parameters, such as the number of ICVs, the water cut constraint of each valve, and the length of each section. This study is mainly focused on optimizing the ICV configuration to minimize water production and, at the same time, enhance oil production. A reservoir model consisting of a high aspect ratio of oil-bearing zone to underlying aquifer is drilled with a horizontal well and completed with variations of ICV segments. Optimization of the horizontal well configuration is first performed by varying the number of ICVs, the segment length, and the individual preset water cut for each segment. Simulation results show that installing ICVs can increase the oil recovery factor by up to 5% of the Original Oil In Place (OOIP) and can reduce produced water, depending on the ICV segment length as well as the ICV parameters. For equally partitioned ICV segments, a larger number of segments results in better oil recovery. However, exceeding 10 segments may not give significant additional recovery.
In the first production period, the deformation of the OWC strongly depends on the number of segments along the well; a higher number of segments results in smoother deformation of the OWC. After water breakthrough at the heel segment, the second production period begins, in which the deformation of the OWC is principally dominated by the ICV parameters. In certain situations where the OWC is unstable, such as a high production rate, high-viscosity fluid above the aquifer, or a strong aquifer, the second production period may give a wide enough window for the ICV parameters to play their role.

Keywords: horizontal well, water cresting, inflow control valve, reservoir simulation

Procedia PDF Downloads 403
5523 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng

Abstract:

The nature of tires makes them extremely challenging to recycle: the chemically cross-linked polymer they contain is neither fusible nor soluble and, consequently, cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water, and provides ideal breeding grounds for disease-carrying vermin. Thermal decomposition of tires by pyrolysis produces char, gases, and oil. The composition of oils derived from waste tires shares common properties with commercial diesel fuel. The problem with the light oil derived from pyrolysis of waste tires is its high sulfur content (> 1.0 wt.%), so it emits harmful sulfur oxide (SOx) gases to the atmosphere when combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species from liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for efficient removal of complex and non-complex sulfur species from TPO. The study focuses on optimizing the cleaning process (removal of impurities and asphaltenes) by varying the process parameters: temperature, stirring speed, acid/oil ratio, and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effect of temperature, pressure, and time will be determined for vacuum distillation of both raw TPO and the acid-treated oil for comparison purposes.
Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized, dominantly to the corresponding sulfoxides and sulfones, via a photo-catalyzed system using TiO2 as a catalyst and hydrogen peroxide as an oxidizing agent; finally, acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will then be used to adsorb traces of sulfurous compounds remaining after the photocatalytic desulphurization step. This two-stage desulphurization scheme is expected to give high desulphurization efficiency with reasonable oil recovery.

Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis

Procedia PDF Downloads 261
5522 Multifunctional Epoxy/Carbon Laminates Containing Carbon Nanotubes-Confined Paraffin for Thermal Energy Storage

Authors: Giulia Fredi, Andrea Dorigato, Luca Fambri, Alessandro Pegoretti

Abstract:

Thermal energy storage (TES) is the storage of heat for later use, thus filling the gap between energy request and supply. The most widely used materials for TES are the organic solid-liquid phase change materials (PCMs), such as paraffin. These materials store/release a high amount of latent heat thanks to their high specific melting enthalpy, operate in a narrow temperature range and have a tunable working temperature. However, they suffer from a low thermal conductivity and need to be confined to prevent leakage. These two issues can be tackled by confining PCMs with carbon nanotubes (CNTs). TES applications include the buildings industry, solar thermal energy collection and thermal management of electronics. In most cases, TES systems are an additional component to be added to the main structure, but if weight and volume savings are key issues, it would be advantageous to embed the TES functionality directly in the structure. Such multifunctional materials could be employed in the automotive industry, where the diffusion of lightweight structures could complicate the thermal management of the cockpit environment or of other temperature sensitive components. This work aims to produce epoxy/carbon structural laminates containing CNT-stabilized paraffin. CNTs were added to molten paraffin in a fraction of 10 wt%, as this was the minimum amount at which no leakage was detected above the melting temperature (45°C). The paraffin/CNT blend was cryogenically milled to obtain particles with an average size of 50 µm. They were added in various percentages (20, 30 and 40 wt%) to an epoxy/hardener formulation, which was used as a matrix to produce laminates through a wet layup technique, by stacking five plies of a plain carbon fiber fabric. The samples were characterized microstructurally, thermally and mechanically. 
Differential scanning calorimetry (DSC) tests showed that the paraffin kept its ability to melt and crystallize in the laminates, and the melting enthalpy was almost proportional to the paraffin weight fraction. These thermal properties were retained after fifty heating/cooling cycles. Laser flash analysis showed that the through-thickness thermal conductivity increased with the PCM content, due to the presence of CNTs. The ability of the developed laminates to contribute to thermal management was also assessed by monitoring their cooling rates through a thermal camera. Three-point bending tests showed that the flexural modulus was only slightly impaired by the presence of the paraffin/CNT particles, while a more appreciable decrease of the stress and strain at break and of the interlaminar shear strength was detected. Optical and scanning electron microscope images revealed that this could be attributed to the preferential location of the PCM in the interlaminar region. These results demonstrate the feasibility of multifunctional structural TES composites and highlight that the PCM size and distribution affect the mechanical properties. In this perspective, this group is working on the encapsulation of paraffin in a sol-gel derived organosilica shell. Submicron spheres have been produced, and current activity focuses on optimizing the synthesis parameters to increase the emulsion efficiency.
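The observation that the melting enthalpy is almost proportional to the paraffin weight fraction can be sketched as a simple rule-of-mixtures estimate. In the sketch below, the 200 J/g latent heat of pure paraffin is an assumed, typical value rather than a figure from this work, and the function name is illustrative:

```python
def composite_melting_enthalpy(w_pcm_blend, w_cnt_in_blend, dH_paraffin):
    """Expected melting enthalpy (J/g of composite) if the paraffin keeps
    its full latent heat: rule of mixtures on the paraffin mass fraction."""
    w_paraffin = w_pcm_blend * (1.0 - w_cnt_in_blend)
    return w_paraffin * dH_paraffin

# 40 wt% paraffin/CNT particles (10 wt% CNT within the blend),
# assumed paraffin latent heat ~200 J/g:
print(composite_melting_enthalpy(0.40, 0.10, 200.0))  # 72.0
```

Comparing such an estimate against the measured DSC enthalpy is one way to check whether any latent heat is lost during processing.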

Keywords: carbon fibers, carbon nanotubes, lightweight materials, multifunctional composites, thermal energy storage

Procedia PDF Downloads 152
5521 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine is a newly designed Diesel internal combustion engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports, at a CA value of 280°, for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure, and streamline contours were generated in important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. To calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were calculated as the mean over the mesh elements. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine.
Acknowledgement: This work has been realized in cooperation with the Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development.
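The swirl-ratio calculation described above (tangential velocity, then angular velocity averaged over the charge) can be sketched as follows. The mass-weighted averaging and the normalization by crankshaft speed are assumptions made for illustration, since the abstract does not give the exact formula, and the element data are hypothetical:

```python
import math

def swirl_ratio(elements, engine_rpm):
    """Swirl ratio: mean angular velocity of the charge divided by the
    crankshaft angular velocity. Each element is (mass, v_tangential, radius)."""
    total_mass = sum(m for m, v, r in elements)
    # element angular velocity: omega = v_t / r  [rad/s]
    omega_charge = sum(m * (v / r) for m, v, r in elements) / total_mass
    omega_engine = engine_rpm * 2.0 * math.pi / 60.0  # rpm -> rad/s
    return omega_charge / omega_engine

# Hypothetical charge elements: (mass [kg], tangential velocity [m/s], radius [m])
cells = [(2.0e-5, 15.0, 0.030), (1.5e-5, 22.0, 0.040)]
print(round(swirl_ratio(cells, 3800), 3))
```

In a CFD post-processing workflow, the element list would be extracted from the solver's mesh data rather than written by hand.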

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 192
5520 Chaotic Dynamics of Cost Overruns in Oil and Gas Megaprojects: A Review

Authors: O. J. Olaniran, P. E. D. Love, D. J. Edwards, O. Olatunji, J. Matthews

Abstract:

Cost overruns are a persistent problem in oil and gas megaprojects. Whilst the extant literature is filled with studies on incidents and causes of cost overruns, underlying theories to explain their emergence in oil and gas megaprojects are few. Yet, a way to contain the syndrome of cost overruns is to understand the bases of ‘how and why’ they occur. Such knowledge will also help to develop pragmatic techniques for better overall management of oil and gas megaprojects. The aim of this paper is to explain the development of cost overruns in hydrocarbon megaprojects through the perspective of chaos theory. The underlying principles of chaos theory and its implications for cost overruns are examined, and practical recommendations are proposed. In addition, directions for future research in this fertile area are provided.

Keywords: chaos theory, oil and gas, cost overruns, megaprojects

Procedia PDF Downloads 550
5519 Learning Curve Effect on Materials Procurement Schedule of Multiple Sister Ships

Authors: Vijaya Dixit Aasheesh Dixit

Abstract:

The shipbuilding industry operates in an Engineer Procure Construct (EPC) context. The product mix of a shipyard comprises various types of ships, like bulk carriers, tankers, barges, coast guard vessels, submarines, etc. Each order is unique, based on the type of ship and customized requirements, which are engineered into the product right from the design stage. Thus, to execute every new project, a shipyard needs to upgrade its production expertise. As a result, over the long run, holistic learning occurs across different types of projects, which contributes to the knowledge base of the shipyard. Simultaneously, in the short term, during execution of a project comprising multiple sister ships, repetition of similar tasks leads to learning at the activity level. This research aims to capture both learnings of a shipyard and incorporate the learning curve effect in project scheduling and materials procurement to improve project performance. Extant literature supports the existence of such learnings in an organization. In shipbuilding, there are sequences of similar activities that are expected to exhibit learning curve behavior, for example, the nearly identical structural sub-blocks which are successively fabricated, erected, and outfitted with piping and electrical systems. A learning curve representation can model not only a decrease in the mean completion time of an activity but also a decrease in the uncertainty of activity duration. Sister ships have similar material requirements, and the same supplier base supplies materials for all the sister ships within a project. On one hand, this provides an opportunity to reduce transportation cost by batching the order quantities of multiple ships. On the other hand, it increases the inventory holding cost at the shipyard and the risk of obsolescence. Further, due to the learning curve effect, the production schedule of each subsequent ship gets compressed.
Thus, the material requirement schedule of every subsequent ship differs from that of the previous ship. As more and more ships get constructed, the compressed production schedules increase the possibility of batching the orders of sister ships. This work aims at integrating materials management with project scheduling of long-duration projects for the manufacture of multiple sister ships. It incorporates the learning curve effect on progressively compressing material requirement schedules and addresses the above trade-off between transportation cost and inventory holding and shortage costs while satisfying the budget constraints of the various stages of the project. The activity durations and lead times of items are not crisp and are available in the form of probabilistic distributions. A Stochastic Mixed Integer Programming (SMIP) model is formulated and solved using an evolutionary algorithm. Its output provides ordering dates of items and the degree of order batching for all types of items. Sensitivity analysis determines the threshold number of sister ships required in a project to leverage the advantage of the learning curve effect in materials management decisions. This analysis will help materials managers gain insights about when, and to what degree, it is beneficial to treat a multiple-ship project as an integrated one by batching the order quantities, and when, and to what degree, to practice distinctive procurement for each individual ship.
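A common way to model the activity-level learning described above is the log-linear (Wright) learning curve, under which every doubling of repetitions multiplies the activity duration by a fixed learning rate. The sketch below uses an assumed 90% learning rate and an assumed 100-hour first-unit duration purely for illustration; the abstract does not specify which learning curve form the SMIP model uses:

```python
import math

def wright_duration(t_first, n, learning_rate):
    """Duration of the n-th repetition under a log-linear learning curve."""
    b = math.log(learning_rate, 2)  # negative exponent for rates < 1
    return t_first * n ** b

# Fabricating the same structural sub-block on four sister ships:
times = [wright_duration(100.0, n, 0.9) for n in range(1, 5)]
# times[0] -> 100 h, times[1] -> 90 h, times[3] -> 81 h (two doublings)
```

Feeding such compressed durations into the schedule is what shifts each subsequent ship's material requirement dates earlier, creating the order-batching opportunity the abstract describes.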

Keywords: learning curve, materials management, shipbuilding, sister ships

Procedia PDF Downloads 492
5518 Tapping Traditional Environmental Knowledge: Lessons for Disaster Policy Formulation in India

Authors: Aparna Sengupta

Abstract:

The paper seeks to find answers to the question of why India’s disaster management policies have been unable to deliver the desired results. Do the shortcomings lie in policy formulation, effective policy implementation, or timely prevention mechanisms? Or is there a fundamental issue of policy formulation that only sparsely takes the cultural specificities and uniqueness, technological know-how, and educational, religious, and attitudinal capacities of the target population into consideration? India was slow in legislating disaster policies, but beyond that, the reason for the limited success of disaster policies seems to be the gap between policy and the people. We not only keep hearing about the failure of governmental efforts but also about how local communities deal far more efficaciously with disasters by utilizing their traditional knowledge. The 2004 Indian Ocean tsunami, which killed approximately 250,000 people, could not kill the tribal communities who saved themselves thanks to their age-old traditional knowledge. This large-scale disaster, considered a landmark event in the history of disasters in the twenty-first century, can be credited with confirming the importance of traditional environmental knowledge in managing disasters. This brings forth the importance of cultural and traditional know-how in dealing with natural disasters, and one is forced to ask why traditional environmental knowledge (TEK) should not be taken into consideration while formulating India’s disaster resilience policies. Though at the international level many scholars have explored the connectedness of disaster to cultural dimensions, and several have examined how culture acts as a stimulus in perceiving disasters and their management (Clifford, 1956; Mcluckie, 1970; Koentjaraningrat, 1985; Peacock, 1997; Elliot et al., 2006; Aruntoi, 2008; Kulatunga, 2010), in the Indian context this field of inquiry, i.e.
linking disaster policies with tradition and generational understanding, has seldom received the attention of the government, decision-making authorities, disaster managers, or even academia. The present study attempts to fill this gap in research and scholarship by presenting a historical analysis of disasters and their cognition by cultural communities in India. The paper seeks to interlink the cultural comprehension of Indian tribal communities with scientific technology towards more constructive disaster policies in India.

Keywords: culture, disasters, local communities, traditional knowledge

Procedia PDF Downloads 96
5517 Navigating through Organizational Change: TAM-Based Manual for Digital Skills and Safety Transitions

Authors: Margarida Porfírio Tomás, Paula Pereira, José Palma Oliveira

Abstract:

Robotic grasping is advancing rapidly, but transferring techniques from rigid to deformable objects remains a challenge. Deformable and flexible items, such as food containers, demand nuanced handling due to their changing shapes. Bridging this gap is crucial for applications in food processing, surgical robotics, and household assistance. AGILEHAND, a Horizon project, focuses on developing advanced technologies for sorting, handling, and packaging soft and deformable products autonomously. These technologies serve as strategic tools to enhance flexibility, agility, and reconfigurability within the production and logistics systems of European manufacturing companies. Key components include intelligent detection, self-adaptive handling, efficient sorting, and agile, rapid reconfiguration. The overarching goal is to optimize work environments and equipment, ensuring both efficiency and safety. As new technologies emerge in the food industry, there will be implications for the labour force, for safety, and for the acceptance of the new technologies. To address these implications, AGILEHAND emphasizes the integration of the social sciences and humanities, for example through the application of the Technology Acceptance Model (TAM). The project aims to create a change management manual that will outline strategies for developing digital skills and managing health and safety transitions. It will also provide best practices and models for organizational change. Additionally, AGILEHAND will design effective training programs to enhance employee skills and knowledge. This information will be obtained through a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. The project will explore how organizations adapt during periods of change and identify factors influencing employee motivation and job satisfaction.
This project received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreement No. 101092043 (AGILEHAND).

Keywords: change management, technology acceptance model, organizational change, health and safety

Procedia PDF Downloads 35
5516 Enhancing Seismic Resilience in Urban Environments

Authors: Beatriz González-rodrigo, Diego Hidalgo-leiva, Omar Flores, Claudia Germoso, Maribel Jiménez-martínez, Laura Navas-sánchez, Belén Orta, Nicola Tarque, Orlando Hernández- Rubio, Miguel Marchamalo, Juan Gregorio Rejas, Belén Benito-oterino

Abstract:

Cities facing seismic hazard necessitate detailed risk assessments for effective urban planning and vulnerability identification, ensuring the safety and sustainability of urban infrastructure. Comprehensive studies involving seismic hazard, vulnerability, and exposure evaluations are pivotal for estimating potential losses and guiding proactive measures against seismic events. However, broad-scale traditional risk studies limit the consideration of specific local threats and the identification of vulnerable housing within a structural typology. Achieving precise results at the neighbourhood level demands higher-resolution seismic hazard, exposure, and vulnerability studies. This research aims to bolster sustainability and safety against seismic disasters in three Central American and Caribbean capitals. It integrates geospatial techniques and artificial intelligence into seismic risk studies, proposing cost-effective methods for exposure data collection and damage prediction. The methodology relies on prior seismic threat studies in pilot zones, utilizing existing exposure and vulnerability data in the region. Emphasizing detailed building attributes enables consideration of the behaviour modifiers affecting seismic response. The approach aims to generate detailed risk scenarios, facilitating the prioritization of preventive actions before, during, and after seismic events, and enhancing decision-making certainty. Detailed risk scenarios necessitate substantial investment in fieldwork, training, research, and methodology development. Regional cooperation becomes crucial given the similar seismic threats, urban planning, and construction systems among the involved countries. The outcomes hold significance for emergency planning and for national and regional construction regulations. The success of this methodology depends on cooperation, investment, and innovative approaches, offering insights and lessons applicable to regions facing moderate seismic threats with vulnerable constructions.
Thus, this framework aims to fortify resilience in seismic-prone areas and serves as a reference for global urban planning and disaster management strategies. In conclusion, this research proposes a comprehensive framework for seismic risk assessment in high-risk urban areas, emphasizing detailed studies at finer resolutions for precise vulnerability evaluations. The approach integrates regional cooperation, geospatial technologies, and adaptive fragility curve adjustments to enhance risk assessment accuracy, guiding effective mitigation strategies and emergency management plans.

Keywords: assessment, behaviour modifiers, emergency management, mitigation strategies, resilience, vulnerability

Procedia PDF Downloads 59
5515 Performance Evaluation of On-Site Sewage Treatment System (Johkasou)

Authors: Aashutosh Garg, Ankur Rajpal, A. A. Kazmi

Abstract:

The efficiency of an on-site wastewater treatment system named Johkasou was evaluated based on its pollutant removal efficiency over 10 months. The system was installed at IIT Roorkee and has a capacity to treat 7 m3/d of sewage, sufficient for a group of 30-50 people. It was fed with actual wastewater through an equalization tank to eliminate fluctuations throughout the day. Methanol and ammonium chloride were added to the equalization tank to increase the Chemical Oxygen Demand (COD) and ammonia content of the influent. The outlet from the Johkasou is sent to a tertiary unit consisting of a pressure sand filter and an activated carbon filter for further treatment. Samples were collected on alternate days from Monday to Friday, and the following parameters were evaluated: Chemical Oxygen Demand (COD), Biochemical Oxygen Demand (BOD), Total Suspended Solids (TSS), and Total Nitrogen (TN). The average removal efficiencies for COD, BOD, TSS, and TN were observed as 89.6%, 97.7%, 96%, and 80%, respectively. The cost of treating the wastewater comes out to Rs 23/m3, which includes electricity, cleaning and maintenance, chemical, and desludging costs. Tests for coliforms were also performed, and the removal efficiency for total and fecal coliforms was 100%. The sludge generation rate is approximately 20% of the BOD removed, and the sludge needs to be removed twice a year. The system also showed a very good response to hydraulic shock loads. A vacation stress analysis was performed to evaluate the performance of the system when there is no influent for 8 consecutive days; the results showed that the system needs a recovery time of about 48 hours to stabilize.
After about 2 days, the system returns to its original condition, and all the parameters in the effluent fall within the limits of the National Green Tribunal (NGT) standards. Another stress analysis was performed to save electricity, in which the main aeration blower was turned off for 2 to 12 hrs a day; the results showed that the blower can be turned off for about 4-6 hrs a day, reducing electricity costs by about 25%. It was concluded that the Johkasou system can remove a sufficient amount of all the physicochemical parameters tested to satisfy the prescribed limits set by the Indian Standard.
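The removal efficiencies reported above follow from the standard influent/effluent comparison. A minimal sketch follows; the concentrations used here are hypothetical, chosen only to reproduce the ~89.6% COD figure:

```python
def removal_efficiency(influent, effluent):
    """Percentage of a pollutant removed across the treatment train."""
    return 100.0 * (influent - effluent) / influent

# Hypothetical COD concentrations in mg/L:
print(removal_efficiency(250.0, 26.0))  # 89.6
```

Averaging this quantity over all sampling days for each parameter gives the reported 10-month figures.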

Keywords: on-site treatment, domestic wastewater, Johkasou, nutrient removal, pathogens removal

Procedia PDF Downloads 107
5514 Towards a Vulnerability Model Assessment of The Alexandra Jukskei Catchment in South Africa

Authors: Vhuhwavho Gadisi, Rebecca Alowo, German Nkhonjera

Abstract:

This article details an investigation of groundwater management in the Jukskei Catchment of South Africa through spatial mapping of key hydrological relationships, interactions, and parameters in catchments. The Department of Water Affairs (DWA) noted gaps in the implementation of article 16 of the South African National Water Act 1998, including the lack of appropriate models for dealing with water quantity parameters. For this reason, this research conducted a DRASTIC GIS-based groundwater assessment to improve the groundwater monitoring system in the Jukskei River basin catchment of South Africa. The methodology employed was a mixed-methods approach that involved the use of DRASTIC analysis, a questionnaire, a literature review, and observations to gather information on how to help the users of the Jukskei River. GIS (geographical information system) mapping was carried out using the DRASTIC (Depth to water, Recharge, Aquifer media, Soil media, Topography, Impact of the vadose zone, Hydraulic conductivity) vulnerability methodology. In addition, the developed vulnerability map was subjected to sensitivity analysis as a validation method. This included single-parameter sensitivity analysis, map removal sensitivity analysis, and correlation analysis of the DRASTIC parameters. The findings were that approximately 5.7% (45 km²) of the area in the northern part of the Jukskei watershed is highly vulnerable. Approximately 53.6% (428.8 km²) of the basin is also at high risk of groundwater contamination; this area is mainly located in the central, north-eastern, and western parts of the sub-basin. The medium and low vulnerability classes cover approximately 18.1% (144.8 km²) and 21.7% (168 km²) of the Jukskei River basin, respectively. The shallow groundwater of the Jukskei River thus belongs to a very vulnerable area.
Sensitivity analysis indicated that water depth, water recharge, aquifer environment, soil, and topography were the main factors contributing to the vulnerability assessment. The conclusion is that the final vulnerability map indicates that the Juksei catchment is highly susceptible to pollution, and therefore, protective measures are needed for sustainable management of groundwater resources in the study area.
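The DRASTIC index underlying such a map is a weighted sum of ratings for the seven parameters. A minimal sketch of the computation follows; the parameter weights are the standard DRASTIC weights, while the cell ratings and vulnerability class breaks are illustrative assumptions, not values taken from this study:

```python
# Standard DRASTIC weights for the seven parameters:
# Depth to water, Recharge, Aquifer media, Soil media,
# Topography, Impact of vadose zone, hydraulic Conductivity.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """DRASTIC index = sum of (rating x weight) over the seven parameters."""
    return sum(WEIGHTS[p] * r for p, r in ratings.items())

def vulnerability_class(di):
    # Class breaks are assumptions for illustration; each study sets its own.
    if di < 100:
        return "low"
    if di < 140:
        return "medium"
    if di < 180:
        return "high"
    return "very high"

# Hypothetical ratings (1-10) for one grid cell.
cell = {"D": 9, "R": 8, "A": 6, "S": 6, "T": 10, "I": 8, "C": 6}
di = drastic_index(cell)  # 45 + 32 + 18 + 12 + 10 + 40 + 18 = 175
```

In a GIS workflow, this per-cell computation is applied as raster algebra across the rated layers of the whole catchment.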

Keywords: contamination, DRASTIC, groundwater, vulnerability, model

Procedia PDF Downloads 77
5513 Linguistic World Order in the 21st Century: Need of Alternative Linguistics

Authors: Shailendra Kumar Singh

Abstract:

In the 21st century, we are living through extraordinary times: we are linguistically fortunate to live through an era in which each sociolinguistic mode of living appears refreshingly new, without precedent in the past. The term 'New Linguistic World Order' is no longer an intangible fascination but an indication of an emerging reality, a time in which the term 'linguistic purism' no longer invokes a sense of self-categorization and self-identification. The contemporary world is linguistically rewarding. This is a time in which the very categories of global, powerful, and local languages need to be revisited in the context of power shift, demographic shift, social-psychological shift, and technological shift; hence, the old linguistic world view has to be challenged in the midst of the 21st century. The first years of the 21st century have thus far been marked by the rise of the global economy, technological revolution, and demographic shift; now we are witnessing a linguistic shift that is leading towards a new linguistic world order. With the rising powers of China and India in Asia, the notion of an alternative West is set to become far more interesting linguistically. This comes at a point when the world is moving towards inclusive globalization, owing to the vanishing power corridor of the West and the ascending geopolitical impact of an emerging superpower and a superpower-in-waiting. It is now a reality that the Western world will no longer continue to rise; in fact, it will face more pressure to act as the alternative West seeks balanced globalization. It is more than likely that the demographically strong languages of the alternative West will be at an advantage. The paper challenges our preconceptions about the sociolinguistic nature of the world in the 21st century. It investigates what the linguistic world is likely to be in the future, in contrast to what it was before the 21st century. In particular, the paper tries to answer the following questions: (a) What will be the common linguistic thread across the world? (b) How can unprecedented transformations be mapped linguistically? (c) Do we need an alternative linguistics to define inclusive globalization, given that the linguistic reality of the contemporary world has already been reshaped by an increasingly integrated world economy, linguistic revolution, and the alternative West? (d) In which ways can these issues be addressed holistically? (e) Why is the linguistic world order changing dramatically? (f) Is it true that the linguistic world around us is changing faster than we can cope? (g) Is it true that what is coming next is linguistically greater than ever? (h) Do we need to prepare ourselves with new theoretical strategies to address the emerging sociolinguistic reality?

Keywords: alternative linguistics, new linguistic world order, power shift, demographic shift, social psychological shift, technological shift

Procedia PDF Downloads 323
5512 The Relationship between Risk and Capital: Evidence from Indian Commercial Banks

Authors: Seba Mohanty, Jitendra Mahakud

Abstract:

The capital ratio is one of the major indicators of the stability of commercial banks. Given its pervasive importance, regulators and policy makers have over the years focused on maintaining a particular level of the capital ratio to minimize solvency and liquidation risk. In this context, it is important to identify the relationship between capital and risk and to find out the factors that determine the capital ratios of commercial banks. The study examines the relationship between capital and risk of the commercial banks operating in India. Other bank-specific variables such as bank size, deposits, profitability, non-performing assets, bank liquidity, net interest margin, loan loss reserves, deposit variability, and regulatory pressure are also considered in the analysis. The period of study is 1997-2015, i.e., the post-liberalization period. To identify the impact of the financial crisis and the implementation of Basel II on the capital ratio, we divide the whole period into two sub-periods, i.e., 1997-2008 and 2008-2015. This study considers all three types of commercial banks, i.e., public sector, private sector, and foreign banks, which have continuous data for the whole period. The main sources of data are the Prowess database maintained by the Centre for Monitoring Indian Economy (CMIE) and Reserve Bank of India publications. We use a simultaneous equation model, and more specifically the Two-Stage Least Squares (2SLS) method, to find out the relationship between capital and risk. From the econometric analysis, we find that capital and risk affect each other simultaneously, and this is consistent across the time periods and across bank types. Moreover, regulation has a positive significant impact on the ratio of capital to risk-weighted assets, but no significant impact on banks' risk-taking behaviour. Our empirical findings also suggest that size has a negative impact on capital and risk, indicating that larger banks increase their capital less than other banks, consistent with the too-big-to-fail hypothesis. This study contributes to the existing body of literature by documenting a strong relationship between capital and risk in an emerging economy, where the banking sector plays a major role in financial development. Further, this study may serve as a starting point for identifying the macroeconomic factors affecting risk and capital in India.
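A simultaneous relationship between two variables is exactly the setting where OLS is biased and 2SLS is appropriate: an endogenous regressor correlated with the error term is first projected onto instruments, and the fitted values are then used in the second stage. A minimal sketch of the 2SLS mechanics on synthetic data follows; the variable names and the data-generating process are illustrative assumptions, not the study's specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Instrument z; regressor x is endogenous because it shares the error u with y.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # corr(x, u) != 0, so OLS is biased
y = 1.0 + 2.0 * x + u                       # true intercept 1.0, true slope 2.0

def two_stage_least_squares(y, x, z):
    """2SLS: stage 1 projects x onto the instruments, stage 2 runs OLS of y
    on the fitted values of x."""
    Z = np.column_stack([np.ones_like(z), z])  # instruments (with intercept)
    X = np.column_stack([np.ones_like(x), x])  # regressors (with intercept)
    # Stage 1: fitted values of the endogenous regressor
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Stage 2: OLS of y on the fitted regressors
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

beta = two_stage_least_squares(y, x, z)  # close to the true (1.0, 2.0)
```

With capital and risk affecting each other simultaneously, each equation is estimated this way, using the exogenous bank-specific variables excluded from that equation as instruments.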

Keywords: capital, commercial bank, risk, simultaneous equation model

Procedia PDF Downloads 315
5511 Leveraging Deep Q Networks in Portfolio Optimization

Authors: Peng Liu

Abstract:

Deep Q networks (DQNs) represent a significant advancement in reinforcement learning, utilizing neural networks to approximate the optimal Q-value function for guiding sequential decision processes. This paper presents a comprehensive introduction to reinforcement learning principles, delves into the mechanics of DQNs, and explores their application in portfolio optimization. By evaluating the performance of DQNs against traditional benchmark portfolios, we demonstrate their potential to enhance investment strategies. Our results underscore the advantages of DQNs in dynamically adjusting asset allocations, offering a robust portfolio management framework.
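At the heart of a DQN is a gradient step that regresses the network's Q-value for the chosen action toward a Bellman target computed from a frozen target network. A minimal NumPy sketch of one such update follows; the state dimension, action set, and layer sizes are illustrative assumptions, and a full DQN would add experience replay, an epsilon-greedy policy, and periodic target-network syncing:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy sizes (assumptions): state = recent asset returns, actions = a few
# discrete portfolio allocations.
STATE_DIM, N_ACTIONS, HIDDEN = 4, 3, 16
GAMMA = 0.99  # discount factor

def init_net():
    return {"W1": rng.normal(0, 0.1, (STATE_DIM, HIDDEN)), "b1": np.zeros(HIDDEN),
            "W2": rng.normal(0, 0.1, (HIDDEN, N_ACTIONS)), "b2": np.zeros(N_ACTIONS)}

def q_values(net, s):
    h = np.maximum(0.0, s @ net["W1"] + net["b1"])  # ReLU hidden layer
    return h @ net["W2"] + net["b2"], h

def dqn_update(net, target_net, s, a, r, s_next, lr=0.01):
    """One gradient step on the squared TD error for a single transition."""
    q, h = q_values(net, s)
    q_next, _ = q_values(target_net, s_next)
    target = r + GAMMA * q_next.max()        # Bellman target from frozen net
    td_err = q[a] - target
    g_q = np.zeros(N_ACTIONS)
    g_q[a] = td_err                          # dLoss/dq, nonzero for action a only
    g_h = (net["W2"] @ g_q) * (h > 0.0)      # backprop through the ReLU
    net["W2"] -= lr * np.outer(h, g_q)
    net["b2"] -= lr * g_q
    net["W1"] -= lr * np.outer(s, g_h)
    net["b1"] -= lr * g_h
    return 0.5 * td_err ** 2

# Repeated updates on one fixed transition drive the TD error toward zero.
net, frozen = init_net(), init_net()
s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
losses = [dqn_update(net, frozen, s, a=0, r=1.0, s_next=s_next) for _ in range(100)]
```

In the portfolio setting, the reward r would typically be the period return (or a risk-adjusted variant) of the allocation chosen by action a.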

Keywords: deep reinforcement learning, deep Q networks, portfolio optimization, multi-period optimization

Procedia PDF Downloads 11