Search results for: operational feasibility
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2480

350 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper applies deep learning to optimize production yield by tuning key process parameters in a manufacturing environment. The study focuses on maximizing production yield and minimizing operational costs using advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), implemented with the Python-based frameworks TensorFlow and Keras. The research targets precision molding processes in which temperature ranges between 150°C and 220°C, pressure between 5 and 15 bar, and material flow rate between 10 and 50 kg/h; these are critical parameters with a strong effect on yield. A dataset of 1 million production cycles collected over five continuous years was used, with detailed logs of the exact parameter settings and yield output. The LSTM model captured time-dependent trends in the production data, while the CNN analyzed spatial correlations between parameters. The models were trained in a supervised manner, using a mean squared error (MSE) loss function optimized with the Adam optimizer. After 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Compared with the traditional response surface methodology (RSM) and design of experiments (DOE) approaches, the deep learning models increased production yield by 12% and reduced the error margin by 8%, yielding consistently high product quality. Implementing the optimized process parameters saved approximately $2.5 million annually through reduced material waste, energy consumption, and equipment wear. The system was deployed in an industrial production environment on a hybrid cloud: Microsoft Azure for data storage, and Google Cloud AI for model training and deployment.
Real-time process monitoring and automatic parameter tuning rely on this cloud infrastructure. In summary, deep learning models, particularly those employing LSTM and CNN architectures, can optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning to further enhance system autonomy and scalability across manufacturing sectors.
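The training objective described above, an MSE loss minimized with the Adam optimizer, can be sketched in a few lines. The snippet below is a numpy-only illustration of Adam updates on an MSE objective for a toy linear model with three scaled process parameters; it is not the authors' Keras pipeline, and the data are synthetic.

```python
import numpy as np

def adam_step_mse(w, X, y, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update on a linear model's MSE loss (illustrative
    stand-in for the Keras training loop described in the abstract)."""
    pred = X @ w
    grad = 2.0 * X.T @ (pred - y) / len(y)   # d(MSE)/dw
    m = b1 * m + (1 - b1) * grad             # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2        # second-moment estimate
    m_hat = m / (1 - b1 ** t)                # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy data: 3 scaled process parameters (temperature, pressure, flow rate).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([0.5, -0.3, 0.8])
y = X @ true_w

w = np.zeros(3); m = np.zeros(3); v = np.zeros(3)
for t in range(1, 2001):
    w, m, v = adam_step_mse(w, X, y, m, v, t, lr=0.01)
mse = float(np.mean((X @ w - y) ** 2))       # near zero after training
```

In the actual study the same loss and optimizer would drive a Keras LSTM/CNN rather than this linear stand-in.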

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 29
349 Seismic Impact and Design on Buried Pipelines

Authors: T. Schmitt, J. Rosin, C. Butenweg

Abstract:

Seismic design of buried pipeline systems for energy and water supply is important not only for plant and operational safety, but in particular for maintaining supply infrastructure after an earthquake. Past earthquakes have shown the vulnerability of pipeline systems; after the 1995 Kobe earthquake in Japan, for instance, the water supply in some regions was interrupted for almost two months. The present paper addresses particular issues of seismic wave impact on buried pipelines, describes calculation methods, proposes approaches, and gives calculation examples. Buried pipelines are exposed to several kinds of seismic effects. This paper considers the effects of transient displacement differences, and the resulting stresses within the pipeline, caused by the wave propagation of the earthquake. Other effects are permanent displacements due to fault rupture at the surface, soil liquefaction, landslides, and seismic soil compaction. The presented model can also be used to calculate displacements induced by fault rupture. Parameter studies based on a three-dimensional finite element model show the influence of several parameters, such as incoming wave angle, wave velocity, soil depth, and the selected displacement time histories. In the computer model, the interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs. A propagating wave is simulated as point excitations acting on the pipeline, varying independently in time and space. The resulting stresses are mainly caused by displacement differences between neighboring pipeline segments and by soil-structure interaction. The calculation examples focus on pipeline bends as the most critical parts. Special attention is given to long-distance heat pipeline systems, in which expansion bends are arranged at regular intervals to accommodate thermal movements of the pipeline.
Such expansion bends are usually designed with small bending radii, which in the event of an earthquake lead to high bending stresses at the pipeline cross-section. Therefore, von Kármán flexibility factors, as well as stress intensification factors for curved pipe sections, must be taken into account. The seismic verification of the pipeline for wave propagation in the soil can be achieved by checking normative strain criteria. Finally, an interpretation of the results and recommendations are given, taking into account the most critical parameters.
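The non-linear soil springs mentioned above are commonly idealized as bilinear (elastic-perfectly-plastic) force-displacement laws in pipeline seismic design. The sketch below shows one such spring; the stiffness and ultimate resistance values are illustrative placeholders, not values from the paper.

```python
def soil_spring_force(u, k=2.0e7, f_ult=5.0e4):
    """Bilinear (elastic-perfectly-plastic) soil spring: restoring force
    per unit pipeline length [N/m] vs. relative pipe-soil displacement
    u [m]. k (initial stiffness) and f_ult (ultimate soil resistance)
    are illustrative values only."""
    yield_disp = f_ult / k           # displacement at which soil yields
    if abs(u) <= yield_disp:
        return k * u                 # elastic branch
    return f_ult if u > 0 else -f_ult  # plastic plateau

# Small displacement stays elastic; large displacement saturates.
f_elastic = soil_spring_force(0.001)
f_plastic = soil_spring_force(0.01)
```

In a finite element model, one such spring (per axial, lateral, and vertical direction) would connect each pipeline node to the free-field soil displacement imposed by the propagating wave.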

Keywords: buried pipeline, earthquake, seismic impact, transient displacement

Procedia PDF Downloads 187
348 Experimental Study of Vibration Isolators Made of Expanded Cork Agglomerate

Authors: S. Dias, A. Tadeu, J. Antonio, F. Pedro, C. Serra

Abstract:

The goal of the present work is to experimentally evaluate the feasibility of using vibration isolators made of expanded cork agglomerate. Even though this material, also known as insulation cork board (ICB), has mainly been studied for thermal and acoustic insulation purposes, it has strong potential for use in vibration isolation. However, the adequate design of expanded cork block vibration isolators depends on several factors, such as excitation frequency, static load conditions, and the intrinsic dynamic behavior of the material. In this study, transmissibility tests under different static and dynamic loading conditions were performed to characterize the material. Since the material's physical properties (density and thickness) can influence the vibro-isolation performance of the blocks, the study covered four mass density ranges and four block thicknesses. A total of 72 expanded cork agglomerate specimens were tested. The test apparatus comprises a vibration exciter connected to an excitation mass that holds the test specimen. The specimens under characterization were loaded successively with steel plates in order to obtain results for different masses. An accelerometer was placed at the top of these masses and at the base of the excitation mass. The test was performed over a defined frequency range, and the amplitude registered by the accelerometers was recorded in the time domain. To each of the signals (signal 1: vibration of the excitation mass; signal 2: vibration of the loading mass) a fast Fourier transform (FFT) was applied in order to obtain the frequency-domain response, and the maximum amplitude reached in each was registered. The ratio between the amplitude (acceleration) of signal 2 and that of signal 1 gives the transmissibility at each frequency. Repeating this procedure allowed us to plot a transmissibility curve over the frequency range of interest.
A number of transmissibility experiments were performed to assess the influence of changing the mass density and thickness of the expanded cork blocks and the experimental conditions (static load and frequency of excitation). The experimental transmissibility tests performed in this study showed that expanded cork agglomerate blocks are a good option for mitigating vibrations. It was concluded that specimens with lower mass density and larger thickness lead to better performance, with higher vibration isolation and a larger range of isolated frequencies. In conclusion, the study of the performance of expanded cork agglomerate blocks presented herein will allow for a more efficient application of expanded cork vibration isolators. This is particularly relevant since this material is a more sustainable alternative to other commonly used non-environmentally friendly products, such as rubber.
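The signal-processing chain described above (FFT of both accelerometer signals, then the ratio of peak amplitudes) can be sketched as follows. The 50 Hz synthetic signals and the 0.4 attenuation factor are illustrative, not measured data.

```python
import numpy as np

def transmissibility(sig_base, sig_mass):
    """Transmissibility at the excitation frequency: ratio of the peak
    FFT amplitude of the loaded-mass signal (signal 2) to that of the
    excitation-mass signal (signal 1), per the procedure above."""
    spec_base = np.abs(np.fft.rfft(sig_base))
    spec_mass = np.abs(np.fft.rfft(sig_mass))
    return spec_mass.max() / spec_base.max()

# Synthetic example: base excitation at 50 Hz; the mass response is
# attenuated to 40% of the base amplitude (an isolating condition).
fs = 2000.0                          # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)        # 1 s window, integer signal periods
base = np.sin(2 * np.pi * 50 * t)
mass = 0.4 * np.sin(2 * np.pi * 50 * t + 0.3)
T = transmissibility(base, mass)
```

Sweeping the excitation frequency and repeating this calculation yields the transmissibility curve plotted in the study; transmissibility below 1 indicates isolation.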

Keywords: expanded cork agglomerate, insulation cork board, transmissibility tests, sustainable materials, vibration isolators

Procedia PDF Downloads 332
347 Evaluation Method for Fouling Risk Using Quartz Crystal Microbalance

Authors: Natsuki Kishizawa, Keiko Nakano, Hussam Organji, Amer Shaiban, Mohammad Albeirutty

Abstract:

One of the most important tasks in operating desalination plants that use reverse osmosis (RO) is preventing fouling of the RO membrane by foulants in the seawater. Optimal design of the pre-treatment process of an RO plant enables the reduction of foulants; therefore, a quantitative evaluation of the fouling risk of the pre-treated water fed to the RO unit is required. Water quality measures such as the silt density index (SDI) and total organic carbon (TOC) have conventionally been applied for this evaluation, but in some situations they are not effective indicators of the fouling risk of RO feed water. Furthermore, if the method can be applied in an inline monitoring system, stable plant operation becomes possible through alerts and appropriate control of the pre-treatment process. The purpose of this study is to develop a method to evaluate the fouling risk of RO feed water. We applied a quartz crystal microbalance (QCM) with a sensor whose surface is coated with a polyamide thin film, the main material of RO membranes, to measure the amount of foulants in seawater. The increase in the weight of the sensor after sample water has passed over it for a set time directly indicates the fouling risk of the sample; we call this value the fouling potential (FP). The method measures very small amounts of substances in seawater in a short time (< 2 h) and from a small sample volume (< 50 mL). In laboratory-scale tests using RO cell filtration units, the FP correlated with the pressure increase caused by RO fouling more strongly than SDI or TOC did.
Then, to establish this correlation for an actual bench-scale RO membrane module, and to confirm the feasibility of the monitoring system as a control tool for the pre-treatment process, we started a long-term test at an experimental desalination site on the Red Sea in Jeddah, Kingdom of Saudi Arabia. Inline equipment for the method made it possible to measure FP intermittently (4 times per day) and automatically. Moreover, over two 3-month operations, the RO operating pressure was compared among feed water samples of different qualities. A pressure increase through the RO membrane module was observed in the high-FP RO unit, whose feed water was treated by a cartridge filter only. In contrast, no pressure increase was observed in the low-FP RO unit, whose feed water was treated by an ultrafilter. The correlation was thus established on an actual-scale RO membrane in two runs with two types of feed water. The results suggest that the FP method enables evaluation of the fouling risk of RO feed water.
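The abstract does not give the exact FP formula. A plausible sketch converts the QCM frequency shift to deposited areal mass via the standard Sauerbrey relation (C is about 17.7 ng cm⁻² Hz⁻¹ for a 5 MHz AT-cut crystal) and normalizes by measurement time; the time normalization and the numeric inputs are our assumptions, not the authors' definition.

```python
def fouling_potential(delta_f_hz, duration_h, c_sauerbrey=17.7):
    """Illustrative fouling potential: QCM frequency shift -> areal mass
    gain via the Sauerbrey relation, normalized by measurement time.
    delta_f_hz: frequency shift [Hz] (negative when mass is deposited).
    Returns mass deposition rate [ng cm^-2 h^-1]."""
    delta_m = -c_sauerbrey * delta_f_hz   # ng/cm^2; mass gain lowers f
    return delta_m / duration_h

# Hypothetical reading: a 20 Hz frequency drop over a 2 h measurement.
fp = fouling_potential(delta_f_hz=-20.0, duration_h=2.0)
```

A higher FP value would flag feed water likely to raise the RO operating pressure, matching the qualitative behavior reported above.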

Keywords: fouling, monitoring, QCM, water quality

Procedia PDF Downloads 212
346 Secure Data Sharing of Electronic Health Records With Blockchain

Authors: Kenneth Harper

Abstract:

The secure sharing of Electronic Health Records (EHRs) is a critical challenge in modern healthcare, demanding solutions to enhance interoperability, privacy, and data integrity. Traditional standards like Health Information Exchange (HIE) and HL7 have made significant strides in facilitating data exchange between healthcare entities. However, these approaches rely on centralized architectures that are often vulnerable to data breaches, lack sufficient privacy measures, and have scalability issues. This paper proposes a framework for secure, decentralized sharing of EHRs using blockchain technology, cryptographic tokens, and Non-Fungible Tokens (NFTs). The blockchain's immutable ledger, decentralized control, and inherent security mechanisms are leveraged to improve transparency, accountability, and auditability in healthcare data exchanges. Furthermore, we introduce the concept of tokenizing patient data through NFTs, creating unique digital identifiers for each record, which allows for granular data access controls and proof of data ownership. These NFTs can also be employed to grant access to authorized parties, establishing a secure and transparent data sharing model that empowers both healthcare providers and patients. The proposed approach addresses common privacy concerns by employing privacy-preserving techniques such as zero-knowledge proofs (ZKPs) and homomorphic encryption to ensure that sensitive patient information can be shared without exposing the actual content of the data. This ensures compliance with regulations like HIPAA and GDPR. Additionally, the integration of Fast Healthcare Interoperability Resources (FHIR) with blockchain technology allows for enhanced interoperability, enabling healthcare organizations to exchange data seamlessly and securely across various systems while maintaining data governance and regulatory compliance. 
Through real-world case studies and simulations, this paper demonstrates how blockchain-based EHR sharing can reduce operational costs, improve patient outcomes, and enhance the security and privacy of healthcare data. This decentralized framework holds great potential for revolutionizing healthcare information exchange, providing a transparent, scalable, and secure method for managing patient data in a highly regulated environment.
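The idea of tokenizing each record with a unique, content-bound identifier and granting access per token can be illustrated with a minimal off-chain sketch: content hashing plus a toy access registry. The names (`mint_record_token`, `AccessRegistry`) are our own; a real deployment would use smart contracts, NFTs on a ledger, and the privacy-preserving techniques named above rather than plain hashing of record content.

```python
import hashlib
import json

def mint_record_token(record: dict) -> str:
    """Derive a unique identifier for an EHR entry by hashing its
    canonical JSON form, a minimal stand-in for the NFT 'unique digital
    identifier' concept described above."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

class AccessRegistry:
    """Toy access-control list: token id -> set of authorized parties
    (on a blockchain this state would live in a smart contract)."""
    def __init__(self):
        self._grants = {}

    def grant(self, token_id: str, party: str) -> None:
        self._grants.setdefault(token_id, set()).add(party)

    def can_read(self, token_id: str, party: str) -> bool:
        return party in self._grants.get(token_id, set())

record = {"patient": "p-001", "type": "lab", "value": 5.4}
token = mint_record_token(record)
reg = AccessRegistry()
reg.grant(token, "clinic-A")
```

Because the token is derived from canonical content, the same record always mints the same identifier, which supports the proof-of-ownership and auditability goals described in the abstract.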

Keywords: blockchain, electronic health records (EHRs), fast healthcare interoperability resources (FHIR), health information exchange (HIE), HL7, interoperability, non-fungible tokens (NFTs), privacy-preserving techniques, tokens, secure data sharing

Procedia PDF Downloads 21
345 Innovative Technologies: Functional Methods of Dental Research

Authors: Sergey N. Ermoliev, Margarita A. Belousova, Aida D. Goncharenko

Abstract:

Application of a diagnostic complex of highly informative functional methods (electromyography, reodentography, laser Doppler flowmetry, reoperiodontography, vital computer capillaroscopy, optical tissue oximetry, and laser fluorescence diagnosis) enables a multifactorial analysis of dental status and the prescription of complex etiopathogenetic treatment. Introduction: A complex of innovative, highly informative, and safe functional diagnostic methods is needed to improve the quality of patient treatment through the early detection of stomatologic diseases. The purpose of the present study was to investigate the etiology and pathogenesis of functional disorders identified in pathologies of hard dental tissue, dental pulp, periodontium, oral mucosa, and chewing function, and to create new approaches to the diagnosis of dental diseases. Material and methods: 172 patients were examined. The density of hard dental tissue and jaw bone was studied by intraoral ultrasonic densitometry (USD). The electromyographic activity of the masticatory muscles was assessed by electromyography (EMG). The functional state of dental pulp vessels was assessed by reodentography (RDG) and laser Doppler flowmetry (LDF). Regional blood flow in the periodontal tissues was studied by reoperiodontography (RPG). The periodontal microcirculation was studied by vital computer capillaroscopy (VCC) and LDF. The metabolic level of the mucous membrane was determined by optical tissue oximetry (OTO) and laser fluorescence diagnosis (LFD).
Results and discussion: The results revealed changes in the mineral density of hard dental tissue and jaw bone, in the bioelectric activity of the masticatory muscles, and in regional blood flow and microcirculation in the dental pulp and periodontal tissues. The LDF and OTO methods estimated fluctuations in the saturation level and oxygen transport in the periodontal microvasculature. LFD identified changes in the concentrations of compounds (nicotinamide, flavins, lipofuscin, porphyrins) involved in metabolic processes. Our preliminary results also confirmed the feasibility and safety of the intraoral ultrasonic densitometry technique for assessing bone tissue density in the periodontium. Conclusion: Application of the diagnostic complex of the above-mentioned highly informative functional methods enables a multifactorial analysis of dental status and the prescription of complex etiopathogenetic treatment.

Keywords: electromyography (EMG), reodentography (RDG), laser Doppler flowmetry (LDF), reoperiodontography method (RPG), vital computer capillaroscopy (VCC), optical tissue oximetry (OTO), laser fluorescence diagnosis (LFD)

Procedia PDF Downloads 280
344 Research on the Performance Management of Social Organizations Participating in Home-Based Care

Authors: Qiuhu Shao

Abstract:

The community home-based care service system, which is based on family care, supported by community care, and supplemented by institutional care, is an effective elderly-care system for addressing China's accelerating population aging. However, given the country's fundamental realities, the government cannot unilaterally supply community elderly-care services. Therefore, based on the theory of welfare pluralism, the participation of social organizations in home-based care service centers has become an important part of the diversified supply of elderly-care services. Meanwhile, the home-based care service industry is still at an early stage and its management is relatively rough, which has resulted in a large waste of social resources. Thus, a scientific, objective, and long-term performance management approach is needed to guide social organizations participating in home-based care services. To design such a performance management system, the author first clarified the state of research on social organizations' participation in home-based care services. Relevant theories, such as welfare pluralism, community care theory, and performance management theory, were used to demonstrate the feasibility of the data envelopment analysis (DEA) method for studying social organization performance. This paper analyzes the characteristics of the operating mode of home-based care service centers, and reviews the national and local documents, standards, and norms related to the development of the home-based care industry, particularly those in Nanjing. On this basis, the paper designs a performance management PDCA system for home-based care service centers in Nanjing and details each step of the system.
Subsequently, the research methods for performance evaluation and for performance management and feedback, the two core steps of performance management, were compared and screened to establish the overall framework of the performance management system for home-based care service centers. Through extensive research, the paper summarizes and analyzes the characteristics of home-based care service centers. Based on these results, combined with industry development practice in Nanjing, the paper puts forward a targeted performance evaluation index system for home-based care service centers in Nanjing. Finally, the paper evaluated and graded the performance of 186 home-based care service centers in Nanjing, and designed performance optimization directions and improvement paths based on the results. This study constructs a performance evaluation index system for home-based care services, details the indices to the implementation level, and thereby builds an evaluation index system that can be applied directly. Meanwhile, the quantitative evaluation of social organizations participating in home-based care services replaces the subjective impressions relied on in previous evaluation practice.
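Data envelopment analysis, the method whose feasibility the paper demonstrates, can be sketched with a textbook input-oriented CCR model solved as a linear program. The three toy "centers" and their input/output data below are invented for illustration and are not the paper's index system.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Input-oriented CCR DEA efficiency of decision-making unit j0
    (1.0 means efficient). X: (n_units, n_inputs) inputs,
    Y: (n_units, n_outputs) outputs. Textbook formulation:
    minimize theta s.t. X^T lam <= theta * x_j0, Y^T lam >= y_j0."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                        # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    for i in range(m):                # input constraints
        A_ub[i, 0] = -X[j0, i]
        A_ub[i, 1:] = X[:, i]
    for r in range(s):                # output constraints (flipped to <=)
        A_ub[m + r, 1:] = -Y[:, r]
        b_ub[m + r] = -Y[j0, r]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return float(res.x[0])

# Toy data: 3 care centers, 1 input (staff), 1 output (clients served).
X = np.array([[10.0], [20.0], [10.0]])
Y = np.array([[100.0], [150.0], [50.0]])
scores = [dea_ccr_efficiency(X, Y, j) for j in range(3)]
```

Units scoring below 1.0 are dominated by a combination of their peers, which is the basis for the grading and improvement paths described above.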

Keywords: data envelopment analysis, home-based care, performance management, social organization

Procedia PDF Downloads 269
343 A Sustainable Pt/BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ Catalyst for Dry Reforming of Methane, Derived from Recycled Primary Pt

Authors: Alessio Varotto, Lorenzo Freschi, Umberto Pasqual Laverdura, Anastasia Moschovi, Davide Pumiglia, Iakovos Yakoumis, Marta Feroci, Maria Luisa Grilli

Abstract:

Dry reforming of methane (DRM) is considered one of the most valuable technologies for greenhouse-gas valorization, since the reaction yields syngas, a mixture of H₂ and CO, in an H₂/CO ratio suitable for the Fischer-Tropsch synthesis of high value-added chemicals and fuels. Challenges of the DRM process include reducing costs, which are driven by the high process temperature and the high price of the precious metals in the catalyst, and mitigating both sintering of the metal particles and carbon deposition on the catalyst surface. The aim of this study is to demonstrate the feasibility of synthesizing catalysts from a Pt-containing leachate solution coming directly from the recovery of spent diesel oxidation catalysts (DOCs), without further purification. An unusual perovskite support for DRM, BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ (BCZG), was chosen because of its high thermal stability and its capacity to produce oxygen vacancies, which suppress carbon deposition and enhance catalytic activity. The BCZG perovskite was synthesized by a modified sol-gel Pechini process and calcined in air at 1100 °C. BCZG supports were impregnated with a Pt-containing DOC leachate solution obtained by a mild hydrometallurgical recovery process, as reported elsewhere by some of the authors of this manuscript. For comparison, a synthetic solution obtained by digesting commercial Pt-black powder in aqua regia was also used for impregnation. The nominal Pt content was 2% in both BCZG-based catalysts. The structure and morphology of the catalysts were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). Thermogravimetric analysis (TGA) was used to study the thermal stability of the catalyst samples, and Brunauer-Emmett-Teller (BET) analysis confirmed the high surface area of the catalysts.
H₂ temperature-programmed reduction (H₂-TPR) was used to study hydrogen consumption during reduction, combined with H₂-TPD to study the dispersion of Pt on the support surface and to calculate the number of active sites provided by the precious metal. The DRM reaction, carried out in a fixed-bed reactor, showed high conversion of CO₂ and CH₄. At 850 °C, CO₂ and CH₄ conversions were close to 100% for the catalyst obtained with the aqua regia-based solution of commercial Pt-black, and about 70% (CH₄) and 80% (CO₂) for the catalyst obtained from the real HCl-based leachate solution. The H₂/CO ratios were approximately 0.9 and 0.7, respectively. As far as we know, this is the first work in which a BCZG catalyst and a real Pt-containing leachate solution have been successfully employed for the DRM reaction.
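The conversion figures quoted above follow from standard definitions. The sketch below computes reactant conversion and the H₂/CO ratio from hypothetical inlet/outlet molar flows; the flow numbers are invented, chosen only to reproduce the ~70% CH₄ / ~80% CO₂ / H₂/CO ≈ 0.7 case reported for the real leachate catalyst.

```python
def conversion(n_in, n_out):
    """Fractional conversion of a reactant from inlet and outlet
    molar flows (same units, e.g. mol/h)."""
    return (n_in - n_out) / n_in

# Hypothetical molar flows (mol/h) for an equimolar CH4/CO2 feed.
ch4_in, ch4_out = 1.00, 0.30   # ~70% CH4 conversion
co2_in, co2_out = 1.00, 0.20   # ~80% CO2 conversion
h2_out, co_out = 1.26, 1.80    # outlet syngas flows, H2/CO = 0.7

x_ch4 = conversion(ch4_in, ch4_out)
x_co2 = conversion(co2_in, co2_out)
h2_co_ratio = h2_out / co_out
```

An H₂/CO ratio below the stoichiometric value of 1 is typical for DRM when the reverse water-gas shift reaction consumes part of the produced H₂.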

Keywords: dry reforming of methane, perovskite, PGM, recycled Pt, syngas

Procedia PDF Downloads 37
342 The Requirements of Developing a Framework for Successful Adoption of Quality Management Systems in the Construction Industry

Authors: Mohammed Ali Ahmed, Vaughan Coffey, Bo Xia

Abstract:

Quality management systems (QMSs) in the construction industry are often implemented to ensure that companies make sufficient effort to achieve the levels of quality required by clients. Attaining these quality levels can result in greater customer satisfaction, which is fundamental to the long-term competitiveness of construction companies. However, the construction sector still lags behind other industries in its adoption of QMSs, owing to a relative lack of acceptance of the benefits of these systems among industry stakeholders, as well as other barriers to implementation. There is therefore a critical need for a detailed and comprehensive exploration of QMS adoption in the construction sector. This paper investigates, in a construction sector setting, the impacts of the salient factors surrounding successful implementation of QMSs in building organizations, especially external factors. The study is part of an ongoing PhD project that aims to develop a new framework integrating both the internal and external factors affecting QMS implementation. To achieve this aim, interviews will be conducted to define the external factors influencing the adoption of QMSs and to obtain a holistic set of critical success factors (CSFs) for implementing these systems. In the next stage of data collection, a questionnaire survey will be developed to investigate the prime barriers to QMS adoption, the CSFs for implementation, and the external factors affecting adoption. Following the survey, case studies will be undertaken to validate and explain in greater detail the real effects of these factors on QMS adoption. Specifically, this paper evaluates the impact of the external factors on implementation success within the selected case studies.
Using findings drawn from these various approaches, specific recommendations for the successful implementation of QMSs will be presented, and an operational framework will be developed. Finally, the study findings and the newly developed framework will be validated through a focus group. Ultimately, this framework will be made available to the construction industry to facilitate the greater adoption and implementation of QMSs. In addition, the applicable recommendations from the study will be shared with the construction industry to help companies implement QMSs and overcome the barriers they experience, thus promoting higher levels of quality and customer satisfaction.

Keywords: barriers, critical success factors, external factors, internal factors, quality management systems

Procedia PDF Downloads 186
341 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to shrink in size and grow in complexity, the precision and efficacy of defect detection strategies become increasingly critical. The paper traces the evolution from traditional manual inspections to automated vision systems employing artificial intelligence (AI) and machine learning (ML). It highlights the significance of precise defect detection in semiconductor manufacturing by discussing defect types such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and therefore must be identified precisely. The narrative then turns to the technological evolution of defect detection, depicting a shift from rudimentary methods such as optical microscopy and basic electrical tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advance towards more adaptive, accurate, and rapid defect detection. The paper also addresses current challenges: the constraints imposed by the diminutive scale of contemporary semiconductor devices, the high cost of advanced imaging technologies, and the demand for processing speeds that match mass-production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing speed.
Future research directions are proposed to bridge these gaps, including enhancing the computational efficiency of AI algorithms, developing novel materials to improve imaging contrast in defect detection, and integrating these systems seamlessly into semiconductor production lines. By synthesizing existing technologies and forecasting upcoming trends, this review aims to foster the dialogue around, and development of, more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 51
340 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster responsive approach. A potential solution involves the use of electrical segmentation, which involves creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design robust zones under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. 
The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that uses a unified representation to compute a flattening of all layers. This unified representation can be penalized to obtain (K) connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Our experiments show when robust electrical segmentation is beneficial and in which contexts.
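The flattening-and-penalization idea can be illustrated with a minimal, self-contained sketch; the function names and the edge-thresholding rule below are illustrative assumptions, not the authors' actual model. Layers are flattened by averaging edge weights, then the weakest edges are discarded until the graph splits into K connected components:

```python
def flatten_layers(layers):
    """Average edge weights across all layers of a multiplex graph.

    layers: list of dicts mapping (u, v) edge tuples to weights;
    all layers share the same vertex set."""
    flat = {}
    for layer in layers:
        for edge, w in layer.items():
            flat[edge] = flat.get(edge, 0.0) + w / len(layers)
    return flat

def connected_components(vertices, edges):
    """Return connected components (as frozensets) via depth-first search."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def segment(vertices, layers, k):
    """Drop the weakest flattened edges until at least k components remain."""
    flat = flatten_layers(layers)
    kept = sorted(flat, key=flat.get, reverse=True)  # strongest edges first
    while True:
        comps = connected_components(vertices, kept)
        if len(comps) >= k or not kept:
            return comps
        kept.pop()  # discard the current weakest edge
```

On a toy grid with two strongly connected zones joined by a weak line, this recovers the two zones as clusters.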

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 79
339 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

Determining soil elemental content and distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be directly used for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), three sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographic coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and the main characteristics and modes of operation when conducting field surveys.
Soil elemental distribution maps resulting from field surveys will be presented and discussed. These maps were similar to maps created on the basis of chemical analysis and to soil moisture maps determined by soil electrical conductivity. The maps created by neutron-gamma analysis were also reproducible. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural soil elemental field mapping.
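The scanning-mode workflow (spectra to elemental content to georeferenced map points) can be sketched as below. The linear calibration model, the coefficients, and the record layout are simplified assumptions for illustration; real peak-area analysis involves background subtraction and detector corrections not shown here:

```python
def elemental_content(peak_counts, calibration):
    """Convert net gamma peak areas (counts) to elemental content using
    hypothetical per-element calibration coefficients (counts per % content)."""
    return {el: peak_counts[el] / calibration[el] for el in peak_counts}

def map_points(scan_records, calibration, element):
    """Pair each GPS fix with the computed content of one element,
    producing (lat, lon, value) tuples ready for import into a GIS."""
    return [(lat, lon, elemental_content(counts, calibration)[element])
            for lat, lon, counts in scan_records]
```

Each output tuple corresponds to one scan position and can be interpolated into a continuous distribution map in ArcGIS.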

Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy

Procedia PDF Downloads 134
338 Chebyshev Collocation Method for Solving Heat Transfer Analysis for Squeezing Flow of Nanofluid in Parallel Disks

Authors: Mustapha Rilwan Adewale, Salau Ayobami Muhammed

Abstract:

This study focuses on the heat transfer analysis of magneto-hydrodynamic (MHD) squeezing flow between parallel disks, considering a viscous incompressible fluid. The upper disk exhibits both upward and downward motion, while the lower disk remains stationary but permeable. By employing similarity transformations, a system of nonlinear ordinary differential equations is derived to describe the flow behavior. To solve this system, a numerical approach, namely the Chebyshev collocation method, is utilized. The study investigates the influence of flow parameters and compares the obtained results with existing literature. The significance of this research lies in understanding the heat transfer characteristics of MHD squeezing flow, which has practical implications in various engineering and industrial applications. By employing the similarity transformations, the complex governing equations are simplified into a system of nonlinear ordinary differential equations, facilitating the analysis of the flow behavior. To obtain numerical solutions for the system, the Chebyshev collocation method is implemented. This approach provides accurate approximations for the nonlinear equations, enabling efficient computation of the heat transfer properties. The obtained results are compared with existing literature, establishing the validity and consistency of the numerical approach. The study's major findings shed light on the influence of flow parameters on the heat transfer characteristics of the squeezing flow. The analysis reveals the impact on the heat transfer rate between the disks of parameters such as the squeeze number (S) characterizing the disk motion, the suction/injection parameter (A), the Hartmann number (M) characterizing the magnetic field strength, the Prandtl number (Pr) involving the fluid viscosity, the modified Eckert number (Ec), and the dimensionless length (δ).
These findings contribute to a comprehensive understanding of the system's behavior and provide insights for optimizing heat transfer processes in similar configurations. In conclusion, this study presents a thorough heat transfer analysis of magneto-hydrodynamics squeezing flow between parallel disks. The numerical solutions obtained through the Chebyshev collocation method demonstrate the feasibility and accuracy of the approach. The investigation of flow parameters highlights their influence on heat transfer, contributing to the existing knowledge in this field. The agreement of the results with previous literature further strengthens the reliability of the findings. These outcomes have practical implications for engineering applications and pave the way for further research in related areas.
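As an illustration of the collocation idea only (not the authors' full nonlinear MHD system), the sketch below applies Chebyshev collocation, using the standard differentiation-matrix construction from Trefethen's "Spectral Methods in MATLAB", to a linear boundary-value problem u'' = f on [-1, 1] with homogeneous Dirichlet conditions:

```python
import numpy as np

def cheb(n):
    """Chebyshev collocation points and differentiation matrix (Trefethen)."""
    if n == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(n + 1) / n)            # collocation points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal = -row sums
    return x, D

def solve_bvp(n, rhs):
    """Solve u'' = rhs(x) on [-1, 1] with u(-1) = u(1) = 0 by collocation."""
    x, D = cheb(n)
    D2 = D @ D
    u = np.zeros(n + 1)
    # Impose the boundary conditions by solving only at interior points.
    u[1:n] = np.linalg.solve(D2[1:n, 1:n], rhs(x[1:n]))
    return x, u
```

For u'' = 2 the exact solution is u = x² − 1, and the collocation solution reproduces it to machine precision, illustrating the spectral accuracy the abstract relies on.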

Keywords: squeezing flow, magneto-hydrodynamics (MHD), Chebyshev collocation method (CCM), parallel disks, finite difference method (FDM)

Procedia PDF Downloads 75
337 A Theoretical Framework of Patient Autonomy in a High-Tech Care Context

Authors: Catharina Lindberg, Cecilia Fagerstrom, Ania Willman

Abstract:

Patients in high-tech care environments are usually dependent on both formal/informal caregivers and technology, highlighting their vulnerability and challenging their autonomy. Autonomy presumes that a person has education, experience, self-discipline and decision-making capacity. Reference to autonomy in relation to patients in high-tech care environments could, therefore, be considered paradoxical, as in most cases these persons have impaired physical and/or metacognitive capacity. Therefore, to understand the prerequisites for patients to experience autonomy in high-tech care environments and to support them, there is a need to enhance knowledge and understanding of the concept of patient autonomy in this care context. The development of concepts and theories in a practice discipline such as nursing helps to improve both nursing care and nursing education. Theoretical development is important when clarifying a discipline, hence, a theoretical framework could be of use to nurses in high-tech care environments to support and defend the patient’s autonomy. A meta-synthesis was performed with the intention to be interpretative and not aggregative in nature. An amalgamation was made of the results from three previous studies, carried out by members of the same research group, focusing on the phenomenon of patient autonomy from a patient perspective within a caring context. Three basic approaches to theory development: derivation, synthesis, and analysis provided an operational structure that permitted the researchers to move back and forth between these approaches during their work in developing a theoretical framework. The results from the synthesis delineated that patient autonomy in a high-tech care context is: To be in control though trust, co-determination, and transition in everyday life. The theoretical framework contains several components creating the prerequisites for patient autonomy. 
Assumptions and propositional statements that guide theory development were also outlined, as were guiding principles for use in day-to-day nursing care. Four strategies used by patients to maintain or regain autonomy in high-tech care environments were revealed: the strategy of control, the strategy of partnership, the strategy of trust, and the strategy of transition. This study suggests an extended knowledge base founded on theoretical reasoning about patient autonomy, providing an understanding of the strategies used by patients to achieve autonomy in the role of patient in high-tech care environments. When possessing knowledge about the patient perspective of autonomy, the nurse/carer can avoid adopting a paternalistic or maternalistic approach. Instead, the patient can be considered a partner in care, allowing care to be provided that supports him/her in remaining/becoming an autonomous person in the role of patient.

Keywords: autonomy, caring, concept development, high-tech care, theory development

Procedia PDF Downloads 207
336 Repair of Thermoplastic Composites for Structural Applications

Authors: Philippe Castaing, Thomas Jollivet

Abstract:

As a result of their advantages, i.e. recyclability, weldability, and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautic) for structural applications. Indeed, in the next ten years, environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of damage is due to impact, and 85% of damage is repaired on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacement of sections or panels seems difficult economically speaking, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged part in thermoplastic composites, in order to recover the initial mechanical properties. The classification of impact damage is not easy: talking about low-energy impact (less than 35 J) can be totally wrong when high speeds or small thicknesses, as well as thermoplastic resins, are considered. Crash and perforation at higher energies create significant damage, and such structures are replaced without repair, so we consider here only damage due to low-energy impacts, which for laminates is as follows: transverse cracking, delamination, and fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part due to resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to the rupture of fibers and consequently create more damage. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain.
The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; and validation of the repair by mechanical characterization (compression). In this study, the impact tests were performed at various levels of energy on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy creating damage in the resin without fiber rupture. We identify the extent of the damage by ultrasonic inspection and micrographic observations through the thickness of the part. The samples were in addition characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists of reconsolidating the damaged parts by thermoforming; after reconsolidation, the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of repair for low-energy impact on thermoplastic composites, as the samples recover their properties. As a first step of the study, the 'repair' was made by reconsolidation on a thermoforming press, but an in-situ process to reconsolidate damaged parts could also be envisaged.

Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic

Procedia PDF Downloads 304
335 Promoting 'One Health' Surveillance and Response Approach Implementation Capabilities against Emerging Threats and Epidemics Crisis Impact in African Countries

Authors: Ernest Tambo, Ghislaine Madjou, Jeanne Y. Ngogang, Shenglan Tang, Zhou XiaoNong

Abstract:

Implementing a national to community-based 'One Health' surveillance approach to mitigating human, animal and environmental consequences offers great opportunities and added value for sustainable development and wellbeing. Global partnerships, policy commitment and financial investment in the 'One Health' surveillance approach are much needed to address evolving threats and epidemic crises in African countries. The paper provides insights into how China-Africa health development cooperation can promote the 'One Health' surveillance approach in response advocacy and mitigation. China-Africa health development initiatives provide new prospects for guiding and moving forward appropriate, evidence-based advocacy and mitigation management approaches and strategies for attaining Universal Health Coverage (UHC) and the Sustainable Development Goals (SDGs). Early and continuous quality and timely surveillance data collection and coordinated information sharing practices in malaria and other diseases are demonstrated in Comoros, Zanzibar, Ghana and Cameroon. Improvements in access to contextual sources and networks of data sharing platforms are needed to guide evidence-based and tailored detection of, and response to, unusual hazardous events. Moreover, understanding threat and disease trends and frontline or point-of-care response delivery is crucial to promote the integrated and sustainable targeted local and national 'One Health' surveillance and response implementation that is needed. Importantly, operational guidelines are vital for increasing coherent financing and national workforce capacity development mechanisms, and for strengthening participatory partnerships, collaboration and monitoring strategies to achieve global health agenda effectiveness in Africa.
At the same time, enhancing the reporting and dissemination of surveillance data information streams is useful for informing policy decisions, health systems programming, and financial mobilization and prioritized allocation before, during and after threat and epidemic crises, and for identifying program strengths and weaknesses. Thus, capitalizing on 'One Health' surveillance and response advocacy and mitigation implementation is timely for consolidating the African Union Agenda 2063 and African renaissance capabilities and expectations.

Keywords: Africa, one health approach, surveillance, response

Procedia PDF Downloads 421
334 Sludge Marvel (Densification): The Ultimate Solution For Doing More With Less Effort!

Authors: Raj Chavan

Abstract:

At present, the United States is home to more than 14,000 Water Resource Recovery Facilities (WRRFs), of which approximately 35% have implemented nutrient limits of some kind. These WRRFs contribute 10 to 15% of the total nutrient burden to surface waters in the United States and account for approximately 1% of total power demand and 2% of total greenhouse gas (GHG) emissions. Several factors have driven the development of densification technologies in the direction of more compact and energy-efficient nutrient removal processes. Existing facilities that require capacity expansion, or biomass densification for greater treatability within the same footprint, are being subjected to stricter nutrient removal requirements prior to surface water discharge. Densification of activated sludge as a method for nutrient removal and process intensification at WRRFs has garnered considerable attention in recent times. The biological processes take place within aerobic sludge granules, which form the basis of the technology. The possibility of generating granular sludge in continuous (or conventional) activated sludge (CAS) processes, or of densifying biomass by transferring activated sludge flocs to denser biomass aggregates, as an exceptionally efficient intensification technique has generated considerable interest. This presentation aims to furnish attendees with a foundational comprehension of densification through the illustration of practical concerns and insights. The following subjects will be discussed: What are some potential techniques for producing and preserving densified granules? What processes are responsible for the densification of biological flocs? How do physical selectors contribute to the process of biological flocs becoming denser?
What viable strategies exist for the management of densified biological flocs, and which design parameters of physical selectors influence the retention of densified biological flocs? What operational solutions exist for floc and granule customization in order to meet capacity and performance objectives? The answers to these pivotal questions will be derived from existing full-scale treatment facilities, bench-scale and pilot-scale investigations, and existing literature data. By the conclusion of the presentation, the audience will possess a fundamental comprehension of the densification concept and its significance in attaining effective effluent treatment. Additionally, case studies pertaining to the design and operation of densification processes will be incorporated into the presentation.

Keywords: densification, intensification, nutrient removal, granular sludge

Procedia PDF Downloads 73
333 Bank Internal Controls and Credit Risk in Europe: A Quantitative Measurement Approach

Authors: Ellis Kofi Akwaa-Sekyi, Jordi Moreno Gené

Abstract:

Managerial actions which negatively profile banks and impair corporate reputation are addressed through effective internal control systems. Disregard for acceptable standards and procedures for granting credit has affected bank loan portfolios and could be cited in the crises in some European countries. The study intends to determine the effectiveness of internal control systems, investigate whether perceived agency problems exist on the part of board members, and establish the relationship between internal controls and credit risk among listed banks in the European Union. Drawing theoretical support from the behavioural compliance and agency theories, about seventeen internal control variables (drawn from the revised COSO framework), as well as bank-specific, country, stock market and macro-economic variables, will be involved in the study. A purely quantitative approach will be employed to model internal control variables covering the control environment, risk management, control activities, information and communication, and monitoring. Panel data from 2005-2014 on listed banks from 28 European Union countries will be used for the study. Hypotheses will be tested, and Generalized Least Squares (GLS) regression will be run to establish the relationship between dependent and independent variables. The Hausman test will be used to determine whether a random or fixed effects model is appropriate. It is expected that listed banks will have sound internal control systems, but their effectiveness cannot be confirmed. A perceived agency problem on the part of the board of directors is expected to be confirmed. The study expects a significant effect of internal controls on credit risk. The study will uncover another perspective on internal controls, as not only an operational risk issue but a credit risk issue too.
Banks should be mindful that maintaining effective internal control systems is an ethical and socially responsible act, since the collapse of financial institutions as a result of excessive default is a major source of contagion. This study deviates from the usual primary-data approach to measuring internal control variables and instead models internal control variables quantitatively for the panel data. Thus, a grey area in approaching the revised COSO framework for internal controls is opened for further research. Most bank failures and crises could be averted if effective internal control systems were rigorously adhered to.
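The model-selection step between random and fixed effects can be sketched with a generic implementation of the Hausman statistic, H = (b_FE - b_RE)' [V_FE - V_RE]^(-1) (b_FE - b_RE). This is an illustrative sketch under the usual assumption that the covariance difference is invertible, not the authors' actual estimation code:

```python
import numpy as np

def hausman(b_fe, cov_fe, b_re, cov_re):
    """Hausman specification test: a large statistic favours fixed effects.

    b_fe, b_re     : coefficient vectors from the FE and RE estimators
    cov_fe, cov_re : their covariance matrices
    Returns (statistic, degrees_of_freedom); the statistic is compared
    to a chi-squared critical value with that many degrees of freedom."""
    diff = np.asarray(b_fe, dtype=float) - np.asarray(b_re, dtype=float)
    v = np.asarray(cov_fe, dtype=float) - np.asarray(cov_re, dtype=float)
    stat = float(diff @ np.linalg.solve(v, diff))
    return stat, diff.size
```

If the statistic exceeds the chi-squared critical value, the random-effects estimator is inconsistent and the fixed-effects model is preferred.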

Keywords: agency theory, credit risk, internal controls, revised COSO framework

Procedia PDF Downloads 316
332 Performance Improvement of a Single-Flash Geothermal Power Plant Design in Iran: Combining with Gas Turbines and CHP Systems

Authors: Morteza Sharifhasan, Davoud Hosseini, Mohammad. R. Salimpour

Abstract:

Geothermal energy has been considered an important renewable energy worldwide in recent years due to rising environmental pollution concerns. Low- and medium-grade geothermal heat (< 200 °C) is commonly employed for space heating and in domestic hot water supply. However, there is also much interest in converting the abundant low- and medium-grade geothermal heat into electrical power. The Iranian Ministry of Power, through the Iran Renewable Energy Organization (SUNA), is going to build the first Geothermal Power Plant (GPP) in Iran in the Sabalan area in the northwest of Iran. This project is a 5.5 MWe single-flash steam condensing power plant. The efficiency of GPPs is low due to the relatively low pressure and temperature of the saturated steam. In addition to GPPs, Gas Turbines (GTs) are also known for their relatively low efficiency. The Iranian Ministry of Power is trying to increase the efficiency of these GTs by adding bottoming steam cycles to form what is known as a combined gas/steam cycle. One of the most effective methods for increasing efficiency is combined heat and power (CHP). This paper investigates the feasibility of superheating the saturated steam that enters the steam turbine of the Sabalan GPP (SGPP-1) to improve the energy efficiency and power output of the GPP. This purpose is achieved by combining the GPP with two 3.5 MWe GTs. In this method, the hot gases leaving the GTs are utilized in a superheater similar to that used in the heat recovery steam generator of a combined gas/steam cycle. Moreover, the brine separated in the separator and the hot gases leaving the GTs and superheater are used for the supply of domestic hot water (in this paper, the cycle combining the GPP with GTs and CHP systems is named the modified SGPP-1). In this research, based on the heat balance presented in the basic design documents of the SGPP-1, mathematical/numerical models of the power plant are developed together with the mentioned GTs and CHP systems.
Based on the required hot water, the amount of hot gas passed directly through the CHP section can be adjusted. For example, during summer, when less hot water is required, the hot gases leaving both GTs pass through the superheater and then the CHP systems. On the contrary, in order to supply the required hot water during winter, the hot gases of one of the GTs enter the CHP section directly, without passing through the superheater section. The results show an increase in thermal efficiency of up to 40% using the modified SGPP-1. Since the gross efficiency of SGPP-1 is 9.6%, the achieved increase in thermal efficiency is significant. The power output of SGPP-1 is increased by up to 40% in summer (from 5.5 MW to 7.7 MW), while the GTs' power output remains almost unchanged. Meanwhile, the total output increases from the 12.5 MW [5.5 + (2 × 3.5)] of the two separate plants to the combined-cycle output of 14.7 MW [7.7 + (2 × 3.5)]. This output is more than 17% above the output of the two separate plants. The modified SGPP-1 is capable of producing 215 t/h of hot water (90 °C) for domestic use in the winter months.
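The power-output bookkeeping quoted above can be checked in a few lines; the figures are taken directly from the abstract, and the helper function is purely illustrative:

```python
def combined_output(gpp_mw, gt_mw, n_gt):
    """Total electrical output of the geothermal plant plus n_gt gas turbines."""
    return gpp_mw + n_gt * gt_mw

# Figures quoted in the abstract
separate = combined_output(5.5, 3.5, 2)   # two separate plants: 12.5 MW
combined = combined_output(7.7, 3.5, 2)   # with superheated steam: 14.7 MW
gain = (combined - separate) / separate   # relative improvement (about 17.6%)
```

The computed gain of roughly 17.6% is consistent with the abstract's claim of "more than 17% above the output of the two separate plants".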

Keywords: combined cycle, CHP, efficiency, gas turbine, geothermal power plant, power output

Procedia PDF Downloads 322
331 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise from processing high temporal-resolution continuous GBSAR data, including extreme random-access memory (RAM) requirements, delays in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline-subset concept, and it processes continuous GBSAR images unit by unit: the images within a window form a basic unit. This strategy reduces the RAM requirement to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected, as the chain keeps temporarily coherent pixels that are present only in certain units rather than the whole observation period. The chain supports real-time processing of continuous data, and the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on the stack of images from a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence.
The temporal-averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially-coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring of a wide range of scientific and practical applications.
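The two core operations of the discontinuous-campaign chain, temporal averaging of a campaign stack and sample coherence estimation, can be sketched as below. This is a simplified whole-image coherence estimate; practical processors use sliding spatial windows plus the robust estimation and filtering refinements the abstract lists:

```python
import numpy as np

def temporal_average(stack):
    """Average a stack of complex SAR images from one campaign to raise
    the signal-to-noise ratio before interferometric processing."""
    return np.mean(stack, axis=0)

def coherence(s1, s2):
    """Magnitude of the sample coherence between two complex images,
    computed here over the whole image for brevity."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den
```

Two acquisitions differing only by a constant phase (i.e., a rigid displacement within the ambiguity interval) have coherence 1, while decorrelated scenes score lower, which is what drives pixel selection.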

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 161
330 Development of a Bi-National Thyroid Cancer Clinical Quality Registry

Authors: Liane J. Ioannou, Jonathan Serpell, Joanne Dean, Cino Bendinelli, Jenny Gough, Dean Lisewski, Julie Miller, Win Meyer-Rochow, Stan Sidhu, Duncan Topliss, David Walters, John Zalcberg, Susannah Ahern

Abstract:

Background: The occurrence of thyroid cancer is increasing throughout the developed world, including Australia and New Zealand, and since the 1990s has become the fastest increasing malignancy. Following the success of a number of institutional databases that monitor outcomes after thyroid surgery, the Australian and New Zealand Endocrine Surgeons (ANZES) agreed to auspice the development of a bi-national thyroid cancer registry. Objectives: To establish a bi-national population-based clinical quality registry with the aim of monitoring and improving the quality of care provided to patients diagnosed with thyroid cancer in Australia and New Zealand. Patients and Methods: The Australian and New Zealand Thyroid Cancer Registry (ANZTCR) captures clinical data for all patients, over the age of 18 years, diagnosed with thyroid cancer, confirmed by histopathology report, that have been diagnosed, assessed or treated at a contributing hospital. Data is collected by endocrine surgeons using a web-based interface, REDCap, primarily via direct data entry. Results: A multi-disciplinary Steering Committee was formed, and with operational support from Monash University the ANZTCR was established in early 2017. The pilot phase of the registry is currently operating in Victoria, New South Wales, Queensland, Western Australia and South Australia, with over 30 sites expected to come on board across Australia and New Zealand in 2018. A modified-Delphi process was undertaken to determine the key quality indicators to be reported by the registry, and a minimum dataset was developed comprising information regarding thyroid cancer diagnosis, pathology, surgery, and 30-day follow up. Conclusion: There are very few established thyroid cancer registries internationally, yet clinical quality registries have shown valuable outcomes and patient benefits in other cancers. 
The establishment of the ANZTCR provides the opportunity for Australia and New Zealand to further understand current practice in the treatment of thyroid cancer and the reasons for variation in outcomes. The engagement of endocrine surgeons in supporting this initiative is crucial. While the pilot registry has a focus on early clinical outcomes, it is anticipated that the future collection of longer-term outcome data, particularly for patients with poor-prognosis disease, will add significant further value to the registry.

Keywords: thyroid cancer, clinical registry, population health, quality improvement

Procedia PDF Downloads 192
329 Carbon Dioxide Capture and Utilization by Using Seawater-Based Industrial Wastewater and Alkanolamine Absorbents

Authors: Dongwoo Kang, Yunsung Yoo, Injun Kim, Jongin Lee, Jinwon Park

Abstract:

Since the industrial revolution, energy usage by human beings has drastically increased, resulting in enormous emissions of carbon dioxide into the atmosphere. The high concentration of carbon dioxide is well recognized as the main driver of climate change, as it breaks the heat equilibrium of the earth. In order to decrease the amount of carbon dioxide emitted, many technologies have been developed. One method is to capture carbon dioxide after the combustion process using liquid absorbents. However, for some nations, captured carbon dioxide cannot be treated and stored properly due to their geological structures. Also, captured carbon dioxide can leak out when crustal activity is high. Hence, methods to convert carbon dioxide into stable and useful products were developed, usually called CCU, that is, Carbon Capture and Utilization. There are several ways to convert carbon dioxide into useful substances. For example, carbon dioxide can be converted into and used as fuels such as diesel, plastics, and polymers. However, these types of technologies require a large amount of energy to turn stable carbon dioxide into a reactive form. Hence, converting it into metal carbonate salts has been studied widely. When carbon dioxide is captured by alkanolamine-based liquid absorbents, it exists in ionic forms such as carbonate, carbamate, and bicarbonate. When adequate metal ions are added, metal carbonate salts can be produced by ionic reaction with fast reaction kinetics. However, finding metal sources can be one of the obstacles to commercializing this method. If natural resources such as calcium oxide were used to supply calcium ions, treating carbon dioxide would not be economically feasible. In this research, highly concentrated industrial wastewater produced from a refined salt production facility has been used as the metal supplying source, especially for calcium cations.
To ensure purity of final products, calcium ions were selectively separated in the form of gypsum dihydrate. After that, carbon dioxide is captured using alkanolamine-based absorbents making carbon dioxide into reactive ionic form. And then, high purity calcium carbonate salt was produced. The existence of calcium carbonate was confirmed by X-Ray Diffraction (XRD) and Scanning Electron Microscopy (SEM) images. Also, carbon dioxide loading curves for absorption, conversion, and desorption were provided. Also, in order to investigate the possibility of the absorbent reuse, reabsorption experiments were performed either. Produced calcium carbonate as final products is seemed to have potential to be used in various industrial fields including cement and paper making industries and pharmaceutical engineering fields.

Keywords: alkanolamine, calcium carbonate, climate change, seawater, industrial wastewater

Procedia PDF Downloads 185
328 Carbon-Supported Pd Nano-Particles as Green Catalysts for the Production of Fuels from Biomass

Authors: Andrea Dragu, Solen Kinayyigit, Valerie Colliere, Karin Philippot, Camelia Bala, Vasile I. Parvulescu

Abstract:

The production of transportation fuels from biomass has gained growing attention due to diminishing fossil fuel reserves, rising petroleum prices, and increasing concern about global warming. In recent years, renewable hydrocarbons that are completely fungible with fossil fuels have been shown to be efficiently produced by catalytic deoxygenation of fatty acids and their derivatives via decarboxylation/decarbonylation. Several triglycerides (tall oil fatty acids) and saturated/unsaturated fatty acids and their corresponding esters have been used as feedstocks. Their impact, together with the influence of reaction conditions and catalyst composition on the reaction pathways of the deoxygenation of vegetable oils and their derivatives, was recently reviewed. Following this state of the art, the aim of the present study was to investigate Pd nanoparticles (NPs) deposited onto mesoporous carbon supports as active and stable catalysts for the deoxygenation of oleic acid. The catalysts were prepared by depositing Pd NPs, synthesized by an organometallic route, on mesoporous carbons with different characteristics. Experiments were carried out under both batch and flow conditions. They demonstrated that under batch conditions (200 atm; 573 K), the extent of the reaction depended firstly on the Pd loading and then on the metal dispersion and the oxidation state of palladium, both influenced by how the support was treated before NP deposition and by the preparation/stabilization methodology of the Pd NPs. No aromatic compounds were detected in the reaction products, but octadecanol and octadecane were observed in large amounts. Under flow conditions (4 atm; 573 K), the conversion of stearic acid was superior to that observed under batch conditions. The product mixture contained over 20% heptadecane; no octadecanol, octadecane, or aromatic compounds were detected. Maximum performance was reached after only 0.5 h. After that, heptadecane yields decreased severely up to 3 h of reaction time. However, at that point, stopping the oleic acid feed and flushing the catalyst with mesitylene alone recovered the activity and selectivity of the catalysts. With the complete removal of H2, the analysis revealed heptadecene in high excess compared to heptadecane (almost 7 to 1), suggesting decarbonylation as the main route. ICP-OES measurements indicated no leaching of palladium, and simple washing of the catalysts with mesitylene allowed recycling without any change in conversion or product distribution. Notably, mesitylene as solvent exhibited no effect on this reaction. In conclusion, this study demonstrates the feasibility of such catalysts for the green production of fuels from biomass.

Keywords: fuels from biomass, green catalyst, Pd nanoparticles, recyclable catalyst

Procedia PDF Downloads 302
327 The Role of Emotions in Addressing Social and Environmental Issues in Ethical Decision Making

Authors: Kirsi Snellman, Johannes Gartner, Katja Upadaya

Abstract:

A transition towards a future in which the economy serves society, so that it evolves within the safe operating space of the planet, calls for fundamental changes in the way managers think, feel, act, and make decisions that relate to social and environmental issues. Sustainable decision-making in organizations is often a challenging task characterized by trade-offs between environmental, social, and financial aspects, and thus often brings forth ethical concerns. Although there have been significant developments in incorporating uncertainty into environmental decision-making and in measuring the constructs and dimensions of ethical behavior in organizations, the majority of sustainable decision-making models remain rationalist-based. Moreover, research in psychology indicates that one’s readiness to make a decision depends on the individual’s state of mind, the feasibility of the implied change, and the compatibility of strategies and tactics of implementation. Although very informative, most of this extant research is limited in that it directs attention towards the rational rather than the emotional. Hence, little is known about the role of emotions in sustainable decision-making, especially in situations where decision-makers evaluate a variety of options and use their feelings as a source of information in tackling uncertainty. To fill this lacuna, and to embrace the uncertainty and perceived risk involved in decisions that touch upon social and environmental aspects, it is important to add emotion to the evaluation when aiming to reach a right and good ethical decision outcome. This analysis builds on recent findings in moral psychology that associate feelings and intuitions with ethical decisions, and suggests that emotions can sensitize the manager to evaluate the rightness or wrongness of alternatives when ethical concerns are present in sustainable decision-making. Capturing such sensitive evaluation as triggered by intuition, we suggest that rational justification can be complemented by using emotions as a tool for tuning in to what feels right when making sustainable decisions. The analysis integrates ethical decision-making theories with recent advances in emotion theories. It determines the conditions under which emotions play a role in sustainability decisions by contributing to a personal equilibrium in which intuition and rationality are both activated and in accord. It complements the rationalist ethics view, according to which nothing fogs the mind in decision-making so thoroughly as emotion, and the concept of the cheater’s high, which links unethical behavior with positive affect. The analysis contributes to theory with a novel model that specifies when and why managers who are more emotional are, in fact, more likely to make ethical decisions than managers who are more rational. It also offers practical advice on how emotions can convert a manager’s preferences into choices that benefit both the common good and one’s own good throughout the transition towards a more sustainable future.

Keywords: emotion, ethical decision making, intuition, sustainability

Procedia PDF Downloads 132
326 Numerical Investigation of Plasma-Fuel System (PFS) for Coal Ignition and Combustion

Authors: Vladimir Messerle, Alexandr Ustimenko, Oleg Lavrichshev

Abstract:

To enhance the efficiency of solid fuel use, to decrease the share of fuel oil in the fuel balance of thermal power plants, and to minimize harmful emissions, a plasma technology for coal ignition, gasification, and incineration is successfully applied. This technology is plasma thermochemical preparation of fuel for burning (PTCPF). In this concept, a portion of the pulverized fuel (PF) is separated from the main PF flow and activated by arc plasma in a special chamber with a plasma torch, the plasma-fuel system (PFS). The air plasma flame is a source of heat and additional oxidation; it provides a high-temperature medium enriched with radicals, in which the fuel mixture is heated, volatile components of the coal are extracted, and the carbon is partially gasified. This activated blended fuel can then ignite the main PF flow supplied to the furnace. The technology provides boiler start-up and stabilization of the PF flame and eliminates the need for highly reactive supplementary fuel. This report describes a model of PTCPF, implemented as the program PlasmaKinTherm, for PFS calculation. The model combines thermodynamic and kinetic methods for describing the PTCPF process in the PFS. A numerical investigation of the operational parameters of the PFS, as functions of the electric power of the plasma generator and the ash content of the steam coal, yielded the dependences of the temperature and velocity of the gas and coal particles, and of the concentrations of PTCPF products, on PFS length. The main mechanisms of PTCPF were identified. It was found that over a plasma generator power range of 40 to 100 kW, high-ash bituminous coal at a consumption of 1667 kg/h is ignited stably, as confirmed by the high temperature (1740 K) and concentration of combustible components (44%) at the PFS exit. Increasing the power of the plasma generator shifts the temperature and velocity maxima of the PTCPF products upstream (towards the plasma source). The maximum temperature and velocity vary within a narrow range of values and are practically independent of the power of the plasma torch. A numerical study of PTCPF indicators over an ash content range of 20-70% demonstrated that, at the PFS exit, the concentration of combustible components decreases with increasing coal ash content, the temperature of the gaseous products increases, and the coal carbon conversion rate rises to a maximum at an ash content of 60%, decreasing dramatically with further increases in ash content.

Keywords: coal, efficiency, ignition, numerical modeling, plasma generator, plasma-fuel system

Procedia PDF Downloads 298
325 When the Lights Go Down in the Delivery Room: Lessons From a Ransomware Attack

Authors: Rinat Gabbay-Benziv, Merav Ben-Natan, Ariel Roguin, Benyamine Abbou, Anna Ofir, Adi Klein, Dikla Dahan-Shriki, Mordechai Hallak, Boris Kessel, Mickey Dudkiewicz

Abstract:

Introduction: Over recent decades, technology has become integral to healthcare, with electronic health records and advanced medical equipment now standard. However, this reliance has made healthcare systems increasingly vulnerable to ransomware attacks. On October 13, 2021, Hillel Yaffe Medical Center experienced a severe ransomware attack that disrupted all IT systems, including electronic health records, laboratory services, and staff communications. The attack, carried out by the group DeepBlueMagic, used advanced encryption to lock the hospital's systems and demanded a ransom. The incident caused significant operational and patient care challenges, particularly in the obstetrics department. Objective: To describe the challenges facing the obstetric division following a cyberattack and to discuss ways of preparing for and overcoming another one. Methods: A retrospective descriptive study was conducted in a mid-sized medical center. Division activities, including the number of deliveries, cesarean sections, emergency room visits, admissions, maternal-fetal medicine department occupancy, and ambulatory encounters, from 2 weeks before the attack to 8 weeks after it (11 weeks in total), were compared with the corresponding period in 2019 (pre-COVID-19). In addition, we present the challenges and adaptation measures taken at the division and hospital levels leading up to the resumption of full division activity. Results: On the day of the cyberattack, critical decisions were made. The media announced the event, calling on patients not to come to our hospital, and all elective activities other than cesarean deliveries were stopped. The numbers of deliveries, admissions, and emergency room and ambulatory clinic visits decreased by 5%–10% overall for 11 weeks, reflecting the decrease in division activity. Nevertheless, at all stations, sufficient adaptation measures were in place to ensure patient safety, sound decision-making, and an orderly patient workflow. Conclusions: The risk of ransomware cyberattacks is growing. Healthcare systems at all levels should recognize this threat and have protocols in place for dealing with such attacks when they occur.

Keywords: ransomware attack, healthcare cybersecurity, obstetrics challenges, IT system disruption

Procedia PDF Downloads 24
324 Use of Low-Cost Hydrated Hydrogen Sulphate-Based Protic Ionic Liquids for Extraction of Cellulose-Rich Materials from Common Wheat (Triticum Aestivum) Straw

Authors: Chris Miskelly, Eoin Cunningham, Beatrice Smyth, John D. Holbrey, Gosia Swadzba-Kwasny, Emily L. Byrne, Yoan Delavoux, Mantian Li

Abstract:

Recently, the use of ionic liquids (ILs) for the preparation of lignocellulose-derived cellulosic materials as alternatives to petrochemical feedstocks has been the focus of considerable research interest. While the technical viability of IL-based lignocellulose treatment methodologies is well established, the high cost of reagents inhibits commercial feasibility. This work aimed to assess the technoeconomic viability of preparing cellulose-rich materials (CRMs) using protic ionic liquids (PILs) synthesized from low-cost alkylamines and sulphuric acid. For this purpose, the tertiary alkylamines triethylamine and dimethylbutylamine were selected. The bulk-scale production cost of the synthesized PILs, triethylammonium hydrogen sulphate and dimethylbutylammonium hydrogen sulphate, was estimated at $0.78 kg-1 to $1.24 kg-1. CRMs were prepared by treating common wheat (Triticum aestivum) straw with these PILs. By controlling treatment parameters, CRMs with a cellulose content of ≥80 wt% were prepared. This was achieved using a T. aestivum straw-to-PIL loading ratio of 1:15 w/w, a treatment duration of 180 minutes, and ethanol as a cellulose antisolvent. Infrared spectra and the decreased onset degradation temperature of the CRMs (ΔT_onset ~70 °C) suggested the formation of cellulose sulphate esters during treatment. Such chemical derivatisation can aid the dispersion of the prepared CRMs in non-polar polymer/composite matrices, but acts as a barrier to thermal processing at temperatures above 150 °C. It was also shown that treatment increased the crystallinity of the CRMs (ΔCrI ~40%) without altering the native crystalline structure or crystallite size (~2.6 nm) of the cellulose; peaks associated with the cellulose I crystalline planes (110), (200), and (004) were observed at Bragg angles of 16.0°, 22.5°, and 35.0°, respectively. This highlighted the inability of the assessed PILs to dissolve crystalline cellulose and was attributed to the high acidity (pKa ~ -1.92 to -6.42) of the sulphuric-acid-derived anions. Electron micrographs revealed that the stratified multilayer tissue structure of untreated T. aestivum straw was significantly modified during treatment: the straw particles were disassembled, and the prepared CRMs adopted a golden-brown, film-like appearance. This work demonstrated the degradation of the non-cellulosic fractions of lignocellulose without dissolution of cellulose. It is the first to report the derivatisation of cellulose during treatment with protic hydrogen sulphate ionic liquids, and the potential implications of this for biopolymer feedstock preparation.
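Crystallite sizes such as the ~2.6 nm reported above are typically obtained from XRD peak broadening via the Scherrer equation, τ = Kλ/(β cos θ). A minimal sketch, assuming Cu Kα radiation (λ ≈ 0.15406 nm) and a shape factor K = 0.9; the peak width used here is illustrative, as the abstract does not report FWHM values:

```python
import math

def scherrer_crystallite_size(two_theta_deg, fwhm_deg,
                              wavelength_nm=0.15406, k=0.9):
    """Estimate crystallite size (nm) from a single XRD reflection
    via the Scherrer equation: tau = K * lambda / (beta * cos(theta))."""
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle theta, in radians
    beta = math.radians(fwhm_deg)              # peak FWHM, in radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Cellulose I (200) reflection near 2-theta = 22.5 degrees; an assumed
# FWHM of ~3.1 degrees reproduces a size close to the ~2.6 nm reported.
size = scherrer_crystallite_size(22.5, 3.1)
print(f"{size:.1f} nm")  # → 2.6 nm
```

Broader peaks imply smaller crystallites, so instrumental broadening should be subtracted from the measured FWHM before applying the equation in practice.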

Keywords: cellulose, extraction, protic ionic liquids, esterification, thermal stability, waste valorisation, biopolymer feedstock

Procedia PDF Downloads 36
323 Root Cause Analysis of a Catastrophically Failed Output Pin Bush Coupling of a Raw Material Conveyor Belt

Authors: Kaushal Kishore, Suman Mukhopadhyay, Susovan Das, Manashi Adhikary, Sandip Bhattacharyya

Abstract:

In integrated steel plants, conveyor belts are widely used for transferring raw materials from one location to another. An output pin bush coupling attached to a conveyor transferring iron ore fines and fluxes failed after two years of service life, leading to an operational delay of approximately 15 hours. This study focuses on failure analysis of the coupling and recommends countermeasures to prevent such failures in the future. The investigation consisted of careful visual observation, checking of operating parameters, stress calculation and analysis, macro- and micro-fractography, material characterization including chemical and metallurgical analysis, and tensile and impact testing. The fracture initiated from an unusually sharp double step, and multiple corrosion pits near the step aggravated the situation. The inner contact surface of the coupling revealed differential abrasion that created a macroscopic difference in the height of the component, pointing towards misalignment of the coupling beyond a threshold limit. In addition to these design and installation issues, the material of the coupling did not meet quality standards. It was made of grey cast iron with a graphite morphology intermediate between random distribution (Type A) and a rosette pattern (Type B). This manifested as a marked reduction in the impact toughness and tensile strength of the component. These findings corroborated well with the brittle mode of fracture, which might have occurred under minor impact loading as raw materials were dropped onto the conveyor belt from a height. A simulated study was conducted to examine the effect of corrosion pits on the tensile and impact toughness of grey cast iron. Pitting was observed to reduce tensile strength and ductility only marginally; however, it caused a marked (up to 45%) reduction in impact toughness. Thus, it became evident that failure of the coupling occurred due to a combination of factors: inferior material, misalignment, poor step design, and corrosion pitting. Recommendations for life enhancement of the coupling included the use of the tougher SG 500/7 grade, incorporation of a proper fillet radius at the step, correction of the alignment, and application of a corrosion-resistant organic coating to prevent pitting.

Keywords: brittle fracture, cast iron, coupling, double step, pitting, simulated impact tests

Procedia PDF Downloads 132
322 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B. Fritschi

Abstract:

Soybean and its derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the value of soybean production depends on the quality of the soybean seeds rather than on yield alone. Seed composition depends strongly on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth because of their frequent revisits worldwide. In this study, we estimate soybean seed composition while the plants are still in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and seed composition measured from the samples served as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector regression (SVR), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for textural features, which were concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results, while the algorithm that best predicted each of the six composition traits differed. GRU worked well for oil (R² = 0.53) and protein (R² = 0.36), whereas SVR and PLSR showed the best results for sucrose (R² = 0.74) and ash (R² = 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be more important than texture features. It is suggested to compute many vegetation indices for machine learning training and to select the best ones using feature selection methods. Overall, the study demonstrates the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be taken when designing plot size in such experiments to avoid mixed-pixel issues.
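PLSR, one of the better-performing algorithms above, can be sketched with a minimal single-response NIPALS implementation. The data below are synthetic stand-ins (the 462 PS image features and the measured seed traits are not reproduced here), and the held-out R² matches the metric reported in the abstract:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1 (single response). Returns (x_mean, y_mean, B)
    so that y_hat = (X_new - x_mean) @ B + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)          # weight vector
        t = Xk @ w                      # scores
        tt = t @ t
        p = Xk.T @ t / tt               # X loadings
        qk = (yk @ t) / tt              # y loading
        Xk = Xk - np.outer(t, p)        # deflate X and y
        yk = yk - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients
    return x_mean, y_mean, B

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic stand-in: 8 features instead of 462, a linear trait response.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
beta = rng.normal(size=8)
y = X @ beta + 0.1 * rng.normal(size=120)   # e.g. a seed oil proxy

x_mean, y_mean, B = pls1_fit(X[:90], y[:90], n_components=4)
y_hat = (X[90:] - x_mean) @ B + y_mean
print(f"held-out R^2 = {r_squared(y[90:], y_hat):.2f}")
```

In practice, the number of latent components is tuned by cross-validation, which is also the main guard against the overfitting noted for RFR and GBM above.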

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 137
321 Epidemiological Data of Schistosoma haematobium Bilharzia in Rural and Urban Localities in the Republic of Congo

Authors: Jean Akiana, Digne Merveille Nganga Bouanga, Nardiouf Sjelin Nsana, Wilfrid Sapromet Ngoubili, Chyvanelle Ndous Akiridzo, Vishnou Reize Ampiri, Henri-Joseph Parra, Florence Fenollar, Didier Raoult, Oleg Mediannikov, Cheikh Sadhibou Sokhna

Abstract:

Schistosoma haematobium schistosomiasis is an endemic disease whose level of human exposure, incidence, and attributed fatality unfortunately remain high worldwide. The erection of hydroelectric infrastructure constitutes a major factor in the emergence of this disease. In the Republic of the Congo, which considers industrialization and modernization two essential pillars of development, building the hydroelectric dam of Liouesso (19 MW) and conducting feasibility studies for the dams of Chollet (600 MW) in the Sangha, Sounda (1000 MW) in Kouilou, and Kouembali (150 MW) on the Lefini are necessary to increase the country's energy capacity. Likewise, the urbanization of formerly endemic localities should take into account the persistence of contamination points. However, health impact studies on the epidemiology of schistosomiasis in general, and urinary bilharzia in particular, have never been carried out in these areas, either before or after the erection of those dams. Participants completed an investigative questionnaire and underwent urinalysis both by dipstick and by microscopic examination of urine filtrate. Assessment of the genetic diversity of Schistosoma species populations was considered, and PCR analysis was used to confirm the dipstick and microscopy results. A total of 405 participants were registered in five localities. The sample was balanced, with a male/female ratio of around 1. The prevalence rate was 45% (55/123) in Nkayi and 10.40% (11/106) in Loudima, with one case in Mbomo (West Cuvette), probably imported, and zero cases in Liouesso and Kabo. The highest oviuria (number of eggs per volume of urine) was 150 S. haematobium eggs/10 ml in Nkayi, apart from the Mbomo case imported from Gabon, which showed 160 S. haematobium eggs/10 ml. The lowest oviuria was 2 S. haematobium eggs/10 ml. Prevalence rates are still high in semi-urban areas (Nkayi). As praziquantel treatments are available and effective, it is important to step up mass treatment campaigns in high-risk areas, as already largely initiated by the National Schistosomiasis Control Program.
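The prevalence figures above are simple proportions; a minimal sketch reproducing the Nkayi estimate and adding a 95% Wilson score interval (the interval is our illustration, not reported in the abstract):

```python
import math

def prevalence_with_wilson_ci(positives, n, z=1.96):
    """Point prevalence and approximate 95% Wilson score interval."""
    p = positives / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, centre - half, centre + half

# Nkayi figures from the abstract: 55 positive of 123 examined.
p, lo, hi = prevalence_with_wilson_ci(55, 123)
print(f"prevalence {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The Wilson interval is preferred over the simple normal approximation for small samples and proportions far from 50%, both common in village-level surveys like this one.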

Keywords: Bilharzia, Schistosoma haematobium, oviuria, urbanization, Congo

Procedia PDF Downloads 149