Search results for: optimizing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 743

173 Dynamic Simulation of a Hybrid Wind Farm with Wind Turbines and Distributed Compressed Air Energy Storage System

Authors: Eronini Iheanyi Umez-Eronini

Abstract:

Most studies and existing implementations of compressed air energy storage (CAES) coupled with a wind farm to overcome the intermittency and variability of wind power are based on bulk or centralized CAES plants. A dynamic model of a hybrid wind farm with wind turbines and distributed CAES, consisting of air storage tanks and compressor and expander trains at each wind turbine station, is developed and simulated in MATLAB. An ad hoc supervisory controller, in which the wind turbines are simply operated under classical power-optimizing region control while power production by the expanders and air storage by the compressors are scheduled, including modulation of the compressor power levels within a control range, is used to regulate overall farm power production to track a minute-scale (3-minute sampling period) TSO absolute power reference signal over an eight-hour period. Simulation results for real wind data input, with a simple wake field model applied to a hybrid plant composed of ten 5-MW wind turbines in a row and ten compatibly sized and configured diabatic CAES stations, show that the plant controller is able to track the power demand signal within an error band on the order of the electrical power rating of a single expander. This performance suggests that much improved results should be anticipated when the global D-CAES control is combined with power regulation of the individual wind turbines using available approaches for wind farm active power control. For a standalone power plant fuel-to-electricity efficiency estimate of up to 60%, the round-trip electrical storage efficiency computed for the distributed CAES, wherein heat generated by the running compressors is utilized to preheat the running high-pressure expanders while fuel is introduced and combusted before the low-pressure expanders, was comparable to reported round-trip electrical storage efficiencies for bulk adiabatic CAES.
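
A supervisory scheduling rule of the general kind described in this abstract can be sketched as a simple per-step dispatch decision. The plant-level compressor and expander limits, storage bookkeeping, and round-trip efficiency below are illustrative assumptions, not the controller or parameter values of the study.

```python
# Minimal sketch of a supervisory dispatch rule for a hybrid wind + distributed-CAES
# plant tracking a TSO power reference at a 3-minute sampling period. The plant-level
# compressor/expander limits, storage bookkeeping, and round-trip efficiency are
# illustrative assumptions, not values from the study.

def dispatch(p_wind, p_ref, soc, soc_max,
             p_comp_max=10.0, p_exp_max=10.0, eta_rt=0.6, dt_h=0.05):
    """One 3-minute step; returns (farm output in MW, updated storage in MWh)."""
    if p_wind >= p_ref:
        # Surplus wind: divert as much as possible to the compressors,
        # limited by their rating and the remaining storage headroom.
        p_comp = min(p_wind - p_ref, p_comp_max, (soc_max - soc) / (eta_rt * dt_h))
        return p_wind - p_comp, soc + eta_rt * p_comp * dt_h
    # Deficit: make it up with the expanders, limited by rating and stored energy.
    p_exp = min(p_ref - p_wind, p_exp_max, soc / dt_h)
    return p_wind + p_exp, soc - p_exp * dt_h

# Example step: wind falls 4 MW short of the reference and storage covers the gap.
out, soc = dispatch(p_wind=38.0, p_ref=42.0, soc=5.0, soc_max=20.0)
print(f"farm output {out:.1f} MW, storage left {soc:.2f} MWh")
```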

Keywords: hybrid wind farm, distributed CAES, diabatic CAES, active power control, dynamic modeling and simulation

Procedia PDF Downloads 49
172 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flow and the purpose of reporting the data are different and dependent on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form the dataset constructed for each time point, containing all the information required for freight moving decisions. As a significant amount of these data is used for various purposes, an integrating methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and the identification of the statistical significance of data reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the bounds of the standard normal distribution. In the study area, the test has not been widely applied by other authors, except where the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select those forms of genetic algorithm construction that offer more possibilities of extracting the best solution. For freight delivery management, schemas of genetic algorithm structure are used as a more effective technique. Accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for the multi-objective analysis, which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer service in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
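
As a concrete illustration of the Grubbs screening step at the 99% confidence level mentioned in this abstract, the following sketch applies a standard two-sided Grubbs test to a small series of readings; the sample values are invented for illustration only.

```python
# Illustrative two-sided Grubbs test at a 99% confidence level (alpha = 0.01),
# matching the data-validation step in the abstract. Sample values are invented.
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.01):
    """Return (is_outlier, index) for the single most extreme value in x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))
    g = abs(x[idx] - mean) / sd                      # Grubbs statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)      # critical t value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g > g_crit, idx

fuel_l_per_100km = [31.2, 29.8, 30.5, 30.1, 44.9, 30.7, 29.9]  # invented readings
flag, i = grubbs_outlier(fuel_l_per_100km)
print(f"outlier detected: {flag} at index {i}")
```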

Keywords: multi-objective, analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 158
171 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows

Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham

Abstract:

In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10% and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.
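
To make the kind of machine-learning detection pipeline described in this abstract concrete, here is a minimal sketch that trains a classifier on milking-system indicators such as electrical conductivity, milk yield, and somatic cell count. The synthetic data, feature set, labelling rule, and choice of a random forest are illustrative assumptions, not the systems or models reviewed in the abstract.

```python
# Minimal sketch of mastitis detection from automated-milking-system indicators.
# The synthetic data, feature set, labelling rule, and model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(5.5, 0.6, n),     # electrical conductivity (mS/cm)
    rng.normal(28, 6, n),        # milk yield (kg/day)
    rng.lognormal(4.5, 0.8, n),  # somatic cell count (x1000 cells/mL)
])
# Toy labelling rule: flag cows with elevated conductivity and SCC as mastitic.
y = ((X[:, 0] > 5.8) & (X[:, 2] > 120)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("sensitivity on held-out data:", recall_score(y_te, clf.predict(X_te)))
```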

Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis

Procedia PDF Downloads 35
170 Test Procedures for Assessing the Peel Strength and Cleavage Resistance of Adhesively Bonded Joints with Elastic Adhesives under Detrimental Service Conditions

Authors: Johannes Barlang

Abstract:

Adhesive bonding plays a pivotal role in various industrial applications, ranging from automotive manufacturing to aerospace engineering. The peel strength of adhesives, a critical parameter reflecting the ability of an adhesive to withstand external forces, is crucial for ensuring the integrity and durability of bonded joints. This study provides a synopsis of the methodologies, influencing factors, and significance of peel testing in the evaluation of adhesive performance. Peel testing involves the measurement of the force required to separate two bonded substrates under controlled conditions. This study systematically reviews the different testing techniques commonly applied in peel testing, including the widely used 180-degree peel test and the T-peel test. Emphasis is placed on the importance of selecting an appropriate testing method based on the specific characteristics of the adhesive and the application requirements. The influencing factors on peel strength are multifaceted, encompassing adhesive properties, substrate characteristics, environmental conditions, and test parameters. Through an in-depth analysis, this study explores how factors such as adhesive formulation, surface preparation, temperature, and peel rate can significantly impact the peel strength of adhesively bonded joints. Understanding these factors is essential for optimizing adhesive selection and application processes in real-world scenarios. Furthermore, the study highlights the role of peel testing in quality control and assurance, aiding manufacturers in maintaining consistent adhesive performance and ensuring the reliability of bonded structures. The correlation between peel strength and long-term durability is discussed, shedding light on the predictive capabilities of peel testing in assessing the service life of adhesive bonds. In conclusion, this study underscores the significance of peel testing as a fundamental tool for characterizing adhesive performance. By delving into testing methodologies, influencing factors, and practical implications, this study contributes to the broader understanding of adhesive behavior and fosters advancements in adhesive technology across diverse industrial sectors.

Keywords: adhesively bonded joints, cleavage resistance, elastic adhesives, peel strength

Procedia PDF Downloads 54
169 Inversely Designed Chipless Radio Frequency Identification (RFID) Tags Using Deep Learning

Authors: Madhawa Basnayaka, Jouni Paltakari

Abstract:

Fully passive backscattering chipless RFID tags are an emerging wireless technology with low cost, higher reading distance, and fast automatic identification without human intervention, unlike already available technologies such as optical barcodes. The design optimization of chipless RFID tags is crucial, as it requires replacing the integrated chips found in conventional RFID tags with printed geometric designs. These designs enable data encoding and decoding through backscattered electromagnetic (EM) signatures. The applications of chipless RFID tags have been limited by the constraints of data encoding capacity and the difficulty of designing accurate yet efficient configurations. The traditional approach to finding design parameters for a desired EM response involves iteratively adjusting the design parameters and simulating until the desired EM spectrum is achieved. However, traditional numerical simulation methods are limited in how efficiently they can optimize design parameters because of their speed and resource consumption. In this work, a deep neural network (DNN) is utilized to establish a correlation between the EM spectrum and the dimensional parameters of nested concentric rings, specifically square and octagonal. The proposed bi-directional DNN has two simultaneously running neural networks, namely spectrum prediction and design parameter prediction. First, the spectrum prediction DNN was trained to minimize the mean square error (MSE). After the training process was completed, the spectrum prediction DNN was able to accurately predict the EM spectrum for given input design parameters within a few seconds. Then, the trained spectrum prediction DNN was connected to the design parameter prediction DNN, and the two networks were trained simultaneously. For the first time in chipless tag design, design parameters were predicted accurately for a desired EM spectrum after training the bi-directional DNN. The model was evaluated using a randomly generated spectrum, and a tag was manufactured using the predicted geometrical parameters. The manufactured tags were successfully tested in the laboratory. The number of iterative computer simulations has been significantly decreased by this approach. Highly efficient and ultrafast bi-directional DNN models therefore allow rapid design of complex chipless RFID tags.
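
The two-network arrangement described in this abstract can be sketched as a forward (parameters-to-spectrum) net coupled to an inverse (spectrum-to-parameters) net. The layer sizes, placeholder training data, and the frozen-forward (tandem) training stage below are illustrative assumptions rather than the architecture or training schedule used in the study.

```python
# Sketch of a bi-directional (tandem) network for inverse tag design: a forward net
# maps ring dimensions -> EM spectrum, an inverse net maps spectrum -> dimensions.
# Layer sizes, placeholder data, and the frozen-forward stage are illustrative.
import torch
import torch.nn as nn

N_PARAMS, N_FREQ = 8, 128   # assumed: 8 ring dimensions, 128 spectrum points

forward_net = nn.Sequential(nn.Linear(N_PARAMS, 256), nn.ReLU(),
                            nn.Linear(256, 256), nn.ReLU(),
                            nn.Linear(256, N_FREQ))
inverse_net = nn.Sequential(nn.Linear(N_FREQ, 256), nn.ReLU(),
                            nn.Linear(256, 256), nn.ReLU(),
                            nn.Linear(256, N_PARAMS))
mse = nn.MSELoss()

# Stage 1: train the forward (spectrum-prediction) net on simulated pairs.
params = torch.rand(1024, N_PARAMS)             # placeholder training data
spectra = torch.rand(1024, N_FREQ)
opt_f = torch.optim.Adam(forward_net.parameters(), lr=1e-3)
for _ in range(200):
    opt_f.zero_grad()
    loss = mse(forward_net(params), spectra)
    loss.backward()
    opt_f.step()

# Stage 2: train the inverse (design-parameter) net through the frozen forward net,
# so predicted dimensions are judged by the spectrum they would produce.
for p in forward_net.parameters():
    p.requires_grad_(False)
opt_i = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
for _ in range(200):
    opt_i.zero_grad()
    loss = mse(forward_net(inverse_net(spectra)), spectra)
    loss.backward()
    opt_i.step()

# Inverse design: propose ring dimensions for a desired spectrum.
desired = torch.rand(1, N_FREQ)
proposed_dimensions = inverse_net(desired)
```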

Keywords: artificial intelligence, chipless RFID, deep learning, machine learning

Procedia PDF Downloads 20
168 Combined Mindfulness and Exercise Intervention for Depressive and Insomnia Symptoms in Chinese Students: A Pilot Randomized Controlled Trial

Authors: Xinli Chi, Xiaoqi Wei

Abstract:

Background: Body-mind theory holds that the mind and body are interconnected; on this basis, combining aerobic exercise with mindfulness-based training may be beneficial for mind-body health. However, there is limited evidence regarding their effects and potential mechanisms among Chinese university students. Therefore, the current study aims to examine the preliminary effects and feasibility of the combined intervention on depressive and insomnia symptoms, as well as to explore the underlying mechanisms. Methods: This is a two-arm pilot randomized controlled trial. Sixty-one Chinese university students were randomly allocated to an 8-week combined intervention group (aerobic exercise plus mindfulness, N = 36) or a control group (N = 36). In addition, 8 participants in the combined intervention group later volunteered to take part in semi-structured interviews. The Self-Rating Depression Scale (SDS) and the Youth Self-Rating Insomnia Scale (YSIS) were used to measure depressive and insomnia symptoms, respectively. The intervention outcomes and feasibility were tested by repeated-measures ANOVA, a mediation model, and qualitative analysis. Results: The study included 31 participants in the intervention group and 30 participants in the control group, all of whom completed pre-test and post-test questionnaires. The results of the repeated-measures ANOVA showed that the combined intervention was effective in reducing depressive and insomnia symptoms among university students. Moreover, the mediation analysis suggested that improvement in insomnia symptoms might be a significant mechanism of the combined intervention. Qualitative analysis identified two main themes: “Helpful aspects of mind-body state” (including 7 sub-themes) and “Factors that influence the training effects” (including 3 sub-themes). Conclusions: The study confirmed the preliminary effect and feasibility of the combined intervention of mindfulness and aerobic exercise, while also exploring the potential mechanisms underlying this effect. Additionally, qualitative data provided valuable insights for optimizing future protocols.
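
The mediation step mentioned in this abstract can be illustrated with a simple regression-based sketch (paths a, b, and a bootstrapped indirect effect). The variable coding, toy data, and bootstrap settings below are placeholders; the study's actual mediation specification and dataset are not reproduced here.

```python
# Minimal regression-based mediation sketch: does change in insomnia (M) mediate
# the effect of group assignment (X) on change in depression (Y)?
# Data and variable coding are placeholders, not the trial's dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 61
X = rng.integers(0, 2, n)                       # 1 = combined intervention, 0 = control
M = -1.5 * X + rng.normal(0, 1, n)              # change in insomnia score (toy)
Y = -1.0 * M - 0.3 * X + rng.normal(0, 1, n)    # change in depression score (toy)

a = sm.OLS(M, sm.add_constant(X)).fit().params[1]                         # path a: X -> M
b = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit().params[2]   # path b: M -> Y | X
indirect = a * b

# Percentile bootstrap for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a_s = sm.OLS(M[idx], sm.add_constant(X[idx])).fit().params[1]
    b_s = sm.OLS(Y[idx], sm.add_constant(np.column_stack([X[idx], M[idx]]))).fit().params[2]
    boot.append(a_s * b_s)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```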

Keywords: combined intervention, mindfulness, aerobic exercise, depressive symptoms, insomnia symptoms

Procedia PDF Downloads 66
167 Considerations When Using the Beach Chair Position for Surgery

Authors: Aniko Babits, Ahmad Daoud

Abstract:

Introduction: The beach chair position (BCP) is a good approach to almost all types of shoulder procedures. However, moving an anaesthetized patient from the supine to sitting position may pose a risk of cerebral hypoperfusion and potential cerebral ischaemia as a result of significant reductions in blood pressure and cardiac output. Hypocapnia in ventilated patients and impaired blood flow to the vertebral artery due to hyperextension, rotation, or tilt of the head may have an impact too. Co-morbidities that may increase the risk of cerebral ischaemia in the BCP include diabetes with autonomic neuropathy, cerebrovascular disease, cardiac disease, severe hypertension, generalized vascular disease, history of fainting, and febrile conditions. Beach chair surgery requires a careful anaesthetic and surgical management to optimize patient safety and minimize the risk of adverse outcomes. Methods: We describe the necessary steps for optimal patient positioning and the aims of intraoperative management, including anaesthetic techniques to ensure patient safety in the BCP. Results: Regardless of the anaesthetic technique, adequate patient positioning is paramount in the BCP. The key steps to BCP are aimed at optimizing surgical success and minimizing the risk of severe neurovascular complications. The primary aim of anaesthetic management is to maintain cardiac output and mean arterial pressure (MAP) to protect cerebral perfusion. Blood pressure management includes treating a fall in MAP of more than 25% from baseline or a MAP less than 70 mmHg. This can be achieved by using intravenous fluids or vasopressors. A number of anaesthetic techniques could also improve cerebral oxygenation, including avoidance of intermittent positive pressure ventilation (IPPV) with general anaesthesia (GA), using regional anaesthesia, maintaining normocapnia and normothermia, and the application of compression stockings. Conclusions: In summary, BCP is a reliable and effective position to perform shoulder procedures. Simple steps to patient positioning and careful anaesthetic management could maximize patient safety and avoid unwanted adverse outcomes in patients undergoing surgery in BCP.

Keywords: beach chair position, cerebral oxygenation, cerebral perfusion, sitting position

Procedia PDF Downloads 67
166 Electrospun Fibre Networks Loaded with Hydroxyapatite and Barium Titanate as Smart Scaffolds for Tissue Regeneration

Authors: C. Busuioc, I. Stancu, A. Nicoara, A. Zamfirescu, A. Evanghelidis

Abstract:

The field of tissue engineering has expanded its potential due to the use of composite biomaterials belonging to increasingly complex systems, leading to bone substitutes with properties that are continuously improving to meet the patient's specific needs. Furthermore, the development of biomaterials based on ceramic and polymeric phases is an unlimited resource for future scientific research, with the final aim of restoring the original tissue functionality. Thus, in the first stage, composite scaffolds based on polycaprolactone (PCL) or polylactic acid (PLA) and inorganic powders were prepared by employing the electrospinning technique. The targeted powders were commercial and laboratory-synthesized hydroxyapatite (HAp), as well as barium titanate (BT). By controlling the concentration of the powder within the precursor solution, together with the processing parameters, different types of three-dimensional architectures were achieved. In the second stage, both the mineral powders and the hybrid composites were investigated in terms of composition, crystalline structure, and microstructure so as to demonstrate their suitability for tissue engineering applications. The scaffolds were proven to be homogeneous over large areas and loaded with mineral particles in different proportions. The biological assays demonstrated that the addition of inorganic powders leads to modified responses in the presence of simulated body fluid (SBF) or cell cultures. Through SBF immersion, biodegradability coupled with bioactivity was highlighted, with fiber fragmentation and surface degradation, as well as apatite layer formation, within the testing period. Moreover, the final composites represent supports accepted by the cells, favoring implant integration. In conclusion, the proposed fibrous materials based on bioresorbable polymers and mineral powders, produced by the electrospinning technique, are candidates with considerable potential in the field of tissue engineering. Future improvements can be attained by optimizing the synthesis process or by the simultaneous incorporation of multiple inorganic phases with well-defined biological action in order to fabricate multifunctional composites.

Keywords: barium titanate, electrospinning, fibre networks, hydroxyapatite, smart scaffolds

Procedia PDF Downloads 88
165 Application of Neutron-Gamma Technologies for Soil Elemental Content Determination and Mapping

Authors: G. Yakubova, A. Kavetskiy, S. A. Prior, H. A. Torbert

Abstract:

In-situ soil carbon determination over large soil surface areas (several hectares) is required in regard to carbon sequestration and carbon credit issues. This capability is important for optimizing modern agricultural practices and enhancing soil science knowledge. Collecting and processing representative field soil cores for traditional laboratory chemical analysis is labor-intensive and time-consuming. The neutron-stimulated gamma analysis method can be used for in-situ measurements of primary elements in agricultural soils (e.g., Si, Al, O, C, Fe, and H). This non-destructive method can assess several elements in large soil volumes with no need for sample preparation. Neutron-gamma soil elemental analysis utilizes gamma rays issued from different neutron-nuclei interactions. This has become possible due to the availability of commercial portable pulsed neutron generators, high-efficiency gamma detectors, reliable electronics, and measurement/data processing software, complemented by advances in state-of-the-art nuclear physics methods. In Pulsed Fast Thermal Neutron Analysis (PFTNA), soil irradiation is accomplished using a pulsed neutron flux, and gamma spectra are acquired both during and between pulses. This allows the inelastic neutron scattering (INS) gamma spectrum to be separated from the thermal neutron capture (TNC) spectrum. Based on PFTNA, a mobile system for field-scale soil elemental determinations (primarily carbon) was developed and constructed. Our scanning methodology acquires data that can be directly used for creating soil elemental distribution maps (based on ArcGIS software) in a reasonable timeframe (~20-30 hectares per working day). The resulting maps are suitable for both agricultural purposes and carbon sequestration estimates. The measurement system design, spectra acquisition process, strategy for acquiring field-scale carbon content data, and mapping of agricultural fields will be discussed.
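
The gated during-pulse/between-pulse separation described in this abstract can be illustrated numerically. The channel counts, gate widths, and the simplifying assumption that the between-pulse gate contains only capture and background events are illustrative; they are not the acquisition settings of the system discussed here.

```python
# Sketch of separating the inelastic-scattering (INS) spectrum from the
# thermal-capture (TNC) spectrum in PFTNA via gated acquisition. Counts, gate
# widths, and the "between-pulse = capture + background only" simplification
# are illustrative assumptions.
import numpy as np

n_channels = 1024
during_pulse = np.random.poisson(50, n_channels).astype(float)   # INS + TNC + background
between_pulse = np.random.poisson(30, n_channels).astype(float)  # TNC + background only

# Scale the between-pulse gate to the during-pulse live time before subtracting;
# the gate widths (microseconds) below are assumed acquisition settings.
t_gate_on, t_gate_off = 25.0, 75.0
ins_spectrum = during_pulse - between_pulse * (t_gate_on / t_gate_off)
tnc_spectrum = between_pulse

# A net carbon signal would then be taken from the 4.44 MeV region of ins_spectrum,
# e.g. by summing counts over the channels covering that peak (channel mapping assumed).
```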

Keywords: neutron gamma analysis, soil elemental content, carbon sequestration, carbon credit, soil gamma spectroscopy, portable neutron generators, ArcMap mapping

Procedia PDF Downloads 68
164 Feasibility Study of Particle Image Velocimetry in the Muzzle Flow Fields during the Intermediate Ballistic Phase

Authors: Moumen Abdelhafidh, Stribu Bogdan, Laboureur Delphine, Gallant Johan, Hendrick Patrick

Abstract:

This study is part of an ongoing effort to improve the understanding of phenomena occurring during the intermediate ballistic phase, such as muzzle flows. A thorough comprehension of muzzle flow fields is essential for optimizing muzzle device and projectile design. This flow characterization has heretofore been almost entirely limited to local and intrusive measurement techniques, such as pressure measurements using pencil probes. Consequently, the body of quantitative experimental data is limited, as is the number of numerical codes validated in this field. The objective of the work presented here is to demonstrate the applicability of the Particle Image Velocimetry (PIV) technique in the challenging environment of the propellant flow of a .300 Blackout weapon to provide accurate velocity measurements. The key points of a successful PIV measurement are the selection of the particle tracer, the seeding technique, and the tracking characteristics. We have experimentally investigated the aforementioned points by evaluating the resistance, gas dispersion, laser light reflection, and the response to a step change across the Mach disk for five different solid tracers using two seeding methods. To this end, an experimental setup was built, consisting of a PIV system, combustion chamber pressure measurement, classical high-speed schlieren visualization, and an aerosol spectrometer. The latter is used to determine the particle size distribution in the muzzle flow. The experimental results demonstrated the ability of PIV to accurately resolve the salient features of the propellant flow, such as the underexpanded jet and vortex rings, as well as the instantaneous velocity field, with maximum centreline velocities of more than 1000 m/s. In addition, unburned particles naturally present in the gas, as well as solid ZrO₂ particles with a nominal size of 100 nm coated on the propellant powder, are suitable tracers. However, the TiO₂ particles intended to act as a tracer surprisingly not only melted but also acted as a combustion accelerator and decreased the number of particles in the propellant gas.

Keywords: intermediate ballistic, muzzle flow fields, particle image velocimetry, propellant gas, particle size distribution, under expanded jet, solid particle tracers

Procedia PDF Downloads 137
163 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances

Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst

Abstract:

Surface Enhanced Raman Spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solution, even in aqueous solutions, which is its advantage over IR. This technique can be used in pharmacy (to check the purity of products), forensics (to determine whether any illegal substances were present at a crime scene), or medicine (serving as a medical test), and much more. Due to the high potential of this technique, its increasing popularity in analytical laboratories, and, simultaneously, the absence of appropriate platforms enhancing the SERS signal (crucial for observing the Raman effect at low analyte concentrations in solution, around 1 ppm), we decided to develop our own SERS platforms. As the enhancing layer, we chose gold and silver nanoparticles, because these two have the best SERS properties and each has an affinity for different kinds of analytes, which increases the range of research capabilities. The next step was to commercialize them, which resulted in the creation of the company ‘SERSitive.eu’, focused on the production of highly sensitive (EF = 10⁵ – 10⁶), homogeneous, and reproducible (70 - 80%) substrates. SERSitive SERS substrates are made using the electrodeposition of silver or silver-gold nanoparticles. Thanks to a very detailed analysis of data from studies optimizing such parameters as deposition time, temperature of the reaction solution, applied potential, reducer used, and reagent concentrations, using a standardized compound, p-mercaptobenzoic acid (PMBA), at a concentration of 10⁻⁶ M, we have developed a high-performance process for depositing precious metal nanoparticles on the surface of ITO glass. To check the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have a great affinity for metal surfaces (e.g., PMBA), we also obtained very good results for analytes less suited to SERS measurements. We successfully obtained intense and, more importantly, very repeatable spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M), and medicines, and finally for bacteria (Listeria, Salmonella, Escherichia coli) and fungi.

Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive

Procedia PDF Downloads 129
162 Magnetoelastically Induced Perpendicular Magnetic Anisotropy and Perpendicular Exchange Bias of CoO/CoPt Multilayer Films

Authors: Guo Lei, Wang Yue, Nakamura Yoshio, Shi Ji

Abstract:

Recently, perpendicular exchange bias (PEB) has become an active topic attracting continuous research effort. Since its discovery, extrinsic control of PEB has been pursued, owing to its scientific significance for spintronic devices and its potential application in high-density magnetic random access memory with perpendicular magnetic tunneling junctions (p-MTJ). To our knowledge, research aiming to control PEB has so far focused mainly on enhancing the interfacial exchange coupling by adjusting the FM/AFM interface roughness, or on optimizing the crystalline structure of the FM or AFM layer by employing different seed layers. In the present work, the effect of magnetoelastically induced PMA on PEB has been explored in [CoO5nm/CoPt5nm]5 multilayer films. We find that the PMA strength of the FM layer also plays an important role in the PEB at the FM/AFM interface, and that it is effective to control the PEB of [CoO5nm/CoPt5nm]5 multilayer films by changing the magnetoelastically induced PMA of the CoPt layer. [CoO5nm/CoPt5nm]5 multilayer films were deposited by magnetron sputtering on fused quartz substrates at room temperature, then annealed at 100°C, 250°C, 300°C and 375°C for 3 h, respectively. XRD results reveal that all the samples are well crystallized with a preferred fcc CoPt (111) orientation. The continuous multilayer structure, with a sharp composition transition at the CoO5nm/CoPt5nm interface, is clearly identified by transmission electron microscopy (TEM), x-ray reflectivity (XRR), and atomic force microscopy (AFM). The in-plane tensile stress of the CoPt layer is calculated by the sin²φ method, and we find that it increases gradually upon annealing, from 0.99 GPa (as-deposited) up to 3.02 GPa (300°C-annealed). As to the magnetic properties, a significant enhancement of PMA is achieved in the [CoO5nm/CoPt5nm]5 multilayer films after annealing, due to the increase of the CoPt layer in-plane tensile stress. With the enhancement of the magnetoelastically induced PMA, a great improvement of PEB is also achieved in the [CoO5nm/CoPt5nm]5 multilayer films, increasing from 130 Oe (as-deposited) up to 1060 Oe (300°C-annealed), showing the same trend as the PMA and a strong correlation with the CoPt layer in-plane tensile stress. We consider that it is the increase of the CoPt layer in-plane tensile stress that leads to the enhancement of PMA, and thus the enhancement of the magnetoelastically induced PMA results in the improvement of PEB in the [CoO5nm/CoPt5nm]5 multilayer films.

Keywords: perpendicular exchange bias, magnetoelastically induced perpendicular magnetic anisotropy, [CoO5nm/CoPt5nm]5 multilayer film with in-plane stress, perpendicular magnetic tunneling junction

Procedia PDF Downloads 435
161 Evaluation of Washing Performance of Household Wastewater Purified by Advanced Oxidation Process

Authors: Nazlı Çetindağ, Pelin Yılmaz Çetiner, Metin Mert İlgün, Emine Birci, Gizemnur Yıldız Uysal, Özcan Hatipoğlu, Ehsan Tuzcuoğlu, Gökhan Sır

Abstract:

Water conservation, efficient management of household water, and alternative treatment solutions are of growing importance. In this context, advanced technologies such as the advanced oxidation process have emerged as promising methods for treating household wastewater. Evaluating household water usage is critical for the sustainability of water resources, and researchers and experts are examining various technological approaches to effectively treat and reclaim water for reuse. In this framework, the advanced oxidation process has proven to be an effective method for removing various organic and inorganic pollutants from household wastewater. In this study, washing performance is evaluated by comparison with a reference case, following an international standard that simulates the washing of home textile products and determines various performance parameters. The specially designed stain strips used in the experiments, which include sebum, carbon black, blood, cocoa, and red wine, represent a variety of household stains. These stain types were carefully selected to represent challenging stain scenarios, ensuring a realistic assessment of washing performance. Experiments conducted under different temperatures and program conditions successfully demonstrate the practical applicability of the advanced oxidation process for treating household wastewater. It is important to note that both adherence to standards and the use of real-life stain types contribute to the broad applicability of the findings. In conclusion, this study strongly supports the effectiveness of treating household wastewater with the advanced oxidation process, in terms of washing performance, under both standard and practical application conditions. The study underlines the importance of alternative solutions for sustainable water resource management and highlights the potential of the advanced oxidation process in the treatment of household water, contributing significantly to optimizing water usage and developing sustainable water management solutions.

Keywords: advanced oxidation process, household water usage, household appliance waste water, modelling, water reuse

Procedia PDF Downloads 40
160 Structural Analysis and Modelling in an Evolving Iron Ore Operation

Authors: Sameh Shahin, Nannang Arrys

Abstract:

Optimizing pit slope stability and reducing the strip ratio of a mining operation are two key tasks in geotechnical engineering. With growing demand for minerals and increasing extraction costs, companies are constantly re-evaluating the viability of mineral deposits and challenging their geological understanding. Within Rio Tinto Iron Ore, the Structural Geology (SG) team investigates and collects critical data, such as point-based orientations, mapping, and geological inferences from adjacent pits, to re-model deposits where previous interpretations have failed to account for structurally controlled slope failures. Utilizing innovative data collection methods and data-driven investigation, SG aims to address the root causes of slope instability. Committing to a resource grid drill campaign as the primary source of data collection will often bias data collection to a specific orientation and significantly reduce the capability to identify and qualify complexity. Consequently, these limitations make it difficult to construct a realistic and coherent structural model that identifies adverse structural domains. Without consideration of complexity and the capability to capture these structural domains, mining operations run the risk of inadequately designed slopes that may fail and potentially harm people. Regional structural trends have been considered in conjunction with surface and in-pit mapping data to model multi-batter fold structures that were absent from previous iterations of the structural model. The risk is evident in newly identified dip-slope and rock-mass-controlled sectors of the geotechnical design, rather than a ubiquitous dip-slope sector across the pit. The reward is two-fold: 1) providing sectors of rock-mass-controlled design in previously interpreted structurally controlled domains, and 2) the opportunity to optimize the slope angle for mineral recovery and a reduced strip ratio. Furthermore, the result is a high-confidence model with structures and geometries that can account for historic slope instabilities in structurally controlled domains where design assumptions failed.

Keywords: structural geology, geotechnical design, optimization, slope stability, risk mitigation

Procedia PDF Downloads 13
159 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator

Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya

Abstract:

Bhabha Atomic Research Centre, Trombay, is developing a low energy high intensity proton accelerator (LEHIPA) as a pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as the major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides a longitudinal E-field for the acceleration of charged particles. The RF design of the drift tubes of the DTL was carried out to maximize the shunt impedance; this demands that the diameter of the drift tubes (DTs) be as low as possible. The width of the DT is, however, determined by the particle β and the trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets due to the relatively high focusing strength of the former over the latter. The limited volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles rather than electromagnetic quadrupoles (EMQ). This provides another advantage, as Joule heating, which would have added thermal load in a continuous-cycle accelerator, is avoided. The beam dynamics requires the uniformity of the integral magnetic gradient to be better than ±0.5%, with a nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare earth permanent magnets. The paper discusses the results of the fabrication and qualification of five pre-series prototype permanent magnet quadrupoles and a full-scale DT developed with embedded PMQs. The paper discusses the magnetic pole design for optimizing the integral Gdl uniformity and the values of higher-order multipoles. A novel but simple method of tuning the integral Gdl is also discussed.

Keywords: DTL, focusing, PMQ, proton, rare earth magnets

Procedia PDF Downloads 444
158 Investigation for Pixel-Based Accelerated Aging of Large Area Picosecond Photo-Detectors

Authors: I. Tzoka, V. A. Chirayath, A. Brandt, J. Asaadi, Melvin J. Aviles, Stephen Clarke, Stefan Cwik, Michael R. Foley, Cole J. Hamel, Alexey Lyashenko, Michael J. Minot, Mark A. Popecki, Michael E. Stochaj, S. Shin

Abstract:

Micro-channel plate photo-multiplier tubes (MCP-PMTs) have become ubiquitous and are widely considered potential candidates for next-generation High Energy Physics experiments due to their picosecond timing resolution, ability to operate in strong magnetic fields, and low noise rates. A key factor that determines the applicability of MCP-PMTs is their lifetime, especially when they are used in high event rate experiments. We have developed a novel method for investigating the aging behavior of an MCP-PMT on an accelerated basis. The method involves exposing a localized region of the MCP-PMT to photons at a high repetition rate. This pixel-based method was inspired by earlier results showing that damage to the photocathode of the MCP-PMT occurs primarily at the site of light exposure and that the surrounding region undergoes minimal damage. One advantage of the pixel-based method is that it allows the dynamics of photocathode damage to be studied at multiple locations within the same MCP-PMT under different operating conditions. In this work, we use the pixel-based accelerated lifetime test to investigate the aging behavior of a 20 cm x 20 cm Large Area Picosecond Photo Detector (LAPPD) manufactured by INCOM Inc. at multiple locations within the same device under different operating conditions. We compare the aging behavior of the MCP-PMT obtained from the first lifetime test conducted under high gain conditions to the lifetime obtained at a different gain. Through this work, we aim to correlate the lifetime of the MCP-PMT with the rate of ion feedback, which is a function of the gain of each MCP and which can also vary from point to point across a large area (400 cm²) MCP. The tests were made possible by the uniqueness of the LAPPD design, which allows independent control of the gain of the chevron-stacked MCPs. We will further discuss the implications of our results for optimizing the operating conditions of the detector when used in high event rate experiments.

Keywords: electron multipliers (vacuum), LAPPD, lifetime, micro-channel plate photo-multipliers tubes, photoemission, time-of-flight

Procedia PDF Downloads 138
157 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques

Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng

Abstract:

The nature of tires makes them extremely challenging to recycle due to their chemically cross-linked polymer structure; therefore, they are neither fusible nor soluble and, consequently, cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water, and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases, and oil. The composition of oils derived from waste tires has properties in common with commercial diesel fuel. The problem associated with the light oil derived from the pyrolysis of waste tires is that it has a high sulfur content (> 1.0 wt.%) and therefore emits harmful sulfur oxide (SOx) gases to the atmosphere when combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species from liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for the efficient removal of complex and non-complex sulfur species in TPO. The study focuses on optimizing the cleaning process (removal of impurities and asphaltenes) by varying process parameters: temperature, stirring speed, acid/oil ratio, and time. The treated TPO will then be sent for vacuum distillation to obtain the desired diesel-like fuel. The effect of temperature, pressure, and time will be determined for the vacuum distillation of both the raw TPO and the acid-treated oil for comparison purposes. Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones via a photocatalyzed system using TiO2 as the catalyst and hydrogen peroxide as the oxidizing agent, and finally acetonitrile will be used as the extraction solvent. Adsorptive desulphurization will then be used to adsorb traces of sulfurous compounds remaining after the photocatalytic desulphurization step. This desulphurization sequence is expected to give high desulphurization efficiency with reasonable oil recovery.

Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis

Procedia PDF Downloads 248
156 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and that past data points therefore only have limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between the two adverse forces of estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, in this paper, we give a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only dependence has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
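
The convergence-versus-representativeness tradeoff described in this abstract can be illustrated with a stylized calculation: a trailing mean over a longer window has lower variance but picks up more bias when the underlying law drifts. The drift model, error decomposition, and parameter values below are illustrative assumptions, not the paper's formal semimartingale construction.

```python
# Stylized illustration of the tradeoff: estimating a slowly drifting mean from the
# last n observations. Variance shrinks like sigma^2/n, while using old data adds a
# bias that grows with n because the underlying law drifts. Drift model and values
# are illustrative assumptions only.
import numpy as np

sigma = 1.0          # observation noise
drift = 0.02         # assumed per-period change in the quantity being estimated

def expected_mse(n):
    variance = sigma**2 / n
    # Average squared bias of the trailing mean when the true value moves linearly:
    # the estimator lags the current value by roughly drift*(n-1)/2.
    bias = drift * (n - 1) / 2
    return variance + bias**2

windows = np.arange(1, 500)
errors = np.array([expected_mse(n) for n in windows])
n_opt = windows[np.argmin(errors)]
print(f"optimal window: {n_opt} observations, expected MSE {errors.min():.4f}")
```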

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 120
155 Analysis and Optimized Design of a Packaged Liquid Chiller

Authors: Saeed Farivar, Mohsen Kahrom

Abstract:

The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the model outputs include system performance variables such as power consumption, coefficient of performance (COP), and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for optimizing design and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model takes into account all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve, and connection pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow, and thermodynamic processes in each of these components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller is used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition is built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller. The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design and construction of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient design and analysis of compression chillers can take advantage of the presented study and its results.
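
As a minimal sketch of the kind of cycle-state calculation such a model performs, the following computes an ideal-cycle COP from assumed evaporating and condensing temperatures. The refrigerant choice, temperatures, and the isentropic-compression and isenthalpic-expansion simplifications are illustrative assumptions, and the CoolProp property library is assumed to be available; the paper's full thermo-hydraulic component model is far more detailed.

```python
# Minimal vapor-compression cycle calculation of the kind a chiller model performs:
# state points and COP for an ideal cycle. Refrigerant, temperatures, and the
# isentropic-compression / isenthalpic-expansion assumptions are illustrative.
from CoolProp.CoolProp import PropsSI

refrigerant = "R134a"
T_evap = 275.15   # K, evaporating temperature (assumed)
T_cond = 313.15   # K, condensing temperature (assumed)

p_cond = PropsSI("P", "T", T_cond, "Q", 1, refrigerant)

h1 = PropsSI("H", "T", T_evap, "Q", 1, refrigerant)   # saturated vapor at evaporator exit
s1 = PropsSI("S", "T", T_evap, "Q", 1, refrigerant)
h2 = PropsSI("H", "P", p_cond, "S", s1, refrigerant)  # isentropic compression to condenser pressure
h3 = PropsSI("H", "T", T_cond, "Q", 0, refrigerant)   # saturated liquid at condenser exit
h4 = h3                                               # isenthalpic expansion valve

cop = (h1 - h4) / (h2 - h1)   # refrigeration effect / compressor work
print(f"ideal-cycle COP = {cop:.2f}")
```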

Keywords: optimization, packaged liquid chiller, performance, simulation

Procedia PDF Downloads 255
154 High Performance Liquid Cooling Garment (LCG) Using ThermoCore

Authors: Venkat Kamavaram, Ravi Pare

Abstract:

Modern warfighters experience extreme environmental conditions in many of their operational and training activities. At temperatures exceeding 95°F, the body can no longer cool itself through convection and radiation. In this case, the only cooling mechanism is evaporation. However, evaporative cooling is often compromised by excessive humidity. Natural cooling mechanisms can be further compromised by clothing and protective gear, which trap hot air and moisture close to the body. Creating an efficient heat extraction apparel system that is also lightweight, without hindering the dexterity or mobility of personnel working in extreme temperatures, is a difficult technical challenge and one that needs to be addressed to increase the probability of future success for the US military. To address this challenge, Oceanit Laboratories, Inc. has developed and patented a Liquid Cooled Garment (LCG) more effective than any on the market today. Oceanit’s LCG is a form-fitting garment with a network of thermally conductive tubes that extracts body heat and can be worn under all authorized and chemical/biological protective clothing. Oceanit specifically designed and developed ThermoCore®, a thermally conductive polymer, for use in this apparel, optimizing the product for thermal conductivity, mechanical properties, manufacturability, and performance temperatures. Thermal manikin tests were conducted in accordance with the ASTM test method ASTM F2371, Standard Test Method for Measuring the Heat Removal Rate of Personal Cooling Systems Using a Sweating Heated Manikin, in an environmental chamber using a 20-zone sweating thermal manikin. Manikin test results have shown that Oceanit’s LCG provides significantly higher heat extraction under the same environmental conditions than the currently fielded Environmental Control Vest (ECV), while at the same time reducing the weight. Oceanit’s LCG vests performed nearly 30% better in extracting body heat while weighing 15% less than the ECV. There are no cooling garments on the market that provide the same thermal extraction performance, form factor, and reduced weight as Oceanit’s LCG. The two cooling garments that are commercially available and most commonly used are the Environmental Control Vest (ECV) and the Microclimate Cooling Garment (MCG).

Keywords: thermally conductive composite, tubing, garment design, form fitting vest, thermocore

Procedia PDF Downloads 96
153 Thermo-Hydro-Mechanical-Chemical Coupling in Enhanced Geothermal Systems: Challenges and Opportunities

Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo

Abstract:

Geothermal reservoirs (GTRs) have garnered global recognition as a sustainable energy source. Thermo-Hydro-Mechanical-Chemical (THMC) coupling proves to be a practical and effective method for optimizing production in GTRs. The study outcomes demonstrate that THMC coupling serves as a versatile and valuable tool, offering in-depth insights into GTRs and enhancing their operational efficiency. This is achieved through the analysis of temperature and pressure changes and their impacts on mechanical properties, structural integrity, fracture aperture, permeability, and heat extraction efficiency. Moreover, THMC coupling facilitates the assessment of the potential benefits and risks associated with different geothermal technologies, considering the complex thermal, hydraulic, mechanical, and chemical interactions within the reservoirs. However, the use of THMC coupling in GTRs presents a multitude of challenges. These challenges include accurately modeling and predicting behavior given the interconnected nature of the processes, limited data availability leading to uncertainties, the risk of induced seismic events for nearby communities, scaling and mineral deposition reducing operational efficiency, and the long-term sustainability of the reservoirs. In addition, material degradation, environmental impacts, technical challenges in monitoring and control, accurate assessment of resource potential, and regulatory and social acceptance further complicate geothermal projects. Addressing these multifaceted challenges is crucial for the successful and sustainable utilization of geothermal energy resources. This paper aims to illuminate the challenges and opportunities associated with THMC coupling in enhanced geothermal systems. Practical solutions and strategies for mitigating these challenges are discussed, emphasizing the need for interdisciplinary approaches, improved data collection and modeling techniques, and advanced monitoring and control systems. Overcoming these challenges is imperative for unlocking the full potential of geothermal energy and making a substantial contribution to the global energy transition and sustainable development.

Keywords: geothermal reservoirs, THMC coupling, interdisciplinary approaches, challenges and opportunities, sustainable utilization

Procedia PDF Downloads 38
152 Tunable Crystallinity of Zinc Gallogermanate Nanoparticles via Organic Ligand-Assisted Biphasic Hydrothermal Synthesis

Authors: Sarai Guerrero, Lijia Liu

Abstract:

Zinc gallogermanate (ZGGO) is a persistent phosphor that can emit in the near-infrared (NIR) range once doped with Cr³⁺, enabling its use for in-vivo deep-tissue bio-imaging. This property also allows for its application in cancer diagnosis and therapy. Given this, there is demand for work on developing a synthetic procedure that can be carried out using common laboratory instruments and equipment, as well as on understanding ZGGO overall. However, the ZGGO nanoparticles must have a size compatible with cell uptake while still maintaining sufficient photoluminescence. The nanoparticles must also be made biocompatible by functionalizing the surface for hydrophilic solubility and for high particle uniformity in the final product. Additionally, most research has been completed on doped ZGGO, leaving a gap in understanding the base form of ZGGO, as well as in understanding how doping affects its synthesis. In this work, the first step of optimizing the particle size via the crystallite size of ZGGO was carried out with undoped ZGGO, using the organic acid oleic acid (OA) for organic ligand-assisted biphasic hydrothermal synthesis. The effects of this synthesis procedure on the crystallinity of ZGGO were evaluated using powder X-ray diffraction (PXRD). OA was selected as the capping ligand because experiments have shown it to be beneficial in synthesizing sub-10 nm zinc gallate (ZGO) nanoparticles as well as palladium nanocrystals and magnetite (Fe₃O₄) nanoparticles. Later, it is possible to substitute OA with a different ligand, allowing for hydrophilic solubility. Attenuated Total Reflection Fourier-Transform Infrared (ATR-FTIR) spectroscopy was used to investigate the surface of the nanoparticles and verify that OA had capped them. PXRD results showed that this procedure led to improved crystallinity of the ZGGO nanoparticles, comparable to that achieved with high-purity reagents. There was also a change in the crystallite size of the ZGGO nanoparticles. ATR-FTIR showed that, once capped, ZGGO cannot be annealed, as doing so will affect the OA. These results indicate that the new procedure positively affects the crystallinity of ZGGO nanoparticles. The results are also repeatable, implying the procedure is a reliable source of highly crystalline ZGGO nanoparticles. With this completed, the next step will be substituting the OA with a hydrophilic ligand. As these ligands affect the solubility of the nanoparticles as well as the pH in which the nanoparticles can dissolve, further research is needed to verify which ligand is best suited for preparing ZGGO for bio-imaging.

Keywords: biphasic hydrothermal synthesis, crystallinity, oleic acid, zinc gallogermanate

Procedia PDF Downloads 103
151 Predicting Daily Patient Hospital Visits Using Machine Learning

Authors: Shreya Goyal

Abstract:

The study aims to build user-friendly software to understand patient arrival patterns and compute the number of potential patients who will visit a particular health facility over a given period by using a machine learning algorithm. The underlying machine learning algorithm used in this study is the Support Vector Machine (SVM). Accurate prediction of patient arrivals allows hospitals to operate more effectively, providing timely and efficient care while optimizing resources and improving the patient experience. It allows for better allocation of staff, equipment, and other resources. If there is a projected surge in patients, additional staff or resources can be allocated to handle the influx, preventing bottlenecks or delays in care. Understanding patient arrival patterns can also help streamline processes to minimize waiting times for patients and ensure timely access to care for patients in need. Another big advantage of using this software is adherence to strict data protection regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, as the hospital will not have to share the data with any third party or upload it to the cloud, because the software can read data locally from the machine. The data needs to be arranged in a particular format, and the software will be able to read the data and provide meaningful output. Using software that operates locally can facilitate compliance with these regulations by minimizing data exposure. Keeping patient data within the hospital's local systems reduces the risk of unauthorized access or breaches associated with transmitting data over networks or storing it in external servers. This can help maintain the confidentiality and integrity of sensitive patient information. Historical patient data is used in this study. The input variables used to train the model include patient age, time of day, day of the week, seasonal variations, and local events. The algorithm uses a supervised learning method to optimize the objective function and find the global minimum. The algorithm stores the value of the local minimum after each iteration and, at the end, compares all the local minima to find the global minimum. The strength of this study is the transfer function used to calculate the number of patients. The model has an output accuracy of >95%. The method proposed in this study could be used for better planning and management of personnel and medical resources.
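
A minimal sketch of an SVM-based daily-visit model of the kind described in this abstract is shown below, using scikit-learn's SVR on the listed input variables. The synthetic data, feature encoding, and kernel settings are illustrative assumptions, not the study's actual software or dataset.

```python
# Minimal sketch of an SVM-based daily patient-visit predictor using the abstract's
# input variables (time of day, day of week, season, local events). The synthetic
# data, feature encoding, and kernel settings are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 730  # two years of daily records
hour = rng.integers(0, 24, n)
day_of_week = rng.integers(0, 7, n)
season = rng.integers(0, 4, n)
local_event = rng.integers(0, 2, n)
# Toy ground truth: busier on weekdays, in winter, and when a local event occurs.
visits = (40 + 5 * (day_of_week < 5) + 8 * (season == 3)
          + 15 * local_event + rng.normal(0, 3, n))

X = np.column_stack([hour, day_of_week, season, local_event])
X_tr, X_te, y_tr, y_te = train_test_split(X, visits, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
print("MAE on held-out days:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```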

Keywords: machine learning, SVM, HIPAA, data

Procedia PDF Downloads 47
150 Shaping Students’ Futures: Evaluating Professors’ Effectiveness as Academic Advisors in Postsecondary Institutions

Authors: Mohamad Musa, Khaldoun Aldiabat, Chelsea McLellan

Abstract:

In higher education, academic advising and counseling are pivotal for guiding students towards successful academic and professional trajectories. Within this landscape, professors play a critical role as academic advisors, offering guidance and support to students navigating their educational journey. This study examines the effectiveness of professors in this capacity through a quantitative survey. The research objectives are to explore students' perceptions of professors' effectiveness as academic advisors, to identify the strengths and limitations of professors' advisory roles, to assess the impact of professors' engagement on students' academic accomplishment and satisfaction, and to examine the qualifications, training, and support mechanisms professors need to excel as advisors. Using a quantitative survey methodology, the research will gather students' perspectives on professors' advisory competencies, and statistical analysis of the survey responses will indicate how effectively professors perform as academic advisors. The survey instrument measures several dimensions, including students' satisfaction with advisory sessions, the perceived usefulness of the advice professors provide, and the overall influence of professors' involvement on academic success. The anticipated outcomes are a quantitative evaluation of professors' effectiveness in academic advisory roles and the identification of areas of proficiency and areas needing improvement within professors' advisory practices. These findings are intended to inform strategies for enhancing professors' advisory practices and optimizing the support systems available to students in higher education institutions. By moving beyond surface-level evaluations and examining the relationship between professors' involvement in academic advising and student academic outcomes, the study aims to shed light on the mechanisms through which professors' guidance affects students' academic success, satisfaction, and overall educational experience.

Keywords: academic advising, professors, effectiveness, quantitative survey, student outcomes

Procedia PDF Downloads 13
149 A Decision-Support Tool for Humanitarian Distribution Planners in the Face of Congestion at Security Checkpoints: A Real-World Case Study

Authors: Mohanad Rezeq, Tarik Aouam, Frederik Gailly

Abstract:

In times of armed conflict, authorities place security checkpoints to control the flow of merchandise into and within areas of conflict. The flow of humanitarian trucks, added to the regular flow of commercial trucks and combined with complex security procedures, creates congestion and long waiting times at the security checkpoints. This increases distribution costs and causes shortages of relief aid for the affected population. Our research proposes a decision-support tool to assist planners and policymakers in building efficient plans for the distribution of relief aid, taking congestion at security checkpoints into account. The proposed tool is built around a multi-item humanitarian distribution planning model, developed with a multi-phase design science methodology, whose objective is to minimize distribution and backordering costs subject to capacity constraints that reflect congestion effects through nonlinear clearing functions. Using the 2014 Gaza War as a case study, we illustrate the application of the proposed tool, model the underlying relief-aid humanitarian supply chain, estimate clearing functions at different security checkpoints, and conduct computational experiments. The decision-support tool generated a shipment plan that was compared to two benchmarks in terms of total distribution cost, average lead time and work in progress (WIP) at security checkpoints, and average inventory and backorders at distribution centers. The first benchmark is the shipment plan generated by a fixed-capacity model, and the second is the actual shipment plan implemented by the planners during the armed conflict. According to our findings, modeling and optimizing supply chain flows reduces total distribution costs, average truck waiting times at security checkpoints, and average backorders compared to both the executed plan and the fixed-capacity model. Finally, scenario analysis shows that increasing capacity at security checkpoints can lower total operating costs by reducing the average lead time.
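
The abstract does not give the functional form of the clearing functions used; a common nonlinear clearing function from the production-planning literature, quoted here only as an illustrative form, relates the number of trucks a checkpoint can clear in a period to the work in progress at that checkpoint:

\[ X_t = \frac{C\,W_t}{k + W_t} \]

where X_t is the throughput (trucks cleared) in period t, W_t is the work in progress (trucks waiting plus arriving), C is the nominal checkpoint capacity, and k is a congestion parameter. Throughput only approaches C asymptotically as congestion grows, which is what makes the capacity constraint nonlinear.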

Keywords: humanitarian distribution planning, relief-aid distribution, congestion, clearing functions

Procedia PDF Downloads 60
148 Collaborative Data Refinement for Enhanced Ionic Conductivity Prediction in Garnet-Type Materials

Authors: Zakaria Kharbouch, Mustapha Bouchaara, F. Elkouihen, A. Habbal, A. Ratnani, A. Faik

Abstract:

Solid-state lithium-ion batteries have garnered increasing interest in modern energy research due to their potential for safer, more efficient, and more sustainable energy storage. Among the critical components of these batteries, the electrolyte plays a pivotal role, with LLZO garnet-based electrolytes showing particular promise. Garnet materials offer intrinsic advantages such as high Li-ion conductivity, a wide electrochemical stability window, and excellent compatibility with lithium metal anodes. However, optimizing ionic conductivity in garnet structures is a complex challenge, primarily because of the multitude of potential dopants that can be incorporated into the LLZO crystal lattice. This complexity of material design calls for a systematic method to find the most effective dopant combinations. This study highlights the utility of machine learning (ML) techniques in navigating this complex design space during the materials discovery process. Collaborators from the materials science and ML communities worked with a comprehensive dataset previously employed in a similar study and compiled from various literature sources. This dataset served as the foundation for an extensive data refinement phase involving careful error identification and correction, outlier removal, and garnet-specific feature engineering. This rigorous process substantially improved the dataset's quality, ensuring that it accurately captured the underlying physical and chemical principles governing garnet ionic conductivity. The data refinement effort led to a significant improvement in the predictive performance of the machine learning model, whose accuracy rose from 0.32 on the original dataset to 0.88 after refinement. This enhancement demonstrates the effectiveness of the interdisciplinary approach and underscores the substantial potential of machine learning techniques in materials science research.
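
As a rough illustration of the refinement-then-modeling workflow described above, the following Python sketch removes outliers, engineers a simple garnet-specific descriptor, and cross-validates a regressor. The file name, column names, outlier rule, and choice of model are assumptions, since the abstract does not specify them.

# Minimal sketch of a data-refinement pipeline; all names and rules are assumed.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("llzo_garnet_dataset.csv")  # hypothetical literature compilation

# 1) Drop duplicated and obviously incomplete literature entries.
df = df.drop_duplicates().dropna(subset=["ionic_conductivity"]).copy()

# 2) Remove outliers, e.g. conductivities more than 3 standard deviations
#    from the mean on a log scale.
log_sigma = np.log10(df["ionic_conductivity"])
df = df[np.abs(log_sigma - log_sigma.mean()) < 3 * log_sigma.std()].copy()

# 3) Garnet-specific feature engineering, e.g. Li occupancy per formula unit.
df["li_site_occupancy"] = df["li_per_formula_unit"] / 7.5  # assumed descriptor

features = ["li_site_occupancy", "dopant_ionic_radius",
            "dopant_concentration", "sintering_temperature"]  # assumed features
X, y = df[features], np.log10(df["ionic_conductivity"])

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("Cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())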

Keywords: lithium batteries, all-solid-state batteries, machine learning, solid state electrolytes

Procedia PDF Downloads 31
147 Optimizing the Field Emission Performance of SiNWs-Based Heterostructures: Controllable Synthesis, Core-Shell Structure, 3D ZnO/Si Nanotrees and Graphene/SiNWs

Authors: Shasha Lv, Zhengcao Li

Abstract:

Due to their CMOS compatibility, silicon-based field emission (FE) devices have attracted much attention as potential electron sources. The geometrical arrangement and dimensional features of aligned silicon nanowires (SiNWs) have a determining influence on the FE properties. We discuss a multistep template replication process of Ag-assisted chemical etching combined with polystyrene (PS) spheres to fabricate highly periodic and well-aligned silicon nanowires, whose diameter, aspect ratio, and density were then further controlled via dry oxidation and post-chemical treatment. The FE properties related to tip proximity and aspect ratio were systematically studied: a remarkable improvement of the FE properties was observed as the average nanowire tip interspace increased from 80 to 820 nm. Beyond adjusting SiNW dimensions and morphology, adding a secondary material whose properties complement the SiNWs can yield combined characteristics. Three different nanoheterostructures were fabricated to control the FE performance: NiSi/Si core-shell structures, ZnO/Si nanotrees, and graphene/SiNWs. We successfully fabricated high-quality NiSi/Si heterostructured nanowires with excellent conformality: nickel nanoparticles were first deposited onto the SiNWs, and a rapid thermal annealing process was then used to form the NiSi shell. In addition, we demonstrate a new and simple method for creating 3D nanotree-like ZnO/Si nanocomposites with a spatially branched hierarchical structure. Compared with the as-prepared SiNRs and ZnO NWs, the high-density ZnO NWs grown on SiNRs exhibited superior FE characteristics, and the FE enhancement was attributed to a band-bending effect and the geometrical morphology. Because the FE efficiency of a flat graphene sheet is low, we discuss an effective approach for fully controlling the diameter of uniform SiNWs so as to adjust the protrusions of a large-scale graphene sheet deposited on the SiNWs. The dependence of the FE performance on the uniformity and dimensions of the graphene protrusions supported on SiNWs was systematically clarified. The hybrid SiNWs/graphene structures with protrusions therefore provide a promising class of field emission cathodes.
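
Field emission data of this kind are conventionally analysed with the Fowler-Nordheim relation, in which the geometric effects discussed above (tip interspace, branching, graphene protrusions) enter through the field enhancement factor. The abstract does not state its analysis method, so the relation is quoted here only as the standard reference form:

\[ J = \frac{A\,\beta^{2}E^{2}}{\phi}\exp\!\left(-\frac{B\,\phi^{3/2}}{\beta E}\right) \]

where J is the emission current density, E is the applied macroscopic field, φ is the emitter work function, β is the field enhancement factor, and A and B are the Fowler-Nordheim constants.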

Keywords: field emission, silicon nanowires, heterostructures, controllable synthesis

Procedia PDF Downloads 252
146 Impact of Interventions on Brain Functional Connectivity in Young Male Basketball Players: A Comparative Study

Authors: Mohammad Khazaei, Reza Rostami, Hassan Gharayagh Zandi, Ruhollah Basatnia, Mahboubeh Ghayour Najafabadi

Abstract:

Introduction: This study examines the influence of different interventions on brain functional connectivity among young male basketball players. Given the importance of understanding how interventions affect cognitive functions in athletes, particularly in basketball, this research contributes to the growing body of knowledge in sports neuroscience. Methods: Three distinct groups were investigated: a Motivational Interview group, a Placebo Consumption group, and a Ritalin Consumption group. Brain functional connectivity was assessed in several frequency bands (Delta, Theta, Alpha, Beta1, Beta2, Gamma, and Total Band) before and after the interventions, and each group received the intervention corresponding to its assignment. Results: The findings revealed substantial differences in brain functional connectivity across the studied groups. The Motivational Interview group exhibited the most favorable outcomes in phase lag index (PLI, Total Band) connectivity, the Placebo Consumption group showed a marked effect on phase locking value (PLV, Alpha) connectivity, and the Ritalin Consumption group showed a considerable enhancement in imaginary coherency (imCoh, Total Band) connectivity. Discussion: The observed variations in brain functional connectivity underscore the nuanced effects of different interventions on young male basketball players. The enhanced connectivity in specific frequency bands suggests potential cognitive and performance improvements. Notably, the Motivational Interview and Placebo Consumption groups displayed distinct patterns, emphasizing the multifaceted nature of the interventions. These findings contribute to the understanding of tailored interventions for optimizing cognitive functions in young male basketball players. Conclusion: This study provides valuable insight into the relationship between interventions and brain functional connectivity in young male basketball players. Further research with larger samples and more sophisticated statistical analyses is recommended to corroborate and extend these initial findings. The implications of this study reach the broader field of sports neuroscience, aiding the development of targeted interventions for athletes in various disciplines.
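
For readers unfamiliar with the connectivity metrics reported above, the phase locking value (PLV) between two EEG channels is commonly defined as

\[ \mathrm{PLV} = \left|\frac{1}{N}\sum_{n=1}^{N} e^{\,i\left(\phi_{x}(t_n)-\phi_{y}(t_n)\right)}\right| \]

where φ_x and φ_y are the instantaneous phases of the two signals within a frequency band (e.g. obtained via the Hilbert transform) and N is the number of samples; PLV ranges from 0 (no phase coupling) to 1 (perfect phase locking). The abstract does not specify the exact estimators used, so this is given only as the standard definition; PLI and imCoh are related phase-based measures designed to be less sensitive to volume conduction.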

Keywords: electroencephalography, Ritalin, Placebo effect, motivational interview

Procedia PDF Downloads 34
145 Posterior Acetabular Fractures: Optimizing the Treatment by Enhancing Practical Skills

Authors: Olivera Lupescu, Taina Elena Avramescu, Mihail Nagea, Alexandru Dimitriu

Abstract:

Acetabular fractures represent a real challenge because of their impact on the long-term function of the hip joint and the risk of intra- and peri-operative complications, especially since they affect young, active people. Treating these fractures therefore requires certain skills that must be exercised, regarding both pre-operative planning and the execution of surgery. The authors retrospectively analyse 38 cases of acetabular fractures operated on through the posterior approach in our hospital between 01.01.2013 and 01.01.2015, for which complete medical records ensure a follow-up of 24 months, in order to establish the main causes of potential errors and to outline methods for preventing them. This target is included in the Erasmus+ project 'Collaborative learning for enhancing practical skills for patient-focused interventions in gait rehabilitation after orthopedic surgery (COR-skills)'. This paper analyses the pitfalls revealed by these cases, as well as the measures needed to enhance the practical skills of surgeons performing acetabular surgery. Pre-operative planning matched the intra- and post-operative outcome in 88% of the analysed points, rising from 72% at the beginning of the series to 94% in the last case, meaning that experience is very important in treating this injury. The main problems detected for the posterior approach were nervous complications in 3 cases, one of them a complete paralysis of the sciatic nerve that recovered 6 months after surgery; in 2 other cases, an intra-articular position of the screws was demonstrated by post-operative CT scans, so secondary screw removal was necessary. We analysed this incident as well, given the lack of information about the relationship between the screws and the joint with this approach. Septic complications appeared in 3 cases, 2 superficial and 1 deep (requiring implant removal). The most important problems were the reduction of the fractures and the positioning of the screws so as not to interfere with the articular space. In posterior acetabular fractures, complex pre-operative planning is important in order to achieve maximum treatment efficacy with minimum risk; optimal training of surgeons, focusing on the main points where mistakes can occur, ensures the success of the procedure as well as a favorable outcome for the patient.

Keywords: acetabular fractures, articular congruency, surgical skills, vocational training

Procedia PDF Downloads 188
144 Nutrition Program Planning Based on Local Resources in Urban Fringe Areas of a Developing Country

Authors: Oktia Woro Kasmini Handayani, Bambang Budi Raharjo, Efa Nugroho, Bertakalswa Hermawati

Abstract:

The prevalence of obesity and severe malnutrition in Indonesia increased between 2007 and 2013. Using local resources in nutrition program planning can improve program efficiency and help reach program goals. The aim of this research is to plan a nutrition program based on local resources for urban fringe areas in a developing country. The research used a qualitative approach, focusing on local resources including social capital, the social system, and the cultural system. The study was conducted in Mijen, Central Java, one of the urban fringe areas in Indonesia. Purposive and snowball sampling techniques were used to select participants, and a total of 16 participants took part in the study. Observation, interviews, focus group discussions, SWOT analysis, brainstorming, and the Miles and Huberman model were used to analyze the data. We identified several local resources, such as the contributions of nutrition cadres, social organizations, social financial resources, and the cultural and social systems. The outstanding contribution of the nutrition cadres is their participation and creativity in improving nutritional status. In addition, social organizations, such as the integrated health center for children (Pos Pelayanan Terpadu), can be engaged in nutrition program planning; this center is supported by the House of Nutrition to assist in program planning and to provide social support to families, neighbors, and communities as social capital. The study also found that cultural systems that show appreciation for well-nourished children offer a better way to address the problem of balanced nutrition. Social systems such as teamwork and mutual cooperation can also be potential resources to support nutrition programs and overcome associated problems. The effects of development in urban areas, such as the introduction of more green spaces, which improve the perceived status of local people, and new health services facilitated by residents and companies, can also serve as resources to support nutrition programs. Local resources in urban fringe areas can thus be used in planning nutrition programs. We recommend expanding partnerships with all stakeholders and empowering the community by optimizing the roles of nutrition care centers for children when planning nutrition programs.

Keywords: developing country, local resources, nutrition program, urban fringe

Procedia PDF Downloads 230