Search results for: single event upset
723 Economic Evaluation of Degradation by Corrosion of an On-Grid Battery Energy Storage System: A Case Study in Algeria Territory
Authors: Fouzia Brihmat
Abstract:
Economic planning models, which are used to design microgrids and distributed energy resources (DER), are the current norm for expressing such investment decisions. These models often decide both short-term DER dispatch and long-term DER investments. This research investigates the most cost-effective hybrid (photovoltaic-diesel) renewable energy system (HRES), based on Total Net Present Cost (TNPC), in an Algerian Saharan area that has a high potential for solar irradiation and a production capacity of 1 GWh. Lead-acid batteries have been around much longer and are easier to understand but have limited storage capacity. Lithium-ion batteries last longer and are lighter but are generally more expensive. By combining the advantages of each chemistry, we produce cost-effective, high-capacity battery banks that operate solely on AC coupling. The financial part of this research describes the corrosion process that occurs at the interface between the active material and the grid material of the positive plate of a lead-acid battery. The best-cost study for the HRES is completed with the assistance of the HOMER Pro MATLAB Link. Additionally, over the project's 20-year lifetime, the system is simulated at each time step. The model takes into consideration the decline in solar efficiency, changes in battery storage levels over time, and rises in fuel prices above the rate of inflation. The trade-off is that the model is more accurate, but the computation takes longer. We initially utilized the Optimizer to run the model without MultiYear in order to discover the best system architecture. The optimal system for the single-year scenario is the Danvest generator, which has 760 kW, 200 kWh of the necessary lead-acid storage, and a somewhat lower COE of $0.309/kWh.
Different scenarios that account for fluctuations in the gasified biomass generator's electricity production have been simulated, and various strategies to guarantee the balance between generation and consumption have been investigated. The technological optimization of the same system has been completed and is reviewed in a recent companion paper.
Keywords: battery, corrosion, diesel, economic planning optimization, hybrid energy system, lead-acid battery, multi-year planning, microgrid, price forecast, PV, total net present cost
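The cost ranking in this abstract rests on TNPC and the levelized cost of energy (COE). A minimal sketch of the discounting arithmetic behind such figures, using illustrative numbers rather than the study's actual cash flows:

```python
# TNPC and levelized COE, as computed by tools like HOMER: discount each
# year's cost to present value, then divide by discounted energy served.
# All figures below are illustrative, not the study's data.

def tnpc(annual_costs, discount_rate):
    """Sum of yearly costs discounted to present value."""
    return sum(c / (1 + discount_rate) ** t
               for t, c in enumerate(annual_costs, start=1))

def levelized_coe(annual_costs, annual_energy_kwh, discount_rate):
    """COE = discounted costs / discounted energy over the project life."""
    cost_pv = tnpc(annual_costs, discount_rate)
    energy_pv = tnpc([annual_energy_kwh] * len(annual_costs), discount_rate)
    return cost_pv / energy_pv

# hypothetical 20-year project with flat annual cost and energy served
costs = [150_000.0] * 20      # USD per year
energy = 500_000.0            # kWh served per year
print(round(levelized_coe(costs, energy, 0.08), 3))  # 0.3
```

With flat costs and flat energy the discount factors cancel; in the multi-year model described above (degrading PV output, escalating fuel price), the yearly entries differ and the discount rate matters.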
Procedia PDF Downloads 88
722 Prevalence of Rituximab Efficacy Over Immunosuppressants in Therapy of Systemic Sclerosis
Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva
Abstract:
Objectives. Rituximab (RTX) has shown a positive effect in the treatment of systemic sclerosis (SSc), but there are still not enough data comparing the effectiveness of RTX with immunosuppressants (IS). The aim of our study was to compare changes in lung function and skin score in SSc between two groups of patients (pts): one on RTX therapy (prescribed after the ineffectiveness of previous IS therapy) and one on IS therapy only. Methods. This study included 103 pts who received RTX in addition to previous therapy (group 1) and 65 pts who received IS and prednisolone (group 2). The mean follow-up period was 12.6±10.7 months. In group 1, the mean age was 47±12.9 years; 88 pts (84%) were female, and 55 pts (53%) had the diffuse cutaneous subset of the disease. The mean disease duration was 6.2±5.5 years. 82% of pts had interstitial lung disease (ILD), and 92% were positive for ANA, of whom 67% were positive for antitopoisomerase-1. All pts received prednisolone at a dose of 11.3±4.5 mg/day; 47% of them received IS at inclusion. The cumulative mean dose of RTX was 1.7±0.6 g. In group 2, the mean age was 50.8±13.8 years; 53 pts (82%) were female, and 44 pts (68%) had the diffuse cutaneous subset of the disease. The mean disease duration was 8.8±7.7 years. 81% of pts had ILD, and 88% were positive for ANA, of whom 58% were positive for antitopoisomerase-1. All pts received prednisolone at a dose of 8.69±4.28 mg/day; 57% of them received IS. 45% of pts received cyclophosphamide (CP), with a cumulative mean dose of 10.2±15.1 g. 30% of pts received D-penicillamine; the remaining pts were on mycophenolate mofetil or methotrexate therapy in single cases. The compared groups did not differ in the main demographic and clinical parameters. The results are presented as delta (Δ), the difference between the baseline parameter and the follow-up point. Results.
In group 1, there was an improvement in all outcome parameters: forced vital capacity (% predicted) increased, ΔFVC=4% (p=0.0004); diffusing capacity for carbon monoxide (% predicted) remained stable (ΔDLCO=0.1%); the Rodnan skin score improved, ΔmRss=3.4 (p=0.001); and the Activity index (EScSG-AI) decreased, ΔActivity index=1.7 (p=0.001). In group 2, the changes were insignificant (ΔFVC=-2.3%, ΔmRss=0.87, ΔActivity index=0.3), but there was a significant decrease in DLCO: ΔDLCO=-5.1% (p=0.001). Conclusion. The results of our study confirm the data on the positive effect of RTX in the complex therapy of pts with SSc (decrease in skin induration, increase in FVC, stabilization of DLCO). Meanwhile, pts on IS and prednisolone therapy showed worsening lung function and insignificant changes in the other clinical parameters. RTX could be considered a more effective option in the complex treatment of SSc in comparison with IS therapy.
Keywords: immunosuppressants, interstitial lung disease, systemic sclerosis, rituximab
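The reported Δ values and p-values correspond to simple baseline-versus-follow-up comparisons within each group. A hedged sketch of the paired t statistic behind such a test, using invented FVC values rather than the study's data:

```python
# Paired t statistic for per-patient differences x_i - y_i
# (e.g. follow-up vs baseline FVC). Numbers below are invented.
import math

def paired_t(x, y):
    """t = mean(d) / (sd(d) / sqrt(n)) for paired differences d = x - y."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# hypothetical follow-up vs baseline FVC (% predicted) for five patients
fvc_follow_up = [74.0, 70.0, 83.0, 75.0, 73.0]
fvc_baseline  = [70.0, 65.0, 80.0, 72.0, 68.0]
print(round(paired_t(fvc_follow_up, fvc_baseline), 2))  # 8.94
```

The p-value is then read from the t distribution with n-1 degrees of freedom (e.g. `scipy.stats.ttest_rel` does both steps at once).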
Procedia PDF Downloads 83
721 Assessment of the Change in Strength Properties of Biocomposites Based on PLA and PHA after 4 Years of Storage in a Highly Cooled Condition
Authors: Karolina Mazur, Stanislaw Kuciel
Abstract:
Polylactides (PLA) and polyhydroxyalkanoates (PHA) are the two groups of biodegradable and biocompatible thermoplastic polymers most commonly utilised in medicine and rehabilitation. The aim of this work is to determine the changes in strength properties and microstructure taking place in biodegradable polymer composites during long-term storage in a highly cooled environment (a freezer at -24°C) and to make an initial assessment of the durability of such biocomposites when used as single-use elements of rehabilitation or medical equipment. It is difficult to find any information relating to the feasibility of long-term storage of technical products made of PLA or PHA; nonetheless, when using these materials to make products such as casings of hair dryers, laptops or mobile phones, it is safe to assume that without storage in optimal conditions their degradation time might last even several years. SEM imaging and assessment of the strength properties (tensile, bending and impact testing) were carried out, and the density and water sorption of two polymers, PLA and PHA (NaturePlast PLE 001 and PHE 001), filled with cellulose fibres (corncob grain, Rehofix MK100, Rettenmaier & Söhne) up to 10 and 20% by mass were determined. The biocomposites had been stored at -24°C for 4 years. In order to identify the changes in strength properties and microstructure taking place after such a long time of storage, the results were compared with the results of the same tests carried out 4 years before. The results show a significant change in the fracture behaviour of the PHA composite with corncob grain: from ductile, with a developed fracture surface, when tensile testing was performed directly after injection moulding, to a more brittle state after 4 years of storage. This is confirmed by the strength tests, where a decrease in deformation at the point of fracture is observed.
The research showed that there is a way of storing medical devices made of PLA or PHA for a reasonably long time, as long as the required storage temperature is maintained. The decrease in mechanical properties found during tensile and bending testing of PLA was less than 10% of the tensile strength, while the modulus of elasticity and deformation at fracture slightly rose, which may indicate the beginning of degradation processes. The strength properties of PHA are even higher after 4 years of storage, although in that case the decrease in deformation at fracture is significant, reaching even 40%, which suggests its degradation rate is higher than that of PLA. The addition of natural particles in both cases only slightly increases the biodegradation.
Keywords: biocomposites, PLA, PHA, storage
Procedia PDF Downloads 265
720 Mini-Open Repair Using Ring Forceps Show Similar Results to Repair Using Achillon Device in Acute Achilles Tendon Rupture
Authors: Chul Hyun Park
Abstract:
Background: Repair using the Achillon device is a representative mini-open repair technique; however, the limitations of this technique include the need for special instruments and decreased repair strength. A modified mini-open repair using ring forceps might overcome these limitations. Purpose: This study was performed to compare the Achillon device with ring forceps in mini-open repairs of acute Achilles tendon rupture. Study Design: This was a retrospective cohort study with level of evidence 3. Methods: Fifty patients (41 men and 9 women) with acute Achilles tendon rupture on one foot were consecutively treated using mini-open repair techniques. The first 20 patients were treated using the Achillon device (Achillon group), and the subsequent 30 patients were treated using ring forceps (Forceps group). Clinical, functional, and isokinetic results and postoperative complications were compared between the two groups at the last follow-up. Clinical evaluations were performed using the American Orthopedic Foot and Ankle Society (AOFAS) score, the Achilles tendon Total Rupture Score (ATRS), length of incision, and operation time. Functional evaluations included active range of motion (ROM) of the ankle joint, maximum calf circumference (MCC), the hopping test, and the single limb heel-rise (SLHR) test. Isokinetic evaluations were performed using the isokinetic test for ankle plantar flexion. Results: The AOFAS score (p=0.669), ATRS (p=0.753), and length of incision (p=0.305) were not significantly different between the groups. Operative time in the Achillon group was significantly shorter than that in the Forceps group (p<0.001). The maximum height of SLHR (p=0.023) and the number of SLHRs (p=0.045) in the Forceps group were significantly greater than those in the Achillon group. No significant differences in the mean peak torques for plantar flexion at angular speeds of 30°/s (p=0.219) and 120°/s (p=0.656) were detected between the groups.
There was no significant difference in the occurrence of postoperative complications between the groups (p=0.093). Conclusion: The ring forceps technique is comparable with the Achillon technique with respect to clinical, functional, and isokinetic results and postoperative complications. Given that no special instrument is required, the ring forceps technique could be a better option for acute Achilles tendon rupture repair.
Keywords: achilles tendon, acute rupture, repair, mini-open
Procedia PDF Downloads 81
719 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
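As a sketch of the Random Forest regression and feature-importance reading described in this abstract: the data, feature names, and coefficients below are synthetic stand-ins, not the case-study dataset.

```python
# Random Forest on synthetic activity-level data: the model should rank
# the features that actually drive the overrun (scope changes, material
# delays) above an unrelated feature, mirroring the study's use of
# feature importances to identify cost drivers.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
# columns: scope_changes, material_delay, unrelated_noise (all invented)
X = rng.uniform(0.0, 1.0, size=(n, 3))
y = 50.0 * X[:, 0] + 30.0 * X[:, 1] + rng.normal(0.0, 2.0, n)  # overrun, $k

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.feature_importances_.argmax())  # 0 -> scope_changes dominates
```

The same fit/predict interface applies to the neural-network variant (e.g. `sklearn.neural_network.MLPRegressor`), which trades interpretable importances for the ability to model non-linear interactions.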
Procedia PDF Downloads 54
718 Flexible, Hydrophobic and Mechanically Strong Poly(Vinylidene Fluoride): Carbon Nanotube Composite Films for Strain-Sensing Applications
Authors: Sudheer Kumar Gundati, Umasankar Patro
Abstract:
Carbon nanotube (CNT)–polymer composites have been extensively studied due to their exceptional electrical and mechanical properties. In the present study, poly(vinylidene fluoride) (PVDF)–multi-walled CNT composites were prepared by a melt-blending technique using pristine CNTs (ufCNT) and CNTs modified by dilute nitric acid treatment (fCNT). Owing to this dilute acid treatment, the fCNTs showed significantly improved dispersion while retaining their electrical properties. The fCNT showed an electrical percolation threshold (PT) of 0.15 wt% in the PVDF matrix, as against 0.35 wt% for ufCNT. The composites were made into films of thickness ~0.3 mm by compression molding, and the resulting composite films were subjected to various property evaluations. It was found that the water contact angle (WCA) of the films increased with CNT weight content, and the composite film surface became hydrophobic (e.g., WCA ~104° for 4 wt% ufCNT and 111.5° for 0.5 wt% fCNT composites), while the neat PVDF film showed hydrophilic behavior (WCA ~68°). Significant enhancements in the mechanical properties were observed upon CNT incorporation, with a progressive increase in tensile strength and modulus with increasing CNT weight fraction. The composite films were then tested for strain-sensing applications. For this, a simple and non-destructive method was developed to demonstrate the strain-sensing properties of the composite films: the change in electrical resistance under bending strain applied by oscillation was measured using a digital multimeter. It was found that under dynamic bending strain there is a systematic change in resistance, and the films showed piezo-resistive behavior. Due to the high flexibility of these composite films, the change in resistance was reversible and only marginally affected when a large number of tests was performed on a single specimen.
It is interesting to note that composites with CNT contents near the percolation threshold (PT), regardless of CNT type, showed better strain-sensing properties than composites with CNT contents well above the PT. On account of this excellent combination of properties, the composite films offer great promise as strain sensors for structural health monitoring.
Keywords: carbon nanotubes, electrical percolation threshold, mechanical properties, poly(vinylidene fluoride), strain-sensor, water contact angle
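The resistance-change measurement described above is conventionally summarized by a gauge factor, the relative resistance change per unit strain. A minimal sketch with hypothetical resistance readings (the abstract does not report numerical values):

```python
# Gauge factor of a piezo-resistive strain sensor:
#   GF = (dR / R0) / strain
# All resistance and strain values below are hypothetical.

def gauge_factor(r0, r, strain):
    """Relative resistance change per unit applied strain."""
    return ((r - r0) / r0) / strain

r0 = 10_000.0   # ohms, unstrained film
r = 10_150.0    # ohms, read at 0.5% bending strain
print(round(gauge_factor(r0, r, 0.005), 2))  # 3.0
```

Near the percolation threshold, small strains disrupt a sparse conductive network and give larger relative resistance changes, which is consistent with the abstract's observation that near-PT compositions sense strain better than heavily loaded ones.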
Procedia PDF Downloads 246
717 Organizational Stress in Women Executives
Authors: Poornima Gupta, Sadaf Siraj
Abstract:
The study examined the organizational causes of stress in women executives and entrepreneurs in India, so that mediation strategies could be developed to combat the organizational stress they experience, in order to retain female employees as well as attract quality talent. The data for this research were collected through a self-administered survey of women executives across various industries, working at different levels of management. The research design was descriptive and cross-sectional. It was carried out through a self-administered questionnaire filled in by women executives and entrepreneurs in the NCR region. Multistage sampling involving stratified random sampling was employed. A total of 1000 questionnaires were distributed, of which 450 were returned; after cleaning the data, 404 were fit for analysis. The overall findings suggested that various job-related factors induce stress. By applying factor analysis, fourteen factors were identified as major causes of stress among the working women. The study also assessed the demographic factors that influence stress in women executives across various industries. The findings show that the women were, no doubt, stressed by organizational factors. The mean stress score was 153 (out of a possible 196), indicating high stress. There appeared to be an inverse relationship between marital status, age, education, and work experience on the one hand and stress on the other. Married women were less stressed than single women employees. Similarly, female employees aged 29 years or younger experienced more stress at work. Women with education up to the 12th standard or less were more stressed than graduates and postgraduates. Women who had spent more than two years in the same organization perceived more stress than their counterparts.
Family size and income, interestingly, had no significant impact on stress. The study also established that the level of stress experienced by women differs considerably across industries. The banking sector emerged as the industry where women experienced the most stress, followed by entrepreneurship, medical, BPO, advertising, government, academics, and manufacturing, in that order. The results contribute to a better understanding of the personal and economic factors surrounding job stress and working women. The study concludes that organizations need to be sensitive to women's needs. Organizations are traditionally designed around men, with rules made by men for men. Involvement of women in top positions and decision making would make them feel more useful and less stressed. The invisible glass ceiling causes more stress than is realized among women. Less distinction between men and women colleagues in terms of assigning responsibilities, involvement in decision making, framing policies, etc. would go a long way toward reducing stress in women.
Keywords: women, stress, gender in management, women in management
Procedia PDF Downloads 257
716 Understanding the Role of Concussions as a Risk Factor for Multiple Sclerosis
Authors: Alvin Han, Reema Shafi, Alishba Afaq, Jennifer Gommerman, Valeria Ramaglia, Shannon E. Dunn
Abstract:
Adolescents engaged in contact sports can suffer recurrent brain concussions with no loss of consciousness and no need for hospitalization, yet they face the possibility of long-term neurocognitive problems. Recent studies suggest that head concussive injuries during adolescence can also predispose individuals to multiple sclerosis (MS). The underlying mechanisms of how brain concussions predispose to MS are not understood. Here, we hypothesize that: (1) recurrent brain concussions prime microglial cells, the tissue-resident myeloid cells of the brain, setting them up for exacerbated responses when exposed to additional challenges later in life; and (2) brain concussions lead to the sensitization of myelin-specific T cells in the peripheral lymphoid organs. To address these hypotheses, we implemented a mouse model of closed head injury that uses a weight-drop device. First, we calibrated the model in 12-week-old male mice and established that a weight drop from a 3 cm height induced mild neurological symptoms (mean neurological score of 1.6±0.4 at 1 hour post-injury) from which the mice fully recovered by 72 hours post-trauma. Then, we performed immunohistochemistry on the brains of concussed mice at 72 hours post-trauma. Although the mice had recovered from all neurological symptoms, immunostaining for leukocytes (CD45) and IBA-1 revealed no peripheral immune infiltration but an increase in the intensity of IBA-1+ staining compared to uninjured controls, suggesting that resident microglia had acquired a more active phenotype. This microglial activation was most apparent in the white matter tracts of the brain and in the olfactory bulb. Immunostaining for the microglia-specific homeostatic marker TMEM119 showed a reduction in TMEM119+ area in the brains of concussed mice compared to uninjured controls, confirming a loss of this homeostatic signal by microglia after injury.
Future studies will test whether single or repetitive concussive injury can worsen or accelerate autoimmunity in male and female mice. Understanding these mechanisms will guide the development of timed and targeted therapies to prevent MS from getting started in people at risk.
Keywords: concussion, microglia, microglial priming, multiple sclerosis
Procedia PDF Downloads 102
715 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target
Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao
Abstract:
The high-energy-density state, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, is relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are well suited to heating matter as a trigger due to their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, it is easier to generate uniformly heated, large-volume material with proton and ion beams because of their highly localized energy deposition. With the construction of state-of-the-art high-power laser facilities, creating extreme conditions of high temperature and high density in the laboratory becomes possible. It has been demonstrated that, on a picosecond time scale, solid-density material can be isochorically heated to over 20 eV by an ultrafast proton beam generated from spherically shaped targets. For this technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally applied static fields or specially designed targets interacting with single or multiple laser pulses. In previous works, two co-propagating or counter-propagating laser pulses were employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization, and undesirable transverse instabilities over long time scales remain intractable for current experimental implementations. Here, a mechanism for the focusing of laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosive electric field originating mostly from the heavier carbon ions.
The lighter protons, viewed in the reference frame moving at the ion sound speed, are accelerated and effectively focused by this radially isotropic field. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with theory.
Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration
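A quick check that the quoted intensity sits deep in the relativistic regime (where the complete electron evacuation assumed above is plausible) is the normalized laser amplitude a0 = sqrt(I λ² / 1.37×10^18), with I in W/cm² and λ in µm. The 0.8 µm Ti:sapphire wavelength below is an assumption, as the abstract does not state it:

```python
# Normalized laser vector potential a0 for a linearly polarized pulse.
# a0 >> 1 indicates relativistic electron motion in the laser field.
import math

def a0(intensity_w_cm2, wavelength_um):
    """a0 = sqrt(I * lambda^2 / 1.37e18), I in W/cm^2, lambda in microns."""
    return math.sqrt(intensity_w_cm2 * wavelength_um ** 2 / 1.37e18)

# the abstract's intensity; 0.8 um wavelength is an assumed value
print(round(a0(2.42e21, 0.8), 1))  # 33.6
```

With a0 of order 30, the ponderomotive push far exceeds the electron rest-mass scale (a0 = 1), consistent with the sweep-out of all electrons and the purely ionic Coulomb field that drives the proton focusing.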
Procedia PDF Downloads 344
714 An Integrated Power Generation System Design Developed between Solar Energy-Assisted Dual Absorption Cycles
Authors: Asli Tiktas, Huseyin Gunerhan, Arif Hepbasli
Abstract:
Solar energy, with its abundance and cleanliness, is one of the prominent renewable energy sources in multigeneration energy systems, where various outputs, especially power, are produced together. In the literature, concentrated solar energy systems, an expensive technology, are mostly used in solar power plants where medium-to-high-capacity production outputs are achieved. In addition, although different methods have been developed and proposed for solar energy-supported integrated power generation systems by different investigators, absorption technology, one of the key points of the present study, has been used in these studies mainly for cooling. Unlike these common uses, this study designs a system in which a flat-plate solar collector (FPSC), a Rankine cycle, an absorption heat transformer (AHT), and an absorption cooling system (ACS) are integrated. The proposed system aims to produce medium-to-high-capacity electricity, heating, and cooling outputs using a technique different from the literature, with lower production costs than existing systems. With the proposed integrated system design, the average production costs based on electricity, heating, and cooling load production are 5-10% of the average production costs of similar-scale systems, which are 0.685 USD/kWh, 0.247 USD/kWh, and 0.342 USD/kWh, respectively. In the proposed design, this is achieved by first increasing the outlet temperature of the AHT and FPSC system, expanding the high-temperature steam leaving the absorber of the AHT system in the turbine down to the condenser temperature of the ACS system, then integrating it directly into the evaporator of that system and completing the AHT cycle. Through this proposed system, heating and cooling are carried out by completing the AHT and ACS cycles, respectively, while power generation is provided by the expansion in the turbine.
By using only a single generator to produce these three outputs together, the costs of additional boilers and an extra heat source are also saved. To demonstrate that the proposed system offers a more optimal solution, the techno-economic parameters obtained from energy, exergy, economic, and environmental analyses were compared with the parameters of similar-scale systems in the literature. The design parameters of the proposed system were determined through a parametric optimization study so as to exceed the maximum efficiency and effectiveness values, and reduce the production cost rates, of the compared systems.
Keywords: solar energy, absorption technology, Rankine cycle, multigeneration energy system
Procedia PDF Downloads 58
713 Study of the Hydrodynamics of Electrochemical Ion Pumping for Lithium Recovery
Authors: Maria Sofia Palagonia, Doriano Brogioli, Fabio La Mantia
Abstract:
In the last decade, lithium has become an important raw material in various sectors, in particular for rechargeable batteries. Its production is expected to keep growing, especially for mobile energy storage and electromobility. Until now, it has mostly been produced by the evaporation of water from salt lakes, which leads to huge water consumption, a large amount of waste, and a strong environmental impact. A new, clean, and faster electrochemical technique to recover lithium has recently been proposed: electrochemical ion pumping. It consists of capturing lithium ions from a feed solution by intercalation into a lithium-selective material, followed by releasing them into a recovery solution; both steps are driven by the passage of a current. In this work, a new configuration of the electrochemical cell is presented and used to study and optimize the intercalation of lithium ions as a function of the hydrodynamic conditions. Lithium manganese oxide (LiMn₂O₄) was used as a cathode to intercalate lithium ions selectively during reduction, while nickel hexacyanoferrate (NiHCF), used as an anode, releases positive ions. The effect of hydrodynamics on the process was studied by conducting the experiments at various fluxes of the electrolyte through the electrodes, in terms of charge circulated through the cell, captured lithium per unit mass of material, and overvoltage. The results show that flowing the electrolyte through the cell improves lithium capture, in particular at low lithium concentration. Indeed, in an Atacama feed solution at 40 mM lithium, the amount of lithium captured does not increase considerably with the electrolyte flux. In contrast, when the lithium ion concentration is 5 mM, the amount of lithium captured in a single capture cycle increases with increasing flux, leading to the conclusion that the slowest step in the process is the transport of lithium ions in the liquid phase.
Furthermore, an influence of the concentration of other cations in solution on the process performance was observed. In particular, lithium capture was performed at different concentrations of NaCl together with 5 mM LiCl, and the results show that the presence of NaCl limits the amount of lithium captured. Further studies can be performed to understand why the full capacity of the material is not reached at the highest flow rate. This is probably due to the porous structure of the material, since the liquid phase inside the pores is likely not affected by the convective flow. This work proves that electrochemical ion pumping, with a suitable hydrodynamic design, enables the recovery of lithium from feed solutions at lower concentrations than the sources currently exploited, down to 1 mM.
Keywords: desalination battery, electrochemical ion pumping, hydrodynamic, lithium
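The "captured lithium per charge circulated" metric above reduces, in the ideal case, to Faraday's law for a one-electron intercalation of Li⁺ into LiMn₂O₄. A minimal sketch with a hypothetical circulated charge:

```python
# Faraday's law for Li+ intercalation (z = 1 electron per ion):
#   moles of Li = efficiency * Q / F
# The charge value below is hypothetical, not the study's data.
F = 96485.0  # Faraday constant, C/mol

def moles_li(charge_coulombs, efficiency=1.0):
    """Moles of Li+ intercalated for a circulated charge Q, z = 1."""
    return efficiency * charge_coulombs / F

# e.g. 10 mA circulated for 1 hour = 36 C
print(round(moles_li(36.0) * 1000, 3))  # 0.373 mmol
```

The measured capture falling short of this ideal value (a coulombic efficiency below 1) is exactly the mass-transport limitation the abstract attributes to slow Li⁺ transport in the liquid phase and inside the pores.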
Procedia PDF Downloads 208
712 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL), of nominal 100 m depth, and can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the structure are of the same order of magnitude. Turbulence length scales are a measure of the average size of the energy-containing eddies and are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor's hypothesis that the convection velocity equals the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, depend largely on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas the length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between turbulence length scales and aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of velocity data measured at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, in contrast to the relationships derived from the similarity theory correlations in the ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
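As a rough illustration of the single-point estimate described above, the sketch below computes an integral time scale by integrating the autocorrelation of a velocity record to its first zero crossing, then applies Taylor's frozen-turbulence hypothesis (L = U·T). The AR(1) record, sampling rate, and mean speed are all invented for the example; real field data and ESDU-style corrections are not reproduced here.

```python
import numpy as np

def integral_length_scale(u, fs, mean_speed, max_lag=1500):
    """Integral length scale from a single-point velocity record:
    integrate the autocorrelation of the fluctuations to its first zero
    crossing (integral time scale T), then apply Taylor's hypothesis L = U*T."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    var = float(np.dot(u, u))
    acf = np.array([np.dot(u[: len(u) - k], u[k:]) / var for k in range(max_lag)])
    crossings = np.flatnonzero(acf <= 0.0)
    stop = int(crossings[0]) if crossings.size else max_lag
    T = acf[:stop].sum() / fs          # integral time scale [s], rectangle rule
    return mean_speed * T              # Taylor's hypothesis: L = U * T

# Synthetic velocity record: AR(1) noise with a known correlation time
rng = np.random.default_rng(0)
fs, U = 20.0, 8.0                      # sampling rate [Hz], mean speed [m/s]
u = np.empty(100_000)
u[0] = 0.0
for i in range(1, u.size):
    u[i] = 0.99 * u[i - 1] + rng.normal()
print(round(integral_length_scale(u, fs, U), 1))
```

The conversion from temporal lag to separation distance is linear in the convection velocity, which is the essence of Taylor's hypothesis used in the two-point analysis above.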
Procedia PDF Downloads 124
711 Integrating Reactive Chlorine Species Generation with H2 Evolution in a Multifunctional Photoelectrochemical System for Low Operational Carbon Emissions Saline Sewage Treatment
Authors: Zexiao Zheng, Irene M. C. Lo
Abstract:
Organic pollutants, ammonia, and bacteria are major contaminants in sewage, which may adversely impact ecosystems without proper treatment. Conventional wastewater treatment plants (WWTPs) are operated to remove these contaminants from sewage but suffer from high carbon emissions and are largely unable to remove emerging organic pollutants (EOPs). Herein, we have developed a low operational carbon emissions multifunctional photoelectrochemical (PEC) system for saline sewage treatment to simultaneously remove organic compounds, ammonia, and bacteria, coupled with H2 evolution. A reduced BiVO4 (r-BiVO4) with improved PEC properties due to the construction of oxygen vacancies and V4+ species was developed for the multifunctional PEC system. The PEC/r-BiVO4 process could treat saline sewage to meet local WWTPs’ discharge standard in 40 minutes at 2.0 V vs. Ag/AgCl and completely degrade carbamazepine (one of the EOPs), coupled with significant evolution of H2. A remarkable reduction in operational carbon emissions was achieved by the PEC/r-BiVO4 process compared with large-scale WWTPs, attributed to the restrained direct carbon emissions from the generation of greenhouse gases. Mechanistic investigation revealed that the PEC system could activate chloride ions in sewage to generate reactive chlorine species and facilitate •OH production, promoting contaminant removal. The PEC system exhibited operational feasibility at different pH values and total suspended solids concentrations and has outstanding reusability and stability, confirming its promising practical potential. The study combined the simultaneous removal of three major contaminants from saline sewage and H2 evolution in a single PEC process, demonstrating a viable approach to supplementing and extending the existing wastewater treatment technologies.
The study generated profound insights into the in-situ activation of existing chloride ions in sewage for contaminant removal and offered fundamental theories for applying the PEC system in sewage remediation with low operational carbon emissions. The developed PEC system can fit well with the future needs of wastewater treatment because of the following features: (i) low operational carbon emissions, benefiting the carbon neutrality process; (ii) higher quality of the effluent due to the elimination of EOPs; (iii) chemical-free operation of sewage treatment; (iv) easy reuse and recycling without secondary pollution.
Keywords: contaminants removal, H2 evolution, multifunctional PEC system, operational carbon emissions, saline sewage treatment, r-BiVO4 photoanodes
Procedia PDF Downloads 157
710 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
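The feature-importance idea described in this abstract can be sketched as follows. The feature names, synthetic data, and the dependence of the overrun on scope changes and material delays are all invented for illustration and are not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical project features; names are illustrative, not from the study.
rng = np.random.default_rng(42)
n = 500
scope_changes  = rng.poisson(2, n)            # number of scope-change orders
material_delay = rng.exponential(5, n)        # days of material-delivery delay
labour_hours   = rng.normal(10_000, 2_000, n)
weather_days   = rng.integers(0, 15, n)
X = np.column_stack([scope_changes, material_delay, labour_hours, weather_days])
# Synthetic overrun fraction, driven mainly by scope changes and delays
y = 0.04 * scope_changes + 0.02 * material_delay + rng.normal(0, 0.01, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
names = ["scope_changes", "material_delay", "labour_hours", "weather_days"]
ranked = sorted(zip(names, model.feature_importances_), key=lambda p: -p[1])
for name, imp in ranked:   # importances identify the dominant cost drivers
    print(f"{name}: {imp:.2f}")
```

On data constructed this way, the two deliberately influential features rise to the top of the ranking, which mirrors how the study surfaced scope changes and delivery delays as key cost drivers.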
Procedia PDF Downloads 39
709 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models
Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble
Abstract:
Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies would range from simple manipulations (aerobic growth phase, anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: that internal metabolite fluxes are governed by a pseudo-steady state and external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate
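The two-stage idea above (growth phase, then production phase, with a switch time to optimize) can be illustrated with a toy batch model. The kinetics, yields, and initial conditions below are invented for the sketch and are not the paper's succinate flux models; the point is only that productivity has an interior optimum in the switch time.

```python
import numpy as np

def batch_productivity(t_switch, t_end=40.0, dt=0.005):
    """Toy two-stage batch culture: the substrate uptake flux feeds biomass
    growth before t_switch and product formation after it. Michaelis-Menten
    uptake; all parameters are illustrative."""
    vmax, Km, Yx, Yp = 10.0, 0.5, 0.1, 0.8    # uptake kinetics and yields
    X, S, P, t = 0.05, 20.0, 0.0, 0.0         # biomass, substrate, product
    while t < t_end and S > 0.01:
        v = vmax * S / (Km + S) * X           # total uptake flux
        if t < t_switch:
            X += Yx * v * dt                  # stage 1: growth
        else:
            P += Yp * v * dt                  # stage 2: production
        S -= v * dt
        t += dt
    return P / t                              # volumetric productivity

switch_times = np.linspace(0.0, 5.0, 26)
prods = [batch_productivity(ts) for ts in switch_times]
best = switch_times[int(np.argmax(prods))]
print(round(float(best), 1), round(max(prods), 2))
```

Switching too early leaves too little biomass to convert substrate quickly; switching too late wastes substrate on growth, so the scan finds an interior optimum, mirroring the paper's finding that a small number of discrete flux stages already captures most of the achievable productivity.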
Procedia PDF Downloads 215
708 Epoxomicin Affects Proliferating Neural Progenitor Cells of Rat
Authors: Bahaa Eldin A. Fouda, Khaled N. Yossef, Mohamed Elhosseny, Ahmed Lotfy, Mohamed Salama, Mohamed Sobh
Abstract:
Developmental neurotoxicity (DNT) entails the toxic effects imparted by various chemicals on the brain during the early childhood period. As human brains are vulnerable during this period, various chemicals would have their maximum effects on brains during early childhood. Some toxicants have been confirmed to induce developmental toxic effects on the CNS, e.g. lead; however, most agents cannot be identified with certainty due to the limitations of the predictive toxicology models used. A novel alternative method that can overcome most of the limitations of conventional techniques is the use of the 3D neurosphere system. This in-vitro system can recapitulate most of the changes during the period of brain development, making it an ideal model for predicting neurotoxic effects. In the present study, we investigated the possible DNT of epoxomicin, which is a naturally occurring selective proteasome inhibitor with anti-inflammatory activity. Rat neural progenitor cells were isolated from rat embryos (E14) extracted from placental tissue. The cortices were aseptically dissected out from the brains of the fetuses and the tissues were triturated by repeated passage through a fire-polished constricted Pasteur pipette. The dispersed tissues were allowed to settle for 3 min. The supernatant was then transferred to a fresh tube and centrifuged at 1,000 g for 5 min. The pellet was placed in Hank’s balanced salt solution and cultured as free-floating neurospheres in proliferation medium. Two doses of epoxomicin (1 µM and 10 µM) were used in cultured neurospheres for a period of 14 days. For proliferation analysis, spheres were cultured in proliferation medium. After 0, 4, 5, 11, and 14 days, sphere size was determined by software analysis. The diameter of each neurosphere was measured and exported to an Excel file for further statistical analysis.
For viability analysis, trypsin-EDTA solution was added to the neurospheres for 3 min to dissociate them into a single-cell suspension, and viability was then evaluated by the Trypan Blue exclusion test. Epoxomicin was found to affect the proliferation and viability of neurospheres, and these effects were positively correlated with dose and exposure time. This study confirms the DNT effects of epoxomicin in the 3D neurosphere model. The effects on proliferation suggest possible gross morphologic changes, while the decrease in viability suggests possible focal lesions on exposure to epoxomicin during early childhood.
Keywords: neural progenitor cells, epoxomicin, neurosphere, medical and health sciences
Procedia PDF Downloads 426
707 Comparison of Phytochemicals in Grapes and Wine from Shenton Park Winery
Authors: Amanda Sheard, Garry Lee, Katherine Stockham
Abstract:
Introduction: Health benefits associated with wine consumption have been well documented; these include anticancer, anti-inflammatory, and cardiovascular protection. The majority of these health benefits have been linked to polyphenols found within wine and grapes. Once consumed, polyphenols exhibit free radical quenching capabilities. Environmental factors such as rainfall, temperature, CO2 levels and sunlight exposure have been shown to affect the polyphenol content of grapes. The objective of this work was to evaluate the effect of growing conditions on the antioxidant capacity of grapes obtained from a single plot vineyard in Perth. This was achieved through the analysis of samples using: oxygen radical antioxidant capacity (ORAC), cellular antioxidant activity (CAA) in human red blood cells, ICP-MS and ICP-OES, total polyphenols (PPs), and total flavonoids (FLa). The data obtained were compared to observed climate data. The 14 selected Vitis vinifera L. cultivars included Cabernet Franc, Cabernet Sauvignon, Carnelian, Chardonnay, Grenache, Malbec, Merlot, Orange Muscat, Roussanne, Sauvignon Blanc, Shiraz, Tempranillo, Verdelho, and Viognier. Results: Notable variations between red and white grape cultivars were found, with results ranging from 125–350 mg/100 g for PPs, 93–300 mg/100 g for FLa, 13–33 mM T.E/kg for ORAC, and 0.3–27 mM Q.E/kg for CAA. No correlation was found between CAA and ORAC in this study, except that white cultivars were consistently lower than red. ICP analysis showed that seeds contained the highest concentration of copper, followed by the skins and flesh of the grape. A positive correlation between copper and ORAC was found. The ORAC, PPs, and FLa in red grapes were consistently higher than in white grape cultivars; these findings were supported by literature values.
Significance: The cellular antioxidant activities of white and red wine cultivars were used to compare the bioactivity of these grapes against the chemical ORAC measurement. The common method of antioxidant activity measurement is the chemical value from ORAC analysis; however, this may not reflect the activity within the human body. Hence, the measurements were also carried out using the cellular antioxidant activity to perform a comparison. Additionally, the study explored the influence of weather systems such as El Niño and La Niña on the polyphenol content of Australian wine cultivars grown in Perth.
Keywords: oxygen radical antioxidant activity, cellular antioxidant activity, total polyphenols, total flavonoids, wine grapes, climate
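The copper–ORAC relationship reported above rests on a simple correlation analysis, which can be sketched from first principles. The paired values below are hypothetical stand-ins; the study's raw copper and ORAC measurements are not reproduced here.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measurements for a handful of cultivars
copper = [0.8, 1.1, 1.5, 1.9, 2.4, 2.6, 3.0]          # mg/kg
orac   = [14.0, 16.5, 19.0, 22.0, 27.0, 28.5, 32.0]   # mM T.E/kg
print(round(pearson_r(copper, orac), 3))
```

A value near +1 on data like this would correspond to the positive copper–ORAC association the abstract describes, whereas the reported lack of CAA–ORAC correlation would show up as a coefficient near zero.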
Procedia PDF Downloads 290
706 Supply Chain Design: Criteria Considered in Decision Making Process
Authors: Lenka Krsnakova, Petr Jirsak
Abstract:
Prior research on facility location in the supply chain is mostly focused on the improvement of mathematical models. This is because supply chain design has long been an area of operational research that emphasizes mainly quantitative criteria. Qualitative criteria are still highly neglected within supply chain design research. Facility location in the supply chain has become a multi-criteria decision-making problem rather than a single-criterion decision due to changing market conditions. Thus, both qualitative and quantitative criteria have to be included in the decision-making process. The aim of this study is to emphasize the importance of qualitative criteria as key parameters of relevant mathematical models. We examine which criteria are taken into consideration when Czech companies decide about their facility location. A literature review on the criteria used in the facility location decision-making process creates a theoretical background for the study. The data collection was conducted through a questionnaire survey. The questionnaire was sent to manufacturing and business companies of all sizes (small, medium and large enterprises) with representation in the Czech Republic within the following sectors: automotive, toys, clothing industry, electronics and pharmaceutical industry. A comparison is made between the criteria that prevail in current research and those considered important by companies in the Czech Republic. Despite the number of articles focused on supply chain design, only a minority of them consider qualitative criteria, and they rarely treat supply chain design as a multi-criteria decision-making problem. Preliminary results of the questionnaire survey indicate that companies in the Czech Republic see the qualitative criteria and their impact on facility location decisions as crucial.
Qualitative criteria such as company strategy, quality of the working environment or future development expectations are confirmed to be considered by Czech companies. This study confirms that qualitative criteria can significantly influence whether a particular location could or could not be the right place for a logistic facility. The research has two major limitations: researchers who focus on improving mathematical models mostly do not mention the criteria that enter the model, and Czech supply chain managers selected important criteria from a group of 18 available criteria and assigned them importance weights. It does not necessarily mean that these criteria were taken into consideration when the last facility location was chosen, but rather reflects how they perceive their importance today. Since the study confirmed the necessity of future research on how qualitative criteria influence the decision-making process about facility location, the authors have already started in-depth interviews with participating companies to reveal how the inclusion of qualitative criteria in the decision-making process about facility location influences the company's performance.
Keywords: criteria influencing facility location, Czech Republic, facility location decision-making, qualitative criteria
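One common way to combine quantitative and qualitative criteria with elicited importance weights, as the surveyed managers did, is a weighted-sum score per candidate site. The criteria, weights, and site scores below are invented for illustration; the survey's actual 18 criteria are not reproduced here.

```python
# Weighted-sum multi-criteria scoring of candidate facility locations.
# Weights and normalized scores are illustrative only.
criteria = {                       # decision makers' weights (sum to 1)
    "transport_cost":       0.30,  # quantitative
    "labour_availability":  0.20,  # quantitative
    "company_strategy_fit": 0.25,  # qualitative
    "working_environment":  0.15,  # qualitative
    "growth_expectations":  0.10,  # qualitative
}
sites = {                          # normalized scores in [0, 1], higher is better
    "Prague":  {"transport_cost": 0.6, "labour_availability": 0.9,
                "company_strategy_fit": 0.8, "working_environment": 0.7,
                "growth_expectations": 0.6},
    "Ostrava": {"transport_cost": 0.9, "labour_availability": 0.6,
                "company_strategy_fit": 0.5, "working_environment": 0.6,
                "growth_expectations": 0.8},
}
scores = {site: sum(criteria[c] * s[c] for c in criteria)
          for site, s in sites.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

With weights like these, a site that is weaker on the quantitative cost criterion can still win on qualitative grounds, which is exactly the effect the study argues mathematical models should capture.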
Procedia PDF Downloads 325
705 Association between Appearance Schemas and Personality
Authors: Berta Rodrigues Maia, Mariana Marques, Frederica Carvalho
Abstract:
Introduction: Personality traits are related to many forms of psychological distress, such as body dissatisfaction. Aim: To explore the associations between appearance schemas and personality traits. Method: 494 Portuguese university students (80.2% females, and 99.2% single), with a mean age of 20.17 years old (SD = 1.77; range: 18-20), filled in the appearance schemas inventory-revised, the NEO personality inventory (a Portuguese short version), and the composite multidimensional perfectionism scale. Results: An independent-samples t-test was conducted to compare the scores in appearance schemas by sex, with a significant difference being found in self-evaluation salience scores [females (M = 37.99, SD = 7.82); males (M = 35.36, SD = 6.60); t(489) = -3.052, p = .002]. Finally, there was no significant difference in motivational salience scores by sex [females (M = 27.67, SD = 4.84); males (M = 26.70, SD = 4.99); t(489) = -1.748, p = .081]. Having conducted correlations separately by sex, self-evaluation salience was positively correlated with concern over mistakes (r = .27), doubts about actions (r = .35), and socially prescribed perfectionism (r = .23). Moreover, for females, self-evaluation salience was positively correlated with concern over mistakes (r = .34), personal standards (r = .25), doubts about actions (r = .33), parental expectations (r = .24), parental criticism (r = .24), organization (r = .11), socially prescribed perfectionism (r = .31), self-oriented perfectionism (r = .32), and neuroticism (r = .33). Concerning motivational salience, in the total sample (not separately, by sex), this scale/dimension significantly correlated with conscientiousness (r = .18), personal standards (r = .23), socially prescribed perfectionism (r = .10), and self-oriented perfectionism (r = .29). All correlations were significant at a level of significance of 0.01 (2-tailed), except for socially prescribed perfectionism.
All the other correlations (with neuroticism, extroversion, openness, agreeableness, concern over mistakes, doubts about actions, parental expectations, and parental criticism) were not significant. Conclusions: Females seem to value their self-appearance more than males do, and, in females, the salience of appearance in life seems to be associated with maladaptive perfectionism, as well as with adaptive perfectionism. In males, the salience of appearance was only related to adaptive perfectionism. These results seem to show that males are more concerned with their own standards regarding appearance, while for females, others' standards are also relevant. In females, the level of the salience of appearance in life seems to relate to the experience of feelings such as anxiety and depression (neuroticism). The motivation to improve appearance seemed to be particularly related, in both sexes, to adaptive perfectionism (mainly concerning personal standards). Longitudinal studies are needed to clarify the causality of the results. Acknowledgment: This study was carried out under the strategic project of the Centre for Philosophical and Humanistic Studies (CEFH) UID/FIL/00683/2019, funded by the Fundação para a Ciência e a Tecnologia (FCT).
Keywords: appearance schemas, personality traits, university students, sex
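The reported t(489) = -3.052 can be approximately reproduced from the group summary statistics alone with the pooled-variance (Student's) independent-samples t formula. The male/female split used below (~80% of the 491 complete cases) is inferred from the sample description, not stated exactly in the abstract.

```python
import math

def t_from_summary(m1, s1, n1, m2, s2, n2):
    """Pooled-variance independent-samples t statistic from group summaries."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df
    t = (m1 - m2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return t, df

# Reported self-evaluation salience statistics: males vs. females
t, df = t_from_summary(35.36, 6.60, 95, 37.99, 7.82, 396)
print(round(t, 2), df)   # close to the reported t(489) = -3.052
```

The small residual difference from the published value comes from the inferred group sizes; the degrees of freedom (489) match the reported statistic.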
Procedia PDF Downloads 129
704 H2 Permeation Properties of a Catalytic Membrane Reactor in Methane Steam Reforming Reaction
Authors: M. Amanipour, J. Towfighi, E. Ganji Babakhani, M. Heidari
Abstract:
A cylindrical alumina microfiltration membrane (GMITM Corporation, inside diameter = 9 mm, outside diameter = 13 mm, length = 50 mm) with an average pore size of 0.5 micrometer and porosity of about 0.35 was used as the support for the membrane reactor. This support was soaked in boehmite sols, in which the mean particle size was adjusted in the range of 50 to 500 nm by carefully controlling hydrolysis time, and was then calcined at 650 °C for two hours. This process was repeated with different boehmite solutions in order to achieve an intermediate layer with an average pore size of about 50 nm. The resulting substrate was then coated with a thin and dense layer of silica by the counter-current chemical vapour deposition (CVD) method. A boehmite sol with 10 wt.% of nickel, which was prepared by a standard procedure, was used to make the catalytic layer. BET, SEM, and XRD analyses were used to characterize this layer. The catalytic membrane reactor was placed in an experimental setup to evaluate the permeation and hydrogen separation performance for a steam reforming reaction. The setup consisted of a tubular module in which the membrane was fixed, and the reforming reaction occurred on the inner side of the membrane. A methane stream diluted with nitrogen, and deionized water at a steam-to-carbon (S/C) ratio of 3.0, entered the reactor after the reactor was heated to 500 °C at a rate of 2 °C/min and the catalytic layer was reduced in the presence of hydrogen for 2.5 hours. A nitrogen flow was used as sweep gas on the outer side of the reactor. Any liquid produced was trapped and separated at the reactor exit by a cold trap, and the produced gases were analyzed by an on-line gas chromatograph (Agilent 7890A) to measure total CH4 conversion and H2 permeation. BET analysis indicated a uniform size distribution for the catalyst, with an average pore size of 280 nm and an average surface area of 275 m2/g.
Single-component permeation tests were carried out for hydrogen, methane, and carbon dioxide over a temperature range of 500-800 °C, and the results showed almost the same hydrogen permeance and selectivity values as the composite membrane without the catalytic layer. The performance of the catalytic membrane was evaluated by applying the membrane as a membrane reactor for the methane steam reforming reaction at a gas hourly space velocity (GHSV) of 10,000 h−1 and 2 bar. CH4 conversion increased from 50% to 85% with increasing reaction temperature from 600 °C to 750 °C, which is well above the equilibrium curve at the reaction conditions, but slightly lower than a membrane reactor with a packed nickel catalytic bed, owing to the bed's higher surface area compared to the catalytic layer.
Keywords: catalytic membrane, hydrogen, methane steam reforming, permeance
Procedia PDF Downloads 256
703 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada
Authors: Bilel Chalghaf, Mathieu Varin
Abstract:
Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 tall tree species (>17 m) at the individual-tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77).
With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR
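The semi-hierarchical idea above (a first classifier for tree type, then per-type classifiers for species) can be sketched as below. The features, class centres, and species names are synthetic stand-ins, not the study's WorldView-3/LiDAR variables.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def make_crowns(n, centre, species):
    """Synthetic crown objects: 4 illustrative variables per crown, with one
    band shifted by species so the per-type classifiers have signal."""
    X = rng.normal(centre, 1.0, size=(n, 4))
    labels = rng.choice(species, size=n)
    X[:, 0] += [3 * species.index(s) for s in labels]
    return X, labels

X_b, y_b = make_crowns(300, 0.0, ["maple", "oak", "beech"])   # broadleaf
X_c, y_c = make_crowns(300, 8.0, ["pine", "spruce"])          # conifer
X_all = np.vstack([X_b, X_c])
y_type = np.array(["broadleaf"] * 300 + ["conifer"] * 300)

level1 = SVC().fit(X_all, y_type)                 # level 1: tree type
level2 = {"broadleaf": SVC().fit(X_b, y_b),       # level 2: species per type
          "conifer": SVC().fit(X_c, y_c)}

def classify(x):
    tree_type = level1.predict(x[None, :])[0]
    return tree_type, level2[tree_type].predict(x[None, :])[0]

acc = float((level1.predict(X_all) == y_type).mean())
print(classify(X_all[0]), round(acc, 2))
```

Splitting the problem this way lets each level use a smaller, easier decision boundary, which is one reason the study's hierarchical variant could reach perfect tree-type accuracy while the global 11-class model could not.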
Procedia PDF Downloads 134
702 Digital Image Correlation: Metrological Characterization in Mechanical Analysis
Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano
Abstract:
Digital Image Correlation (DIC) is a recently developed optical technique that is spreading across all engineering sectors because it allows the non-destructive estimation of the entire surface deformation without any contact with the component under analysis. These characteristics make DIC very appealing in all cases where the global deformation state is to be known without using strain gauges, which are the most commonly used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, making it possible to obtain high-definition maps of displacements and deformations. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic materials as well as composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states can be acquired through the cameras. Dedicated software analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during the deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets, in both static and dynamic loading conditions, by comparing DIC and strain gauge measurements.
In the static test, excellent agreement was found between the two measuring techniques. In addition, the deformation detected by the DIC is compliant with the result of a FEM simulation. In the dynamic test, the DIC was able to follow the periodic deformation of the specimen with good accuracy, giving results consistent with those of the FEM simulation. In both situations, it was seen that the DIC measurement accuracy depends on several parameters, such as the optical focusing, the parameters chosen to perform the mutual correlation between the images, and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with industry requirements, especially those of the aerospace sector.
Keywords: accuracy, deformation, image correlation, mechanical analysis
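The mutual-correlation step described above can be sketched as a subset search: a patch from the reference image is located in the deformed image by maximizing zero-normalized cross-correlation (ZNCC) over candidate integer displacements. Real DIC software adds sub-pixel interpolation and subset shape functions, which are omitted from this minimal sketch; the speckle image and rigid shift below are synthetic.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def track_subset(ref, defo, top, left, size, search):
    """Find the integer displacement of one reference subset in the deformed
    image by exhaustive ZNCC search over a (2*search+1)^2 window."""
    patch = ref[top:top + size, left:left + size]
    best, best_uv = -2.0, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            r, c = top + dv, left + du
            if 0 <= r and r + size <= defo.shape[0] and 0 <= c and c + size <= defo.shape[1]:
                score = zncc(patch, defo[r:r + size, c:c + size])
                if score > best:
                    best, best_uv = score, (du, dv)
    return best_uv   # (horizontal, vertical) displacement in pixels

# Synthetic speckle pattern rigidly shifted by 3 px right and 2 px down
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
defo = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
print(track_subset(ref, defo, 20, 20, 15, 5))
```

Repeating this search over a grid of subsets yields the displacement map from which the full-field strains are then differentiated; the choice of subset size and correlation parameters is exactly the accuracy-driving factor the paper identifies.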
Procedia PDF Downloads 311
701 A Look into Surgical Site Infections: Impact of Collective Interventions
Authors: Lisa Bennett, Cynthia Walters, Cynthia Argani, Andy Satin, Geeta Sood, Kerri Huber, Lisa Grubb, Woodrow Noble, Melissa Eichelberger, Darlene Zinalabedini, Eric Ausby, Jeffrey Snyder, Kevin Kirchoff
Abstract:
Background: Surgical site infections (SSIs) within the obstetric population pose a variety of complications, creating clinical and personal challenges for the new mother and her neonate during the postpartum period. Our journey to achieve compliance with the SSI core measure for cesarean sections revealed many opportunities to improve these outcomes. Objective: Achieve and sustain core measure compliance, keeping surgical site infection rates below the national benchmark pooled mean of 1.8% in post-operative patients who delivered via cesarean section at the Johns Hopkins Bayview Medical Center. Methods: A root cause analysis was performed and revealed several environmental, pharmacologic, and clinical practice opportunities for improvement. A multidisciplinary approach led by the OB Safety Nurse, OB Medical Director, and Infectious Disease Department resulted in the implementation of fourteen interventions over a twenty-month period. Interventions included: post-operative dressing changes, standardizing operating room attire, broadening pre-operative antibiotics, initiating vaginal preps, improving operating room terminal cleaning, testing air quality, and re-educating scrub technicians on technique. Results: Prior to the implementation of our interventions, the quarterly SSI rate in Obstetrics peaked at 6.10%. Although no single intervention resulted in dramatic improvement, after implementation of all fourteen interventions, the quarterly SSI rate has subsequently ranged from 0.0% to 2.70%. Significance: Taking an introspective look at current practices can reveal opportunities for improvement that were not previously considered. Collectively, these interventions have produced a significant decrease in surgical site infection rates.
The impact of this quality improvement project highlights the synergy created when members of the multidisciplinary team work in collaboration to improve patient safety and achieve a high quality of care.
Keywords: cesarean section, surgical site infection, collaboration and teamwork, patient safety, quality improvement
Procedia PDF Downloads 482
700 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, while the inner thermal-core structure of tropical cyclones remains challenging to study and exploit. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal-core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal-core information along the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal-core images that represent the tropical cyclone. In the first stage, these images yield a coarse-grained wind-speed range estimate. Then, based on this result, fine-grained features are extracted by combining thermal-core information from multiple view profiles with a distributed network, and these are fused with the coarse-grained features from the first stage to obtain the final two-stage wind-speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major classes: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal-core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with these data. Keywords: artificial intelligence, deep learning, data mining, remote sensing
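The focal loss mentioned above for handling long-tail distributions can be illustrated with a minimal self-contained sketch (the γ = 2 and α = 0.25 values are common defaults assumed for illustration, not the paper's settings):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # Binary focal loss: the (1 - p_t)^gamma factor down-weights well-classified
    # examples, focusing training on the rare, hard (long-tail) cases.
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

def cross_entropy(p, y):
    # Plain cross-entropy for comparison
    p_t = p if y == 1 else 1.0 - p
    return -math.log(p_t)

# An easy positive (p = 0.9) vs. a hard positive (p = 0.3):
easy_fl, hard_fl = focal_loss(0.9, 1), focal_loss(0.3, 1)
easy_ce, hard_ce = cross_entropy(0.9, 1), cross_entropy(0.3, 1)
# Relative to cross-entropy, focal loss shrinks the easy example's
# contribution far more than the hard example's.
```

The hard-to-easy loss ratio is much larger under focal loss than under cross-entropy, which is the property that helps with imbalanced (long-tail) intensity classes.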
Procedia PDF Downloads 63
699 Safety Evaluation of Intramuscular Administration of Zuprevo® Compared to Draxxin® in the Treatment of Swine Respiratory Disease at Weaning Age
Authors: Josine Beek, S. Agten, R. Del Pozo, B. Balis
Abstract:
The objective of the present study was to compare the safety of intramuscular administration of Zuprevo® (tildipirosin, 40 mg/mL) with Draxxin® (tulathromycin, 100 mg/mL) in the treatment of swine respiratory disease at weaning age. The trial was carried out in two farrow-to-finish farms with 300 sows (farm A) and 500 sows (farm B) in a batch-production system. Farm A had no history of respiratory problems, whereas farm B had a history of respiratory outbreaks with increased mortality ( > 2%) in the nursery. Both farms were positive for Pasteurella multocida, Bordetella bronchiseptica, Actinobacillus pleuropneumoniae and Haemophilus parasuis. From each farm, one batch of piglets was included (farm A: 644 piglets; farm B: 963 piglets). One day before weaning (day 0; 18-21 days of age), piglets were identified by an individual ear tag and randomly assigned to a treatment group. At day 0, group 1 was treated with a single intramuscular injection of Zuprevo® (tildipirosin, 40 mg/mL; 1 mL/10 kg) and group 2 with Draxxin® (tulathromycin, 100 mg/mL; 1 mL/40 kg). For practical reasons, the dosage was adjusted according to three weight categories: < 4 kg, 4-6 kg and > 6 kg. Within each farm, piglets of both groups were commingled at weaning and subsequently managed in the same facilities under identical environmental conditions. The study covered the period from day 0 until 10 weeks of age. Safety of treatment was evaluated by 1) visual examination for signs of discomfort directly after treatment and after 15 min, 1 h and 24 h, and 2) mortality rate within 24 h after treatment. Efficacy of treatment was evaluated based on mortality rate from day 0 until 10 weeks of age. Each piglet that died during the study period was necropsied by the herd veterinarian to determine the probable cause of death. Data were analyzed using binary logistic regression and differences were considered significant if p < 0.05. The pig was the experimental unit.
In total, 848 piglets were treated with tildipirosin and 759 piglets with tulathromycin. In farm A, one piglet with retarded growth ( < 1 kg at 18 days of age) showed an adverse reaction after injection of tildipirosin: lateral recumbency and dullness for approximately 30 seconds. The piglet recovered after 1-2 min. This adverse reaction was probably due to overdosing (12 mg/kg). No adverse effect of treatment was observed in any other piglet. There was no mortality within 24 h after treatment. No significant difference was found in mortality rate between the two groups from day 0 until 10 weeks of age. In farm A, the overall mortality rate was 0.3% (2/644). In farm B, the mortality rate was 0.2% (1/502) in group 1 (tildipirosin) and 0.9% (4/461) in group 2 (tulathromycin) (p = 0.60). The necropsy of piglets that died during the study period revealed no macroscopic lesions of the respiratory tract. In conclusion, Zuprevo® (tildipirosin, 40 mg/mL) was shown to be a safe and efficacious alternative to Draxxin® (tulathromycin, 100 mg/mL) for the early treatment of swine respiratory disease at weaning age. Keywords: antibiotic treatment, safety, swine respiratory disease, tildipirosin
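The reported group mortality percentages for farm B follow directly from the counts in the abstract; a trivial sketch of the arithmetic:

```python
# Deaths and group sizes from day 0 to 10 weeks of age (farm B, as reported)
groups = {"tildipirosin": (1, 502), "tulathromycin": (4, 461)}

def mortality_pct(deaths, n):
    # Percentage mortality, rounded to one decimal place as in the abstract
    return round(100.0 * deaths / n, 1)

rates = {name: mortality_pct(d, n) for name, (d, n) in groups.items()}
# Reproduces the reported 0.2% (group 1) vs. 0.9% (group 2)
```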
Procedia PDF Downloads 395
698 Term Creation in Specialized Fields: An Evaluation of Shona Phonetics and Phonology Terminology at Great Zimbabwe University
Authors: Peniah Mabaso-Shamano
Abstract:
The paper evaluates Shona terms that were created to teach Phonetics and Phonology courses at Great Zimbabwe University (GZU). The phonetics and phonology terms discussed in this paper were created using different processes and strategies such as translation, borrowing, neologising, compounding, transliteration and circumlocution, among many others. Most phonetics and phonology terms are alien to Shona and, as a result, there are no suitable Shona equivalents. The lecturers and students for these courses have the mammoth task of creating terminology for the different modules offered in Shona and other Zimbabwean indigenous languages. Most linguistic reference books are written in English. As such, lecturers and students translate information from English to Shona, a task that is proving difficult for them. A term creation workshop was held at GZU to try to address the problem of the lack of terminology in indigenous languages. Indigenous language practitioners from different tertiary institutions convened for the two-day workshop at GZU. Due to the 'specialized' nature of phonetics and phonology, it proved very difficult to come up with 'proper' indigenous terms. The researcher will consult tertiary institution lecturers who teach linguistics courses, as well as linguistics students, to get their views on the created terms. Those consulted will not be the ones who took part in the term creation workshop held at GZU. The selected participants will be asked to evaluate and back-translate some of the terms. In instances where they feel the terms created are not suitable or user-friendly, they will be asked to suggest other terms. Since the researcher is also a linguistics lecturer, her observations and views will be important. From her experience in using some of the terms to teach phonetics and phonology to undergraduate students, the researcher noted that most of the terms created have shortcomings, since they are not user-friendly.
These shortcomings include terms that are longer than their English counterparts, as some terms are translated into Shona by a whole phrase. Most of these terms are neologisms, compound neologisms, transliterations, circumlocutions, and blends. The paper will show that there is an overuse of transliterated terms due to the lack of Shona equivalents for English terms. Most single English words were translated into compound neologisms or phrases after attempts to reduce them to one-word terms failed. In other instances, circumlocution produced terms longer than the original, and as a result the terms are not user-friendly. The paper will discuss and evaluate the different phonetics and phonology terms created and the different strategies and processes used in creating them. Keywords: blending, circumlocution, term creation, translation
Procedia PDF Downloads 147
697 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety
Authors: Atheer Al-Nuaimi, Harry Evdorides
Abstract:
Every day, many people die or are injured or disabled on roads around the world, which necessitates more specific treatments for transportation safety issues. The International Road Assessment Programme (iRAP) model is one of the comprehensive road safety models, accounting for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These star ratings are based on a star rating score calculated by the iRAP methodology from road attributes, traffic volumes and operating speeds. The outputs of the iRAP methodology are the treatments that can be used to improve road safety and reduce the numbers of fatalities and serious injuries (FSI). These countermeasures can be applied separately, as a single countermeasure, or combined as multiple countermeasures at a location. There is general agreement that the effectiveness of a countermeasure is subject to consistent losses when it is used in combination with other countermeasures; that is, the crash-reduction estimates of individual countermeasures cannot simply be added together. The iRAP methodology therefore uses multiple-countermeasure adjustment factors to predict the reduced effectiveness of road safety countermeasures when more than one countermeasure is selected. A multiple-countermeasure correction factor is computed for every 100-metre segment and for every crash type. However, a limitation of this approach is a probable over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models to calculate the effect of using multiple countermeasures on the number of fatalities for a location or an entire road. Regression models have been used to establish relationships between crash frequencies and the factors that affect their rates.
Multiple linear regression, negative binomial regression, and Poisson regression techniques were used to develop models that can address the effectiveness of using multiple countermeasures. Analyses conducted using R (The R Project for Statistical Computing) showed that a model developed with the negative binomial regression technique gives more reliable predictions of the number of fatalities after the implementation of multiple road safety countermeasures than the iRAP model. The results also showed that the negative binomial regression approach gives more precise results than the multiple linear and Poisson regression techniques because of overdispersion and standard error issues. Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety
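The case for negative binomial over Poisson regression rests on overdispersion: crash counts whose variance exceeds their mean violate the Poisson assumption (variance = mean). A small pure-Python simulation sketch illustrates this, with all parameter values (mean μ = 3, dispersion α = 1.5) chosen arbitrarily for illustration:

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's algorithm for sampling a Poisson(lam) count
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def neg_binomial(mu, disp):
    # Negative binomial as a Poisson-Gamma mixture:
    # lam ~ Gamma(shape = 1/disp, scale = mu*disp), then count ~ Poisson(lam).
    # Resulting variance is mu + disp * mu^2 > mu (overdispersed).
    lam = random.gammavariate(1.0 / disp, mu * disp)
    return poisson(lam)

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

n, mu = 20000, 3.0
m_p, v_p = mean_var([poisson(mu) for _ in range(n)])
m_nb, v_nb = mean_var([neg_binomial(mu, disp=1.5) for _ in range(n)])
# Poisson sample: variance close to mean; NB sample: variance far above mean.
```

When count data look like the second sample, a Poisson model understates the standard errors, which is the "overdispersion and standard error" issue the abstract refers to.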
Procedia PDF Downloads 240
696 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana
Authors: Gautier Viaud, Paul-Henry Cournède
Abstract:
Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on the population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state-space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax with metaprogramming capabilities and high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is thus presented, in which the surface area of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the times at which data are available to be the same for all individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects. Keywords: Bayesian, genotypic differentiation, hierarchical models, plant growth models
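The Metropolis step at the heart of the hybrid Gibbs-Metropolis sampler can be sketched on a toy one-parameter posterior (the data, prior, and noise scale below are invented for illustration; the actual model is the nonlinear organ-scale GreenLab model):

```python
import math
import random

random.seed(0)

# Toy setting: observations y_i ~ N(theta, 0.5^2), prior theta ~ N(0, 10^2)
data = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]

def log_posterior(theta):
    log_prior = -theta ** 2 / (2 * 10.0 ** 2)
    log_lik = sum(-(y - theta) ** 2 / (2 * 0.5 ** 2) for y in data)
    return log_prior + log_lik

# Random-walk Metropolis: propose, then accept with prob min(1, ratio)
theta, samples = 0.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.5)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal  # accept; otherwise keep the current value
    samples.append(theta)

burned = samples[5000:]                   # discard burn-in
post_mean = sum(burned) / len(burned)     # close to the data mean (~4.07)
```

In the hierarchical setting, this step replaces the explicit draw of each individual's parameters inside an otherwise standard Gibbs sweep over the population-level parameters.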
Procedia PDF Downloads 303
695 Creativity and Innovation in Postgraduate Supervision
Authors: Rajendra Chetty
Abstract:
The paper aims to address two aspects of postgraduate studies: interdisciplinary research and creative models of supervision. Interdisciplinary research can be viewed as a key imperative for solving complex problems. While excellent research requires a context of disciplinary strength, the cutting edge is often found at the intersection between disciplines. Interdisciplinary research foregrounds a team approach, and information, methodologies, designs, and theories from different disciplines are integrated to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline. Our aim should also be to generate research that transcends the original disciplines, i.e., transdisciplinary research. Complexity is characteristic of the knowledge economy; hence, postgraduate research and engaged scholarship should be viewed by universities as primary vehicles through which knowledge can be generated to have a meaningful impact on society. There are far too many 'ordinary' studies that fall into the realm of credentialism and certification, as opposed to significant studies that generate new knowledge and provide a trajectory for further academic discourse. Secondly, the paper will look at models of supervision that differ from the dominant 'apprentice' or individual approach. A reflective practitioner approach would be used to discuss a range of supervision models that resonate well with the principles of interdisciplinarity, growth in the postgraduate sector, and a commitment to engaged scholarship. The global demand for postgraduate education has resulted in increased intake and new demands on limited supervision capacity at institutions. Team supervision lodged within large-scale research projects, working with a cohort of students within a research theme, the journal-article route to doctoral studies, and the professional PhD are some of the models that provide an alternative to the traditional approach.
International cooperation should be encouraged in the production of high-impact research, and institutions should be committed to stimulating international linkages which would result in co-supervision, mobility of postgraduate students, and global significance of postgraduate research. International linkages are also valuable in increasing the capacity for supervision at new and developing universities. Innovative co-supervision and joint-degree options with global partners should be explored within strategic planning for innovative postgraduate programmes. Co-supervision of PhD students is probably the strongest driver (besides funding) for collaborative research, as it provides the glue of shared interest, advantage and commitment between supervisors. The students' fields serve and inform the co-supervisors' own research agendas and help to shape over-arching research themes through shared research findings. Keywords: interdisciplinarity, internationalisation, postgraduate, supervision
Procedia PDF Downloads 238
694 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation
Authors: Nawras Kurzom, Avi Mendelsohn
Abstract:
The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonics, can elicit feelings of tension, which can, in turn, affect memory formation of concomitant information. The aim of the presented studies was to explore how forming declarative memory is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory of declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) with different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance as compared to silence. 
We detected a trade-off between tension perception and memory: individuals who perceived the musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory in those who were less sensitive to the perception of musical tension. Musical tension thus exerts complex interactions with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and of the diverse effects that music has on ongoing processing of declarative information. Keywords: musical tension, declarative memory, learning and memory, musical perception
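The abstract notes that MIR features were used to quantify tension-related musical properties such as dissonance/roughness. As an illustrative sketch only (not the authors' pipeline), one widely used sensory-dissonance model, Sethares' parameterisation of the Plomp-Levelt roughness curve, ranks a dissonant minor second as far rougher than a consonant perfect fifth:

```python
import math

def dissonance(f1, f2, a1=1.0, a2=1.0):
    # Sethares' parameterisation of the Plomp-Levelt roughness curve for a
    # pair of pure tones with frequencies f1, f2 and amplitudes a1, a2.
    b1, b2, d_star, s1, s2 = 3.5, 5.75, 0.24, 0.0207, 18.96
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = d_star / (s1 * f_lo + s2)   # scales the curve to the critical band
    x = f_hi - f_lo                 # frequency separation in Hz
    return a1 * a2 * (math.exp(-b1 * s * x) - math.exp(-b2 * s * x))

c4 = 261.63  # Hz
minor_second = dissonance(c4, c4 * 2 ** (1 / 12))   # C4 + C#4 (dissonant)
perfect_fifth = dissonance(c4, c4 * 2 ** (7 / 12))  # C4 + G4 (consonant)
# minor_second comes out roughly an order of magnitude larger
```

Summing this pairwise measure over the partials of a complex chord gives a simple roughness score of the kind MIR toolkits compute for tension analysis.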
Procedia PDF Downloads 98