Search results for: starlight frequency reduction
897 A Systematic Review on the Whole-Body Cryotherapy versus Control Interventions for Recovery of Muscle Function and Perceptions of Muscle Soreness Following Exercise-Induced Muscle Damage in Runners
Authors: Michael Nolte, Iwona Kasior, Kala Flagg, Spiro Karavatas
Abstract:
Background: Cryotherapy has been used as a post-exercise recovery modality for decades. Whole-body cryotherapy (WBC) is an intervention which involves brief exposures to extremely cold air in order to induce therapeutic effects. It is currently being investigated for its effectiveness in treating certain exercise-induced impairments. Purpose: The purpose of this systematic review was to determine whether WBC as a recovery intervention is more, less, or equally as effective as other interventions at reducing perceived levels of muscle soreness and promoting recovery of muscle function after exercise-induced muscle damage (EIMD) from running. Methods: A systematic review of the current literature was performed utilizing the following MeSH terms: cryotherapy, whole-body cryotherapy, exercise-induced muscle damage, muscle soreness, muscle recovery, and running. The databases utilized were PubMed, CINAHL, EBSCO Host, and Google Scholar. Articles were included if they were published within the last ten years, had a CEBM level of evidence of IIb or higher, had a PEDro scale score of 5 or higher, studied runners as primary subjects, and utilized both perceived levels of muscle soreness and recovery of muscle function as dependent variables. Articles were excluded if subjects did not include runners, if the interventions included PBC instead of WBC, and if both muscle performance and perceived muscle soreness were not assessed within the study. Results: Two of the four articles revealed that WBC was significantly more effective than treatment interventions such as far-infrared radiation and passive recovery at reducing perceived levels of muscle soreness and restoring muscle power and endurance following simulated trail runs and high-intensity interval running, respectively. One of the four articles revealed no significant difference between WBC and passive recovery in terms of reducing perceived muscle soreness and restoring muscle power following sprint intervals. 
One of the four articles revealed that WBC had a harmful effect compared to CWI and passive recovery on both perceived muscle soreness and recovery of muscle strength and power following a marathon. Discussion/Conclusion: Though there was no consensus in terms of WBC’s effectiveness at treating exercise-induced muscle damage following running compared to other interventions, it seems as though WBC may at least have a time-dependent positive effect on muscle soreness and recovery following high-intensity interval runs and endurance running, marathons excluded. More research needs to be conducted in order to determine the most effective way to implement WBC as a recovery method for exercise-induced muscle damage, including the optimal temperature, timing, duration, and frequency of treatment. Keywords: cryotherapy, physical therapy intervention, physical therapy, whole body cryotherapy
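The inclusion and exclusion criteria above amount to a mechanical screening filter. A minimal Python sketch of such a filter, using hypothetical article records and invented field names (nothing here comes from the review's actual data):

```python
# Sketch of the review's screening step. The record fields (year, cebm_level,
# pedro_score, ...) are illustrative assumptions, not the paper's data model.

CEBM_RANK = {"Ia": 6, "Ib": 5, "IIa": 4, "IIb": 3, "III": 2, "IV": 1}

def is_eligible(article, current_year=2023):
    """Apply the stated inclusion criteria to one article record."""
    return (
        current_year - article["year"] <= 10                      # last ten years
        and CEBM_RANK[article["cebm_level"]] >= CEBM_RANK["IIb"]  # CEBM IIb or higher
        and article["pedro_score"] >= 5                           # PEDro score >= 5
        and article["subjects_runners"]                           # runners as subjects
        and article["assesses_soreness"]                          # soreness measured
        and article["assesses_function"]                          # function measured
        and article["intervention"] == "WBC"                      # WBC, not PBC
    )

candidates = [
    {"year": 2018, "cebm_level": "Ib", "pedro_score": 6, "subjects_runners": True,
     "assesses_soreness": True, "assesses_function": True, "intervention": "WBC"},
    {"year": 2009, "cebm_level": "Ib", "pedro_score": 7, "subjects_runners": True,
     "assesses_soreness": True, "assesses_function": True, "intervention": "WBC"},
    {"year": 2019, "cebm_level": "IIb", "pedro_score": 4, "subjects_runners": True,
     "assesses_soreness": True, "assesses_function": True, "intervention": "PBC"},
]

included = [a for a in candidates if is_eligible(a)]
```

Only the first hypothetical record survives all seven criteria; the second fails the ten-year window and the third fails both the PEDro cutoff and the PBC exclusion.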
Procedia PDF Downloads 241
896 Application of Response Surface Methodology to Assess the Impact of Aqueous and Particulate Phosphorous on Diazotrophic and Non-Diazotrophic Cyanobacteria Associated with Harmful Algal Blooms
Authors: Elizabeth Crafton, Donald Ott, Teresa Cutright
Abstract:
Harmful algal blooms (HABs), more notably cyanobacteria-dominated HABs, compromise water quality, jeopardize access to drinking water, and are a risk to public health and safety. HABs are representative of ecosystem imbalance largely caused by environmental changes, such as eutrophication, that are associated with the globally expanding human population. Cyanobacteria-dominated HABs are anticipated to increase in frequency and magnitude and are predicted to plague a larger geographical area as a result of climate change. The weather pattern is important, as storm-driven pulse inputs of nutrients have been correlated with cyanobacteria-dominated HABs. The mobilization of aqueous and particulate nutrients and the response of the phytoplankton community is an important relationship in this complex phenomenon. This relationship is most apparent in high-impact areas with adequate sunlight, temperatures above 20 °C, excessive nutrients, and quiescent water, conditions that correspond to ideal growth of HABs. Typically, the impact of particulate phosphorus is dismissed as an insignificant contribution, which is true for areas that are not considered high-impact. The objective of this study was to assess the impact of a simulated storm-driven pulse input of reactive phosphorus on the response of three different cyanobacteria assemblages (~5,000 cells/mL). The aqueous and particulate sources of phosphorus and changes in the HAB were tracked weekly for 4 weeks. The first cyanobacteria composition consisted of Planktothrix sp., Microcystis sp., Aphanizomenon sp., and Anabaena sp., with 70% of the total population being non-diazotrophic and 30% being diazotrophic. The second was comprised of Anabaena sp., Planktothrix sp., and Microcystis sp., with 87% diazotrophic and 13% non-diazotrophic. The third composition has yet to be determined as these experiments are ongoing. Preliminary results suggest that both aqueous and particulate sources are contributors of total reactive phosphorus in high-impact areas.
The results further highlight shifts in the cyanobacteria assemblage after the simulated pulse input. In the controls, the reactors dosed with aqueous reactive phosphorus maintained a constant concentration for the duration of the experiment; whereas the reactors that were dosed with aqueous reactive phosphorus and contained soil decreased from 1.73 mg/L to 0.25 mg/L of reactive phosphorus from time zero to 7 days; this was higher than the blank (0.11 mg/L). This suggests binding of aqueous reactive phosphorus to sediment, which is further supported by the positive correlation observed between total reactive phosphorus concentration and turbidity. The experiments are nearly completed, and a full statistical analysis of the results will be completed prior to the conference. Keywords: Anabaena, cyanobacteria, harmful algal blooms, Microcystis, phosphorous, response surface methodology
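The response surface methodology named in the title fits a second-order polynomial surface to the experimental factors. A pure-Python sketch of such a fit on synthetic data, with illustrative factors (x1, x2 standing in for two phosphorus doses; not the authors' actual design):

```python
# Minimal second-order response-surface fit by least squares (normal equations
# solved with Gaussian elimination). Factors and coefficients are synthetic.

def features(x1, x2):
    # full quadratic model: intercept, linear, pure quadratic, interaction
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def fit_rsm(points, ys):
    X = [features(x1, x2) for x1, x2 in points]
    k = len(X[0])
    ata = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    aty = [sum(row[i] * y for row, y in zip(X, ys)) for i in range(k)]
    return solve(ata, aty)

# Synthetic response generated from known coefficients, then recovered by the fit.
true_b = [0.5, 1.2, -0.8, 0.3, 0.1, -0.4]
pts = [(x1 / 2.0, x2 / 2.0) for x1 in range(5) for x2 in range(5)]
ys = [sum(b * f for b, f in zip(true_b, features(x1, x2))) for x1, x2 in pts]
coeffs = fit_rsm(pts, ys)
```

With noise-free synthetic data, the fitted coefficients reproduce the generating ones; on real dose-response data the same machinery yields the fitted surface and its stationary point.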
Procedia PDF Downloads 167
895 Enhancement of Fracture Toughness for Low-Temperature Applications in Mild Steel Weldments
Authors: Manjinder Singh, Jasvinder Singh
Abstract:
Existing theories about the Titanic/Liberty ship and Sydney bridge accidents, together with practical experience, generated interest in developing weldments that have high toughness under sub-zero temperature conditions. The purpose was to protect the joint from undergoing DBT (ductile-to-brittle transition) when the ambient temperature reaches sub-zero levels. Metallurgical improvements such as lowering carbon content or adding deoxidizing elements like Mn and Si were effective in preventing fracture (cracking) in weldments at low temperature. In the present research, an attempt has been made to investigate the reason behind the ductile-to-brittle transition of mild steel weldments subjected to sub-zero temperatures and a method for its mitigation. Nickel is added to weldments using manual metal arc welding (MMAW) to prevent DBT and counter the progressive reduction in Charpy impact values as temperature is lowered. The variation in toughness with respect to the nickel content added to the weld pool is analyzed quantitatively to evaluate the rise in toughness with increasing nickel amount. The impact performance of welded specimens was evaluated by Charpy V-notch impact tests at various temperatures (20 °C, 0 °C, -20 °C, -40 °C, -60 °C). A notch is made in the weldments, as notch-sensitive failure is particularly likely to occur at zones of high stress concentration caused by a notch. The effect of nickel addition to the weldments at various temperatures was then studied by mechanical and metallurgical tests. It was noted that a large gain in impact toughness could be achieved by adding nickel. The highest yield strength (462 J) in combination with good impact toughness (over 220 J at -60 °C) was achieved with an alloying content of 16 wt.% nickel. Based on metallurgical behavior, it was concluded that the weld metals increasingly solidify as austenite with increasing nickel. The microstructure was characterized using optical microscopy and high-resolution SEM (scanning electron microscopy).
At inter-dendritic regions, mainly martensite was found. In dendrite core regions of the low-carbon weld metals, a mixture of upper bainite, lower bainite, and a novel constituent, coalesced bainite, formed. Coalesced bainite was characterized by large bainitic ferrite grains with cementite precipitates and is believed to form when the bainite and martensite start temperatures are close to each other. Mechanical properties could be rationalized in terms of microstructural constituents as a function of nickel content. Keywords: MMAW, toughness, DBT, notch, SEM, coalesced bainite
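The Charpy test series above is the standard way to locate the ductile-to-brittle transition temperature (DBTT). A minimal sketch of one common estimate, interpolating the temperature at which absorbed energy crosses the mid-shelf value; the impact energies below are illustrative, not the paper's measurements:

```python
# Estimate the DBTT from Charpy V-notch data by linear interpolation at the
# upper/lower-shelf mid-energy criterion. Energies are invented for the sketch.

def dbtt(temps_c, energies_j):
    """Temperature at which absorbed energy crosses the shelf midpoint."""
    pairs = sorted(zip(temps_c, energies_j))
    lo, hi = min(energies_j), max(energies_j)
    target = (lo + hi) / 2.0
    for (t0, e0), (t1, e1) in zip(pairs, pairs[1:]):
        if (e0 - target) * (e1 - target) <= 0:
            # linear interpolation between the two bracketing test points
            return t0 + (target - e0) * (t1 - t0) / (e1 - e0)
    raise ValueError("energy curve never crosses the midpoint")

temps = [-60, -40, -20, 0, 20]       # test temperatures used in the study, deg C
energy = [20, 60, 140, 200, 220]     # J, illustrative Charpy energies
transition = dbtt(temps, energy)
```

For these illustrative values the midpoint (120 J) is crossed between -40 °C and -20 °C, so the estimate lands at -25 °C; a nickel-alloyed series would shift this crossing to lower temperatures.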
Procedia PDF Downloads 526
894 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textile, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material – that is, to determine its composition, shape, size, and the number of layers and crystals. To contribute to this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal sizes. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high segmentation capacity for graphene oxide crystals, presenting accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds across significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% for the different measurement metrics, suggesting that the model provides high-accuracy measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower error in the measurements without greater efforts for data handling. All in all, the method developed is a substantial time saver: it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work. Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
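The measurement step described above (labelling crystals in the segmentation mask, then extracting area and perimeter per crystal) can be sketched in pure Python on a toy binary mask; a real pipeline would run the same logic on the SEM-sized U-net output:

```python
# Label connected crystal regions in a binary mask, then measure each region:
# area = pixel count, perimeter = exposed 4-neighbour edges. Toy mask only.

from collections import deque

def label_regions(mask):
    """4-connected components; returns a label grid (0 = background) and count."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                current += 1
                q = deque([(i, j)])
                labels[i][j] = current
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

def measure(labels, n):
    """Per-region area and boundary perimeter, in pixel units."""
    h, w = len(labels), len(labels[0])
    area = [0] * (n + 1)
    perim = [0] * (n + 1)
    for i in range(h):
        for j in range(w):
            k = labels[i][j]
            if not k:
                continue
            area[k] += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = i + dy, j + dx
                if not (0 <= ny < h and 0 <= nx < w) or labels[ny][nx] != k:
                    perim[k] += 1
    return area[1:], perim[1:]

mask = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
]
labels, n = label_regions(mask)
areas, perims = measure(labels, n)
```

The toy mask contains two "crystals": a 2x2 block (area 4, perimeter 8) and a 3x1 column (area 3, perimeter 8); the resulting area list is exactly the input for the frequency distributions the paper generates.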
Procedia PDF Downloads 160
893 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is being awarded greater importance as capable artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed experimentally and with a Computational Fluid Dynamics (CFD) approach using the DCAB031 model located in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in STAR-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), and it is possible to show a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate.
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, a decrease between fluids exists due to the viscosity. Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
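The Grid Convergence Index mentioned above follows the standard Roache procedure on three systematically refined meshes. A minimal sketch, with illustrative solution values rather than the authors' data:

```python
# Grid Convergence Index from three systematically refined meshes (constant
# refinement ratio r, safety factor Fs = 1.25 for a three-grid study).
# Solution values below are illustrative pressure-rise numbers.

import math

def gci_fine(f_fine, f_med, f_coarse, r=2.0, fs=1.25):
    """Observed order of accuracy and GCI on the fine grid."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    e_rel = abs((f_fine - f_med) / f_fine)     # relative error, fine vs medium
    return fs * e_rel / (r ** p - 1.0), p

gci, order = gci_fine(100.0, 102.0, 110.0)
```

For these illustrative values the observed order is 2 and the fine-grid GCI is about 0.83%, i.e. the reported 2.5% GCI corresponds to a somewhat larger relative error or lower observed order on the real meshes.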
Procedia PDF Downloads 128
892 Reducing Unnecessary CT Aorta Scans in the Emergency Department
Authors: Ibrahim Abouelkhir
Abstract:
Background: Prior to this project, the number of CT aorta requests from our Emergency Department (ED) was reported by the radiology department to be high, with a low positive event rate: only 1-2% of CT aortas performed were positive for acute aortic syndrome. This trend raised concerns about the time required to process and report these scans, potentially impacting the timely reporting of other high-priority imaging, such as trauma-related scans. Other harms identified were unnecessary radiation, patients spending longer in the ED contributing to overcrowding, and, most importantly, the patient not getting the right care the first time. The radiology department also raised the problem of reporting bias because they expected our CT aortas to be normal. Aim: The main aim of this project was to reduce the number of unnecessary CT aortas requested, which would be shown by (1) the number of CT aortas requested and (2) the positive event rate. Methodology: This was a quality improvement project carried out in the ED at Frimley Park Hospital, UK. Starting from 1st January 2024, we recorded the number of days required to reach 35 CT aorta requests. We looked at all patients presenting to the ED over the age of 16 for whom a CT aorta was requested by the ED team. We looked at how many of these scans were positive for acute aortic syndrome. The intervention was a change in practice: all CT aortas should be approved by an ED consultant or ST4+ registrar (5th April 2024). We then reviewed the number of days it took to reach a total of 35 CT aorta requests following the intervention and again reviewed how many were positive. Results: Prior to the intervention, 35 CT aorta scans were performed over a 20-day period. Following the implementation of the ED senior doctor vetting process, the same number of CT aorta scan requests was observed over 50 days - more than twice the pre-intervention period. This indicates a significant reduction in the rate of CT aorta scans being requested.
During the pre-intervention phase, there were two positive cases of acute aortic syndrome. In the post-intervention period, there were zero. Conclusion: The mandatory senior review of CT aorta scan requests effectively reduced the number of scans requested. However, this intervention did not lead to an increase in positive scan results. We noted that post-intervention, approximately 50% of scans had been approved by registrar-grade doctors and only 50% by ED consultants, and the majority were not in-person reviews. We wonder whether restricting approval to consultant grade only might improve the results; furthermore, in-person reviews should be the gold standard. Keywords: quality improvement project, CT aorta scans, emergency department, radiology department, aortic dissection, scan request vetting, clinical outcomes, imaging efficiency
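The outcome measure used here, days to accumulate a fixed number of requests, reduces to a simple rate comparison:

```python
# Rate comparison for the time-to-35-requests outcome reported in the abstract.

def requests_per_day(n_requests, n_days):
    return n_requests / n_days

pre = requests_per_day(35, 20)     # 1.75 scans/day before vetting
post = requests_per_day(35, 50)    # 0.70 scans/day after vetting
reduction = 1 - post / pre         # fractional drop in the request rate
```

The 20-day versus 50-day accumulation periods correspond to a 60% reduction in the daily request rate.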
Procedia PDF Downloads 10
891 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, like healthcare, to gather patients’ emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised Machine Learning models, including state-of-the-art Deep Learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computation is the limited amount of annotated data. The existing labelled emotion datasets are highly subject to the perception of the annotator. We address the first issue of feature selection by exploiting the use of traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem of subjectivity in stress labels, we use Lovheim’s cube, which is a 3-dimensional projection of emotions.
Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions. Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
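A three-component PCA like the one used to map Emo-CNN embeddings onto Lovheim's cube can be sketched in pure Python via power iteration with deflation; the embedding vectors below are random stand-ins, not network outputs:

```python
# Minimal 3-component PCA: centre the data, build the covariance matrix, pull
# out the top three eigenvectors by power iteration with deflation, project.
# The "embeddings" are random placeholders for the learnt Emo-CNN features.

import random

def mat_vec(m, v):
    return [sum(r[j] * v[j] for j in range(len(v))) for r in m]

def top_component(cov, iters=500):
    random.seed(0)
    v = [random.random() for _ in cov]
    for _ in range(iters):
        w = mat_vec(cov, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(x * y for x, y in zip(mat_vec(cov, v), v))  # Rayleigh quotient
    return lam, v

def pca3(data):
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - mean[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    comps = []
    for _ in range(3):
        lam, v = top_component(cov)
        comps.append(v)
        # deflate: subtract the found component from the covariance matrix
        cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(d)] for i in range(d)]
    # project each sample onto the three components -> a 3-D "cube" coordinate
    return [[sum(c[j] * r[j] for j in range(d)) for c in comps] for r in centred]

random.seed(1)
embeddings = [[random.gauss(0, 1) for _ in range(8)] for _ in range(40)]
coords = pca3(embeddings)
```

Each embedding is reduced to a 3-D coordinate, which is the kind of point that can then be located relative to the cube's neurotransmitter axes.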
Procedia PDF Downloads 154
890 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator
Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov
Abstract:
The paper is devoted to one type of engine with external heating: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. Further, the acoustic energy of the oscillating gas flow must be converted to mechanical energy, and this energy in turn must be converted to electric energy. The most widely used ways of transforming acoustic energy to electric energy are the linear generator and the conventional generator with a crank mechanism. In both cases, a piston is used. The main disadvantages of using a piston are friction losses, lubrication problems, and working fluid pollution, which cause a decrease in engine power and ecological efficiency. The use of a bi-directional impulse turbine as an energy converter is suggested. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction; the direction of turbine rotation does not change in the process. Different types of bi-directional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them. A radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type of impulse turbine was chosen: an axial impulse turbine, which has a simpler design than that of a radial turbine and similar efficiency. The peculiarities of the method of calculating an impulse turbine are discussed. They include changes in gas pressure and velocity as functions of time during the generation of shock waves in the oscillating gas flow of a thermoacoustic system. In a thermoacoustic system, pressure constantly changes according to a certain law due to acoustic wave generation; the peak pressure values are the amplitude, which determines the acoustic power.
Gas flowing in a thermoacoustic system periodically changes direction; its mean velocity is zero, but its peak values can be used to drive bi-directional turbine rotation. In contrast with a conventional feed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influence its calculation algorithm. The calculated power output is 150 W at a rotational speed of 12,000 rpm and a pressure amplitude of 1.7 kPa. Then, 3D modeling and numerical study of the impulse turbine were carried out. As a result of the numerical modeling, the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was produced on a 3D printer. An experimental unit was designed to verify the numerical modeling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that use of the bi-directional impulse turbine is advisable. In its characteristics as a converter, it is comparable with linear electric generators, but its life cycle will be longer and the engine itself smaller due to the turbine's rotary motion. Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator
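The acoustic power available to the turbine follows from the time-averaged product of the pressure and velocity oscillations. A back-of-envelope sketch; the pressure amplitude (1.7 kPa) is from the abstract, while the velocity amplitude and duct area are assumed values chosen only to land near the stated 150 W:

```python
# Time-averaged acoustic power of a sinusoidal oscillating flow through a duct:
# E = 0.5 * p1 * u1 * A * cos(phase). Only p1 comes from the abstract; u1 and
# the cross-section area are assumptions for illustration.

import math

def acoustic_power(p_amp, u_amp, area, phase_rad=0.0):
    """Time-averaged power carried through cross-section `area` (W)."""
    return 0.5 * p_amp * u_amp * area * math.cos(phase_rad)

p1 = 1.7e3    # Pa, pressure amplitude from the paper
u1 = 10.0     # m/s, assumed velocity amplitude
a = 0.018     # m^2, assumed duct cross-section
power = acoustic_power(p1, u1, a)
```

With these assumed amplitudes the formula gives 153 W, the same order as the 150 W output the authors calculate.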
Procedia PDF Downloads 378
889 Isolation of Nitrosoguanidine Induced NaCl Tolerant Mutant of Spirulina platensis with Improved Growth and Phycocyanin Production
Authors: Apurva Gupta, Surendra Singh
Abstract:
Spirulina spp., as a promising source of many commercially valuable products, is grown photoautotrophically in open ponds and raceways on a large scale. However, economic exploitation in an open system seems to have been limited by the lack of multiple stress-tolerant strains. The present study aims to isolate a stable stress-tolerant mutant of Spirulina platensis with improved growth rate and enhanced potential to produce its commercially valuable bioactive compounds. N-methyl-N'-nitro-N-nitrosoguanidine (NTG) at 250 μg/mL (a concentration permitting 1% survival) was employed for chemical mutagenesis to generate random mutants, which were screened against NaCl. In a preliminary experiment, wild type S. platensis was treated with NaCl concentrations from 0.5-1.5 M to calculate its LC₅₀. Mutagenized colonies were then screened for tolerance at 0.8 M NaCl (the LC₅₀), and the surviving colonies were designated as NaCl-tolerant mutants of S. platensis. The mutant cells exhibited 1.5 times improved growth against NaCl stress as compared to the wild type strain under control conditions. This might be due to the ability of the mutant cells to protect their metabolic machinery against the inhibitory effects of salt stress. Salt stress is known to adversely affect the rate of photosynthesis in cyanobacteria by causing degradation of the pigments. Interestingly, the mutant cells were able to protect their photosynthetic machinery and exhibited 4.23 and 1.72 times enhanced accumulation of Chl a and phycobiliproteins, respectively, which resulted in enhanced rates of photosynthesis (2.43 times) and respiration (1.38 times) under salt stress. Phycocyanin production in mutant cells was observed to be enhanced by 1.63-fold. Nitrogen metabolism plays a vital role in conferring halotolerance on cyanobacterial cells through influx of nitrate and efflux of Na+ ions from the cell. The NaCl-tolerant mutant cells took up 2.29 times more nitrate than the wild type and reduced it efficiently.
Nitrate reductase and nitrite reductase activities in the mutant cells also improved by 2.45 and 2.31 times, respectively, against salt stress. From these preliminary results, it could be deduced that enhanced nitrogen uptake and its efficient reduction might be a reason for the adaptive, halotolerant behavior of the S. platensis mutant cells. Also, the NaCl-tolerant mutant of S. platensis, with significantly improved growth and phycocyanin accumulation compared to the wild type, can be commercially promising. Keywords: chemical mutagenesis, NaCl tolerant mutant, nitrogen metabolism, photosynthetic machinery, phycocyanin
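The preliminary LC₅₀ determination over the 0.5-1.5 M range can be sketched as a linear interpolation of a survival curve; the survival fractions below are illustrative, not the study's measurements:

```python
# LC50 by linear interpolation: the dose at which survival crosses 50%.
# Survival fractions are invented for the sketch; only the 0.5-1.5 M dose
# range and the ~0.8 M answer come from the abstract.

def lc50(doses, survival):
    """Dose at which the survival curve crosses 0.5, by linear interpolation."""
    pairs = sorted(zip(doses, survival))
    for (d0, s0), (d1, s1) in zip(pairs, pairs[1:]):
        if (s0 - 0.5) * (s1 - 0.5) <= 0:
            return d0 + (0.5 - s0) * (d1 - d0) / (s1 - s0)
    raise ValueError("survival never crosses 50%")

doses = [0.5, 0.7, 0.9, 1.1, 1.3, 1.5]        # M NaCl
surv = [0.95, 0.70, 0.40, 0.20, 0.05, 0.01]   # surviving fraction, illustrative
est = lc50(doses, surv)
```

For these illustrative fractions the crossing falls at about 0.83 M, close to the 0.8 M screening dose the authors used.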
Procedia PDF Downloads 168
888 Relationship between Different Heart Rate Control Levels and Risk of Heart Failure Rehospitalization in Patients with Persistent Atrial Fibrillation: A Retrospective Cohort Study
Authors: Yongrong Liu, Xin Tang
Abstract:
Background: Persistent atrial fibrillation is a common arrhythmia closely related to heart failure. Heart rate control is an essential strategy for treating persistent atrial fibrillation. Still, the understanding of the relationship between different heart rate control levels and the risk of heart failure rehospitalization is limited. Objective: The objective of the study is to determine the relationship between different levels of heart rate control in patients with persistent atrial fibrillation and the risk of readmission for heart failure. Methods: We conducted a retrospective dual-centre cohort study, collecting data from patients with persistent atrial fibrillation who received outpatient treatment at two tertiary hospitals in central and western China from March 2019 to March 2020. The collected data included age, gender, body mass index (BMI), medical history, and hospitalization frequency due to heart failure. Patients were divided into three groups based on their heart rate control levels: Group I with a resting heart rate of less than 80 beats per minute, Group II with a resting heart rate between 80 and 100 beats per minute, and Group III with a resting heart rate greater than 100 beats per minute. The readmission rates due to heart failure within one year after discharge were statistically analyzed using propensity score matching in a 1:1 ratio. Differences in readmission rates among the different groups were compared using one-way ANOVA. The impact of varying levels of heart rate control on the risk of readmission for heart failure was assessed using the Cox proportional hazards model. Binary logistic regression analysis was employed to control for potential confounding factors. Results: We enrolled a total of 1136 patients with persistent atrial fibrillation. The results of the one-way ANOVA showed that there were differences in readmission rates among groups exposed to different levels of heart rate control. 
The readmission rates due to heart failure for each group were as follows: Group I (n=432): 31 (7.17%); Group II (n=387): 43 (11.11%); Group III (n=317): 90 (28.50%) (F=54.3, P<0.001). After performing 1:1 propensity score matching for the different groups, 223 pairs were obtained. Analysis using the Cox proportional hazards model showed that compared to Group I, the risk of readmission for Group II was 1.372 (95% CI: 1.125-1.682, P<0.001), and for Group III was 2.053 (95% CI: 1.006-5.437, P<0.001). Furthermore, binary logistic regression analysis, including variables such as digoxin, hypertension, smoking, coronary heart disease, and chronic obstructive pulmonary disease as independent variables, revealed that coronary heart disease and COPD also had a significant impact on readmission due to heart failure (p<0.001). Conclusion: Higher resting heart rate in patients with persistent atrial fibrillation is positively associated with the risk of heart failure rehospitalization; reasonable heart rate control may significantly reduce this risk. Keywords: heart rate control levels, heart failure rehospitalization, persistent atrial fibrillation, retrospective cohort study
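As a cross-check on the group comparison, the readmission counts reported above support a simple relative-risk calculation with a log-method confidence interval (Group III vs Group I, using the counts in the abstract):

```python
# Crude relative risk of readmission with a 95% log-normal CI, using the
# Group III (90/317) and Group I (31/432) counts from the abstract. This is
# an unadjusted check, not the paper's Cox or logistic analysis.

import math

def relative_risk(a, n1, b, n2, z=1.96):
    """RR of group 1 vs group 2 with a log-method confidence interval."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi

rr, lo, hi = relative_risk(90, 317, 31, 432)
```

The crude RR is close to 4 with a lower confidence bound well above 1, consistent in direction with the adjusted hazard ratio of 2.053 the authors report for Group III.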
Procedia PDF Downloads 74
887 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research mainly concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionalities. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases during development to confirm there are no regression bugs. The team automated 346 test cases; the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, and the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient workflow when introducing scalability into a traditional test automation environment; in order to introduce scalability efficiently, a scripting language was adopted. The scalability implementation is mainly built on AWS serverless technology, the Elastic Container Service. The definition of scalability here is the ability to automatically set up computers for test automation and increase or decrease the number of computers running those tests. This means the scalable mechanism can help test cases run in parallel, so test execution time is dramatically decreased. Also, introducing scalable test automation does more than just reduce test execution time: there is a possibility that some challenging bugs, such as race conditions, are detected, since test cases can be executed at the same time.
If API and Unit tests are implemented, the test strategies can be adopted more efficiently for this scalability testing. However, in WEB applications, as a practical matter, API and Unit testing cannot cover 100% functional testing since they do not reach front-end codes. This study applied a scalable UI automation testing strategy to the large-scale homecare management system. It confirmed the optimization of the test case execution time and the detection of a challenging bug. This study first describes the detailed architecture of the scalable test automation environment, then describes the actual performance reduction time and an example of challenging issue detection.Keywords: aws, elastic container service, scalability, serverless, ui automation test
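The execution-time arithmetic behind the scaling decision can be sketched as follows. The 346-case, 17-hour baseline is from the abstract; the even-sharding assumption, the one-hour target, and the runner counts are illustrative, not from the study.

```python
import math

def runners_needed(total_minutes: float, target_minutes: float) -> int:
    """Minimum parallel runners to finish within the target, assuming the
    suite shards evenly and container start-up time is negligible."""
    return math.ceil(total_minutes / target_minutes)

def sharded_runtime(total_minutes: float, runners: int) -> float:
    """Idealized wall-clock time when the suite is split across runners."""
    return total_minutes / runners

# The 346-case suite took over 17 hours serially; under this idealized
# model, a one-hour wall clock needs at least 17 parallel runners.
print(runners_needed(17 * 60, 60))
print(sharded_runtime(17 * 60, 20))
```

In practice the real gain depends on the slowest shard and on container start-up overhead, which is why the study measures actual reduction rather than relying on this idealized model.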
886 Determining the Thermal Performance and Comfort Indices of a Naturally Ventilated Room with Reduced Density Reinforced Concrete Wall Construction over Conventional M-25 Grade Concrete
Authors: P. Crosby, Shiva Krishna Pavuluri, S. Rajkumar
Abstract:
Purpose: Occupied built-up space can be broadly classified as air-conditioned or naturally ventilated. Regardless of building type, the objective of all occupied built-up space is to provide a thermally acceptable environment for human occupancy. Air-conditioned spaces allow a greater degree of flexibility to control and modulate comfort parameters during the operation phase. In naturally ventilated spaces, however, a number of design features favoring indoor thermal comfort must be conceptualized starting from the design phase. One primary design feature to prioritize is the selection of the building envelope material, as it governs the flow of energy from the outside environment into occupied spaces. Research Methodology: In India and many countries across the globe, the standard material for the building envelope is reinforced concrete (i.e., M-25 grade concrete). The comfort inside an RC built environment in a warm and humid climate (i.e., mid-day temperatures of 30-35˚C, diurnal variation of 5-8˚C, and RH of 70-90%) is unsatisfying, to say the least. This study focuses on the impact of the mix design of conventional M-25 grade concrete on indoor thermal comfort. In the proposed mix design, air entrainment is introduced to reduce the density of M-25 grade concrete to the range of 2000 to 2100 kg/m³. Thermal performance parameters and indoor comfort indices are analyzed for the proposed mix and compared with conventional M-25 grade. Diverse methodologies govern indoor comfort calculation; this study adopts three approaches: a) the Indian Adaptive Thermal Comfort model, b) the Tropical Summer Index (TSI), and c) air temperature below 33˚C with RH below 70%. The data required for the thermal comfort study were acquired by field measurement (for the new mix design) and by simulation using DesignBuilder (for the conventional concrete grade). Findings: The analysis indicates that the Tropical Summer Index is more stringent in determining the occupant comfort band, while also providing leverage in the thermally tolerable band over and above the other methodologies in the context of this study. Another important finding is that the new mix design ensures a 10% reduction in indoor air temperature (IAT) relative to the outdoor dry bulb temperature (ODBT) during the day. This translates to a significant temperature difference of 6˚C between IAT and ODBT.
Keywords: Indian adaptive thermal comfort, indoor air temperature, thermal comfort, tropical summer index
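Criterion (c) above is simple enough to state as code. This is a minimal sketch of that comfort check, for illustration only; it is not part of the study's toolchain.

```python
def is_comfortable(air_temp_c: float, rh_percent: float) -> bool:
    """Comfort criterion (c): indoor air temperature below 33 degrees C
    and relative humidity below 70 percent."""
    return air_temp_c < 33.0 and rh_percent < 70.0

# Example readings for a warm and humid climate
print(is_comfortable(31.5, 65.0))  # both limits satisfied
print(is_comfortable(34.0, 65.0))  # temperature limit exceeded
```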
885 Case Study of Mechanised Shea Butter Production in South-Western Nigeria Using the LCA Approach from Gate-to-Gate
Authors: Temitayo Abayomi Ewemoje, Oluwamayowa Oluwafemi Oluwaniyi
Abstract:
Agriculture and food processing are among the largest industrial sectors in energy use, and a large amount of gases from their fuel combustion technologies is released into the environment. The choice of input energy supply not only directly affects the environment but also poses a threat to human health. The study was therefore designed to assess each unit production process in order to identify hotspots, using the life cycle assessment (LCA) approach, in South-western Nigeria. Data such as machine power ratings, operation durations, and the inputs and outputs of shea butter materials for unit processes, obtained on site, were used to model the Life Cycle Impact Analysis in the GaBi6 (Holistic Balancing) software. Four scenarios were drawn for the impact assessments: material sourcing from Kaiama (Scenarios 1, 3) and Minna (Scenarios 2, 4), with different heat supply sources (Liquefied Petroleum Gas 'LPG' in Scenarios 1, 2 and a 10.8 kW diesel heater in Scenarios 3, 4). Shea butter production was modelled in GaBi6 for a functional unit of 1 kg of shea butter, and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) midpoint assessment was used to analyse the life cycle inventories of the four scenarios. Eight impact categories were observed in all four scenarios, of which three had the greatest impacts on the environment in Scenarios 1-4, respectively: Global Warming Potential (GWP) (0.613, 0.751, 0.661, 0.799) kg CO2-Equiv., Acidification Potential (AP) (0.112, 0.132, 0.129, 0.149) kg H+ moles-Equiv., and Smog (0.044, 0.059, 0.049, 0.063) kg O3-Equiv. Impacts from transportation activities were also seen to contribute strongly to these impact categories, due to the large volume of petrol combusted, leading to releases of gases such as CO2, CH4, N2O, SO2, and NOx during the transportation of the purchased raw shea kernels.
The ratio of the transportation distance to the production site from Minna versus Kaiama was approximately 3.5. The shea butter unit processes with the greatest impacts in all categories, in ascending order of magnitude, were packaging, milling, and churning; these were identified as hotspots that may require attention. For the 1 kg shea butter functional unit, it was inferred that locating the production site at the shortest travelling distance to raw material sourcing and combusting LPG for heating would reduce all the assessed impact categories.
Keywords: GaBi6, life cycle assessment, shea butter production, TRACI
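The per-kilogram TRACI results quoted above can be tabulated to confirm the scenario ranking. The numbers are from the abstract; the helper below is only an illustrative sketch, not the GaBi6 workflow.

```python
# TRACI midpoint results per 1 kg of shea butter, as reported above
impacts = {
    "GWP (kg CO2-eq)": {1: 0.613, 2: 0.751, 3: 0.661, 4: 0.799},
    "AP (kg H+ moles-eq)": {1: 0.112, 2: 0.132, 3: 0.129, 4: 0.149},
    "Smog (kg O3-eq)": {1: 0.044, 2: 0.059, 3: 0.049, 4: 0.063},
}

def best_scenario(category: str) -> int:
    """Scenario number with the lowest burden in the given category."""
    scores = impacts[category]
    return min(scores, key=scores.get)

# Scenario 1 (Kaiama sourcing, LPG heat) is lowest in every category,
# consistent with the abstract's conclusion.
for cat in impacts:
    print(cat, "->", best_scenario(cat))
```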
884 Applying GIS Geographic Weighted Regression Analysis to Assess Local Factors Impeding Smallholder Farmers from Participating in Agribusiness Markets: A Case Study of Vihiga County, Western Kenya
Authors: Mwehe Mathenge, Ben G. J. S. Sonneveld, Jacqueline E. W. Broerse
Abstract:
Smallholder farmers are important drivers of agricultural productivity, food security, and poverty reduction in Sub-Saharan Africa. However, they face myriad challenges in their efforts to participate in agribusiness markets. How geographically explicit factors at the local level interact to impede smallholder farmers' decision to participate (or not) in agribusiness markets is not well understood. Deconstructing the spatial complexity of the local environment could provide deeper insight into how geographically explicit determinants promote or impede resource-poor smallholder farmers from participating in agribusiness. This paper's objective was to identify, map, and analyze local spatial autocorrelation in the factors that impede poor smallholders from participating in agribusiness markets. Data were collected using geocoded, researcher-administered survey questionnaires from 392 households in Western Kenya. Three spatial statistics methods in a geographic information system (GIS) were used to analyze the data: Global Moran's I, Cluster and Outlier Analysis (Anselin Local Moran's I), and geographically weighted regression. The Global Moran's I results reveal spatial patterns in the dataset that were not caused by spatial randomness. Subsequently, the Anselin Local Moran's I results identified spatially and statistically significant local clustering (hot spots and cold spots) in the factors hindering smallholder participation. Finally, the geographically weighted regression results unearthed the specific geographically explicit factors impeding market participation in the study area. The results confirm that geographically explicit factors are indispensable in influencing smallholder farming decisions, and policymakers should take cognizance of them.
Additionally, this research demonstrated how geospatially explicit analysis conducted at the local level, using geographically disaggregated data, can help identify households and localities where the most impoverished and resource-poor smallholder households reside. In designing spatially targeted interventions, policymakers could benefit from geospatial analysis methods to understand the complex geographic factors and processes that interact to influence smallholder farmers' decision-making processes and choices.
Keywords: agribusiness markets, GIS, smallholder farmers, spatial statistics, disaggregated spatial data
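As an illustration of the first of the three methods named above, Global Moran's I for an attribute under a binary spatial-weights matrix can be computed directly. This is a generic textbook sketch with made-up toy data, not the study's household data or its GIS workflow.

```python
def morans_i(x, w):
    """Global Moran's I for attribute values x under spatial weights w
    (square matrix, zero diagonal). Values near +1 indicate clustering,
    values near 0 spatial randomness, negative values dispersion."""
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]
    w_sum = sum(sum(row) for row in w)
    num = n * sum(w[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    den = w_sum * sum(d * d for d in z)
    return num / den

# Four locations on a line with binary contiguity weights; smoothly
# increasing values are positively spatially autocorrelated.
w_path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
i_stat = morans_i([1.0, 2.0, 3.0, 4.0], w_path)
print(round(i_stat, 4))
```

Significance is then judged against the expected value under spatial randomness, which is how the study rules out randomly generated patterns.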
883 An Overview of PFAS Treatment Technologies with an In-Depth Analysis of Two Case Studies
Authors: Arul Ayyaswami, Vidhya Ramalingam
Abstract:
Per- and polyfluoroalkyl substances (PFAS) have emerged as a significant environmental concern due to their ubiquity and persistence in the environment. Their chemical characteristics and adverse effects on human health demand more effective and sustainable remediation solutions. The work presented here encompasses an overview of treatment technologies, with two case studies that apply effective approaches to PFAS-contaminated media. Current options for treating PFAS compounds include activated carbon adsorption, ion exchange, membrane filtration, advanced oxidation processes, electrochemical treatment, and precipitation and coagulation. In the first case study, a pilot application of colloidal activated carbon (CAC) was completed to address PFAS from aqueous film-forming foam (AFFF) used to extinguish a large fire. The pilot study demonstrated the effectiveness of a CAC in situ permeable reactive barrier (PRB) in stopping the migration of PFOS and PFOA moving from the source area at high concentrations. Before the CAC PRB installation, an injection test using fluorescein dye was conducted to determine the primary fracture-induced groundwater flow pathways. A straddle packer injection delivery system was used to isolate discrete intervals and gain resolution over the 70-foot saturated zone targeted for treatment. Flow rates were adjusted, and aquifer responses were recorded for each interval. The injection test results were used to design the pilot test injection plan for the CAC PRB. Following the CAC PRB application, the combined initial PFOS and PFOA concentration of 91,400 ng/L was reduced to approximately 70 ng/L (a 99.9% reduction) within one month of the injection event. The results demonstrate the remedy's effectiveness in quickly and safely containing high concentrations of PFAS in fractured bedrock, reducing the risk to downgradient receptors.
The second study involves developing a reductive defluorination treatment process using UV light and an electron acceptor. This experiment indicates significant potential for treating PFAS-contaminated waste media such as landfill leachates. The technology also shows a promising way of tackling these contaminants without the need for secondary waste disposal or any additional pre-treatment.
Keywords: per- and polyfluoroalkyl substances (PFAS), colloidal activated carbon (CAC), destructive PFAS treatment technology, aqueous film-forming foam (AFFF)
882 Factors Affecting Air Surface Temperature Variations in the Philippines
Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya
Abstract:
Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increasing global mean temperature of recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and to determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering 56 years, from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, showing 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with the smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation in different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air masses over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature.
However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a strong positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to examine the viability of using three high-resolution datasets in future climate analysis and in model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. They are, however, more representative of regional temperature and not a substitute for station-observed air temperature.
Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number
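The non-parametric Mann-Kendall test named above can be sketched in a few lines. This generic version, on a short toy series, omits the tie and seasonality corrections that a proper analysis of 56 years of monthly station data would need.

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: S statistic and normal-approximation Z.
    Positive S indicates an increasing trend; |Z| > 1.96 is significant
    at p < 0.05. Tie and seasonality corrections are omitted."""
    n = len(series)
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A short, mostly rising toy series: S is positive and the trend significant.
s_stat, z_stat = mann_kendall([0.1, 0.3, 0.2, 0.4, 0.5, 0.7])
print(s_stat, round(z_stat, 3))
```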
881 Robust Processing of Antenna Array Signals under Local Scattering Environments
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
An adaptive array beamformer is designed to automatically preserve desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to implementation, the computational complexity of adaptive beamforming is enormous when the array is equipped with massive antenna array sensors. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, conventional GSC-based adaptive beamformers have been shown to be very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required to obtain an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created.
Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for adaptive beamforming can be easily found. Using the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch
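The quiescent weight vector and blocking matrix mentioned above have a standard textbook construction once a steering vector is in hand. The sketch below shows that generic construction for an assumed uniform linear array; it is not the specific iterative estimator proposed in the paper.

```python
import numpy as np

def gsc_components(a: np.ndarray):
    """Quiescent weight vector and blocking matrix of a generalized
    sidelobe canceller (GSC) for a presumed steering vector a:
    w_q passes the presumed direction with unit gain, while the columns
    of B span the orthogonal complement of a and block that direction."""
    a = a.reshape(-1, 1)
    w_q = a / (a.conj().T @ a)            # distortionless: w_q^H a = 1
    _, _, vh = np.linalg.svd(a.conj().T)  # null space of a^H via the SVD
    b = vh.conj().T[:, 1:]
    return w_q, b

# Uniform linear array: 8 sensors, half-wavelength spacing, DOA 20 degrees
n = 8
steer = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(20.0)))
w_q, blk = gsc_components(steer)
gain = (w_q.conj().T @ steer.reshape(-1, 1)).item()            # unit gain on SOI
leakage = np.linalg.norm(blk.conj().T @ steer.reshape(-1, 1))  # ~0: SOI blocked
```

A steering-vector mismatch breaks exactly these two properties, since the true SOI then leaks through B and is adaptively cancelled, which is the degradation the paper addresses.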
880 Evidence-Triggers for Care of Patients with Cleft Lip and Palate in Srinagarind Hospital: The Tawanchai Center and Out-Patients Surgical Room
Authors: Suteera Pradubwong, Pattama Surit, Sumalee Pongpagatip, Tharinee Pethchara, Bowornsilp Chowchuen
Abstract:
Background: Cleft lip and palate (CLP) is a congenital anomaly of the lip and palate caused by several factors. It is found in approximately one per 500 to 550 live births, depending on nationality and socioeconomic status. The Tawanchai Center and the out-patient surgical room of Srinagarind Hospital are responsible for providing care to patients with CLP (from birth to adolescence) and their caregivers. From observations and interviews, nurses working in these units reported that both patients and their caregivers confronted many problems affecting their physical and mental health. Based on Soukup's model (2000), the researchers used evidence triggers from clinical practice (practice triggers) and the related literature (knowledge triggers) to investigate the problems. Objective: The purpose of this study was to investigate the problems of care for patients with CLP in the Tawanchai Center and the out-patient surgical room of Srinagarind Hospital. Material and Method: A descriptive method was used. For practice triggers, the researchers obtained data from the medical records of ten patients with CLP and from interviews with two patients with CLP, eight caregivers, two nurses, and two assistant workers. The interview instruments consisted of a demographic data form and a semi-structured questionnaire. For knowledge triggers, the researchers used a literature search. Data from both practice and knowledge triggers were collected between February and May 2016. The quantitative data were analyzed through frequency and percentage distributions, and the qualitative data were analyzed through content analysis.
Results: The problems of care identified from the practice and knowledge triggers were consistent and holistic, including 1) insufficient feeding; 2) risks of respiratory tract infections and physical disorders; 3) psychological problems, such as anxiety, stress, and distress; 4) socioeconomic problems, such as stigmatization, isolation, and loss of income; 5) spiritual problems, such as low self-esteem and low quality of life; 6) school absence and learning limitations; 7) lack of knowledge about CLP and its treatments; 8) misunderstanding of roles within the multidisciplinary team; 9) unavailable services; and 10) a shortage of healthcare professionals, especially speech-language pathologists (SLPs). Conclusion: The evidence triggers show that the problems of care affect the patients and their caregivers holistically. Integrated long-term care by a multidisciplinary team is needed for children with CLP from birth to adolescence. Nurses should provide effective care to these patients and their caregivers by using a holistic approach and working collaboratively with the other healthcare providers in the multidisciplinary team.
Keywords: evidence-triggers, cleft lip, cleft palate, problems of care
879 Lying in a Sender-Receiver Deception Game: Effects of Gender and Motivation to Deceive
Authors: Eitan Elaad, Yeela Gal-Gonen
Abstract:
Two studies examined gender differences in lying when the truth-telling bias prevailed and when lying and distrust were encouraged. The first study used 156 participants from the community (78 pairs). First, participants completed the Narcissistic Personality Inventory, the Lie- and Truth Ability Assessment Scale (LTAAS), and the Rational-Experiential Inventory. Then they participated in a deception game in which they performed as senders and receivers of true and false communications. Their goal was to retain as many points as possible according to a payoff matrix that specified the reward they would gain for each possible outcome. Results indicated that males in the sender position lied more and were more successful tellers of lies and truths than females. On the other hand, males as receivers trusted less than females but were no better at detecting lies and truths. We explained the results by (a) males' high perceived lie-telling ability: we observed that confidence in telling lies guided participants to increase their use of lies, and males' lie-telling confidence corresponded to earlier accounts showing a consistent association between high self-assessed lying ability, reports of frequent lying, and predictions of actual lying in experimental settings; (b) males' narcissistic features: earlier accounts described positive relations between narcissism and reported lying or unethical behavior in everyday life situations, predictions about the association between narcissism and frequent lying received support in the present study, and males scored higher than females on the narcissism scale; and (c) males' experiential thinking style: males scored higher than females on the experiential thinking style scale, and we further hypothesized, and the results confirmed, that an experiential thinking style predicts frequent lying in the deception game. The second study used one hundred volunteers (40 females) who underwent the same procedure.
However, the payoff matrix encouraged lying and distrust. Results showed that male participants lied more than females. We found no gender differences in trust, and males and females did not differ in their success at telling and detecting lies and truths. Participants also completed the LTAAS questionnaire. Males assessed their lie-telling ability higher than females, but the ability assessment did not predict lying frequency. A final note: the present design is limited to low stakes. Participants knew they were playing a game and would not experience any consequences from their deception in it. We therefore advise caution when applying the present results to lying under high stakes.
Keywords: gender, lying, detection of deception, information processing style, self-assessed lying ability
878 The Aromaticity of P-Substituted O-(N-Dialkyl)Aminomethylphenols
Authors: Khodzhaberdi Allaberdiev
Abstract:
Aromaticity, one of the most important concepts in organic chemistry, has attracted considerable interest from both experimentalists and theoreticians. Geometry optimizations of p-substituted o-(N-dialkyl)aminomethylphenols (o-DEAMPHs), XC₆H₅CH₂Y (X = p-OCH₃, CH₃, H, F, Cl, Br, COCH₃, COOCH₃, CHO, CN, and NO₂; Y = o-N(C₂H₅)₂), were performed in the gas phase at the B3LYP/6-311+G(d,p) level. The aromaticities of the considered molecules were investigated using different indices, including geometrical (HOMA and Bird), electronic (FLU, PDI, and SA), and magnetic (NICS(0), NICS(1), and NICS(1)zz) indices. Linear dependencies were obtained between some aromaticity indices; the best correlation is observed between the Bird and PDI indices (R² = 0.9240). However, not all types of indices, or even different indices within the same type, correlate well with each other. Surprisingly, for the studied molecules in which the geometrical and electronic indices cannot correctly capture the aromaticity of the ring, the magnetism-based indices successfully predict the aromaticity of the systems. ¹H NMR spectra of the compounds were obtained at the B3LYP/6-311+G(d,p) level using the GIAO method. An excellent linear correlation (R² = 0.9996) between the experimental ¹H NMR chemical shifts of the hydrogen atom and those calculated at B3LYP/6-311+G(d,p) demonstrates a good assignment of the experimental chemical shift values to the calculated o-DEAMPH structures. The best linear correlation with the Hammett substituent constants is observed for the NICS(1)zz index in comparison with the other indices: NICS(1)zz = -21.5552 + 1.1070 σp⁻ (R² = 0.9394). The presence of an intramolecular hydrogen bond in the studied molecules also changes the aromatic character of the substituted o-DEAMPHs. For R = NO₂, the HOMA index predicted a 3.4% reduction in π-electron delocalization, about double that observed for p-nitrophenol.
The influence of intramolecular H-bonding on the aromaticity of the benzene ring in the ground state (S0) is described by equations between NICS(1)zz and the H-bond energies: experimental, Eₑₓₚ; predicted IR-spectroscopic, Eν; and topological, EQTAIM, with correlation coefficients R² = 0.9666, R² = 0.9028, and R² = 0.8864, respectively. The NICS(1)zz index also correlates with the usual descriptors of the hydrogen bond, while the other indices do not give any meaningful results. The influence of intramolecular H-bond formation on the aromaticity of some substituted o-DEAMPHs provides a criterion for considering the multidimensional character of aromaticity. Linear relationships were also revealed between NICS(1)zz and both the pyramidality of the nitrogen atom, ΣN(C₂H₅)₂, and the dihedral angle φ(CAr–CAr–CCH₂–N), characterizing out-of-plane properties. These results demonstrate the nonplanar structure of the o-DEAMPHs. Finally, when considering the NICS(1)zz dependencies, the data for R = H were excluded, because the NICS(1) and NICS(1)zz values are the most negative for the unsubstituted DEAMPH, indicating its highest aromaticity; this was not the case for the NICS(0) index.
Keywords: aminomethylphenols, DFT, aromaticity, correlations
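The substituent-constant correlations reported above are ordinary least-squares fits. A generic sketch of such a fit, together with the reported NICS(1)zz regression line, is shown below; the toy data points are illustrative, and the study's actual Hammett constants and NICS values are not reproduced here.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ a + b*x, returning (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

def nics1zz_from_hammett(sigma_p):
    """Reported regression: NICS(1)zz = -21.5552 + 1.1070 * sigma_p."""
    return -21.5552 + 1.1070 * sigma_p

# Perfectly linear toy data recover slope 2, intercept 1, R^2 = 1
print(fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]))
```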
877 Optimal Applications of Solar Energy Systems: Comparative Analysis of Ground-Mounted and Rooftop Solar PV Installations in Drought-Prone and Residential Areas of the Indian Subcontinent
Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhyay
Abstract:
The increasing demand for environmentally friendly energy solutions highlights the need to optimize solar energy systems. This study compares two types of solar energy systems: ground-mounted solar panels for drought-prone locations and rooftop solar PV installations of 300 sq. ft. (approx. 28 sq. m.). Such a rooftop installation's electricity output of 4,730 kWh/year saves about ₹14,191/year. As a clean and sustainable energy source, solar power is pivotal in reducing greenhouse gas emissions (a CO2 reduction of about 85 tonnes over 25 years) and combating climate change. The "PM Surya Ghar: Muft Bijli Yojana" initiative seeks to empower Indian homes by giving free access to solar energy; it is part of the Indian government's larger effort to encourage clean and renewable energy sources while reducing reliance on traditional fossil fuels. This report reviews various installations and government reports to analyse the performance and impact of both ground-mounted and rooftop solar systems. It also examines the effectiveness of government subsidy programs for residential on-grid solar systems, including the ₹78,000 incentive for systems above 3 kW, and the subsidy schemes available for domestic agricultural grid use: systems up to 3 kW receive ₹43,764, while systems over 10 kW receive a fixed subsidy of ₹94,822. Households can save a substantial amount of energy and minimize their reliance on grid electricity by installing the appropriate solar plant capacity. In terms of monthly household consumption, the suggested rooftop solar plant capacities are 1-2 kW for 0-150 units, 2-3 kW for 150-300 units, and above 3 kW for more than 300 units. Ground-mounted panels, particularly in arid regions, offer benefits such as scalability and optimal orientation, but face challenges like land use conflicts and environmental impact, particularly in drought-prone regions.
By evaluating the distinct advantages and challenges of each system, this study aims to provide insights into their optimal applications, guiding stakeholders in making informed decisions to enhance solar energy efficiency and sustainability within regulatory constraints. This research also explores the implications of regulations, such as Italy's ban on ground-mounted solar panels on productive agricultural land, for solar energy strategies.
Keywords: sustainability, solar energy, subsidy, rooftop solar energy, renewable energy
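The capacity bands and the savings figure quoted above can be written down directly. The flat tariff of ₹3/kWh used below is an inference from the abstract's own numbers (4,730 kWh/year against roughly ₹14,191/year), not a value stated in the study.

```python
def recommended_capacity(monthly_units: float) -> str:
    """Rooftop plant capacity band for a household's monthly consumption
    (units = kWh), following the bands given in the abstract."""
    if monthly_units <= 150:
        return "1-2 kW"
    if monthly_units <= 300:
        return "2-3 kW"
    return "above 3 kW"

def annual_savings_inr(annual_kwh: float, tariff: float = 3.0) -> float:
    """Savings at a flat tariff; 3 INR/kWh is inferred, not from the study."""
    return annual_kwh * tariff

print(recommended_capacity(120))  # low-consumption household
print(annual_savings_inr(4730))   # close to the reported yearly saving
```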
876 Assessment of Urban Environmental Noise in Urban Habitat: A Spatial Temporal Study
Authors: Neha Pranav Kolhe, Harithapriya Vijaye, Arushi Kamle
Abstract:
The economic growth engines are urban regions. As the economy expands, so does the need for peace and quiet, and noise pollution is one of the important social and environmental issue. Health and wellbeing are at risk from environmental noise pollution. Because of urbanisation, population growth, and the consequent rise in the usage of increasingly potent, diverse, and highly mobile sources of noise, it is now more severe and pervasive than ever before, and it will only become worse. Additionally, it will expand as long as there is an increase in air, train, and highway traffic, which continue to be the main contributors of noise pollution. The current study will be conducted in two zones of class I city of central India (population range: 1 million–4 million). Total 56 measuring points were chosen to assess noise pollution. The first objective evaluates the noise pollution in various urban habitats determined as formal and informal settlement. It identifies the comparison of noise pollution within the settlements using T- Test analysis. The second objective assess the noise pollution in silent zones (as stated in Central Pollution Control Board) in a hierarchical way. It also assesses the noise pollution in the settlements and compares with prescribed permissible limits using class I sound level equipment. As appropriate indices, equivalent noise level on the (A) frequency weighting network, minimum sound pressure level and maximum sound pressure level were computed. The survey is conducted for a period of 1 week. Arc GIS is used to plot and map the temporal and spatial variability in urban settings. It is discovered that noise levels at most stations, particularly at heavily trafficked crossroads and subway stations, were significantly different and higher than acceptable limits and squares. The study highlights the vulnerable areas that should be considered while city planning. The study demands area level planning while preparing a development plan. 
It also demands attention to noise pollution from the perspective of residential and silent zones. City planning in urban areas neglects noise pollution assessment at the city level; as a result, irrespective of noise pollution guidelines, the ground reality is far from their applicability, producing incompatible land use at the neighbourhood scale with respect to noise pollution. The study's final results will be useful to policymakers, architects, and administrators in developing countries, supporting efficient decision making and policy formulation for the governance of noise pollution in urban habitats. Keywords: noise pollution, formal settlements, informal settlements, built environment, silent zone, residential area
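The A-weighted equivalent continuous level (Leq) used as an index in the study above is the energy average of the sampled levels. A minimal sketch of that computation, with the dB(A) readings invented for illustration:

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level: energy average of dB(A) samples."""
    # Convert each level to relative energy, average, convert back to dB.
    energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(energy)

samples = [62.0, 65.0, 71.0, 68.0]   # hypothetical dB(A) readings at one station
print(round(leq(samples), 1))        # 67.7
```

Because the average is taken on the energy scale, a few loud events dominate Leq, which is why heavily trafficked crossroads stand out in such surveys.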
Procedia PDF Downloads 118
875 Molecular Characterization of Listeria monocytogenes from Fresh Fish and Fish Products
Authors: Beata Lachtara, Renata Szewczyk, Katarzyna Bielinska, Kinga Wieczorek, Jacek Osek
Abstract:
Listeria monocytogenes is an important human and animal pathogen that causes foodborne outbreaks. The bacteria may be present in different types of food: cheese, raw vegetables, sliced meat products and vacuum-packed sausages, poultry, meat, and fish. The method most commonly used to investigate the genetic diversity of L. monocytogenes is pulsed-field gel electrophoresis (PFGE). This technique is reliable and reproducible and is established as the gold standard for typing L. monocytogenes. The aim of the study was the characterization, by molecular serotyping and PFGE analysis, of L. monocytogenes strains isolated from fresh fish and fish products in Poland. A total of 301 samples, including fresh fish (n = 129) and fish products (n = 172), were collected between January 2014 and March 2016. The bacteria were detected using the ISO 11290-1 standard method. Molecular serotyping was performed with PCR. The isolates were typed with the PFGE method according to the protocol developed by the European Union Reference Laboratory for L. monocytogenes, with some modifications. Based on the PFGE profiles, two dendrograms were generated for strains digested separately with the two restriction enzymes AscI and ApaI. Analysis of the fingerprint profiles was performed using BioNumerics software version 6.6 (Applied Maths, Belgium). A 95% similarity threshold was applied to differentiate the PFGE pulsotypes. The study revealed that 57 of 301 (18.9%) samples were positive for L. monocytogenes. The bacteria were identified in 29 (50.9%) ready-to-eat fish products and in 28 (49.1%) fresh fish samples. It was found that 40 (70.2%) strains were of serotype 1/2a, 14 (24.6%) of 1/2b, two (3.5%) of 4b, and one (1.8%) of 1/2c. Serotypes 1/2a, 1/2b, and 4b were present at similar frequencies in both categories of food, whereas serotype 1/2c was detected only in fresh fish. The PFGE analysis with AscI demonstrated 43 different pulsotypes; among them, 33 (76.7%) were represented by only one strain. 
The remaining 10 profiles contained more than one isolate: eight pulsotypes comprised two L. monocytogenes isolates, one profile three isolates, and one restriction type five strains. In the case of ApaI typing, the PFGE analysis showed 27 different pulsotypes, including 17 (63.0%) types represented by only one strain. Ten (37.0%) clusters contained more than one strain, among which four profiles covered two strains, three had three isolates, one had five strains, one had eight strains, and one had ten isolates. It was observed that isolates assigned to the same PFGE type were usually of the same serotype (1/2a or 1/2b). The majority of the clusters contained strains from both sources (fresh fish and fish products) isolated at different times. Most of the strains grouped in one cluster by the AscI restriction were assigned to the same groups in the ApaI investigation. In conclusion, the PFGE used in this study showed a high genetic diversity among L. monocytogenes. The strains were grouped into varied clonal clusters, which may suggest different sources of contamination. The results demonstrated that serotype 1/2a was the most common among isolates from fresh fish and fish products in Poland. Keywords: Listeria monocytogenes, molecular characteristic, PFGE, serotyping
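The 95% similarity threshold used above to define pulsotypes is typically applied to a band-matching coefficient such as the Dice coefficient. A minimal sketch, with hypothetical band positions and without the position-tolerance settings a package like BioNumerics applies:

```python
def dice_similarity(bands_a, bands_b):
    """Dice coefficient (%) between two sets of restriction-band positions."""
    shared = len(bands_a & bands_b)
    return 200.0 * shared / (len(bands_a) + len(bands_b))

# Hypothetical AscI band positions (kb) for two isolates
isolate1 = {48.5, 97.0, 145.5, 194.0, 242.5}
isolate2 = {48.5, 97.0, 145.5, 194.0, 291.0}
sim = dice_similarity(isolate1, isolate2)
print(sim)                    # 80.0
same_pulsotype = sim >= 95.0  # threshold used in the study
print(same_pulsotype)         # False
```

Under this coefficient, a single band difference between otherwise identical profiles is usually enough to fall below 95% and define a separate pulsotype.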
Procedia PDF Downloads 289
874 Prevalence of Fast-Food Consumption on Overweight or Obesity on Employees (Age Between 25-45 Years) in Private Sector; A Cross-Sectional Study in Colombo, Sri Lanka
Authors: Arosha Rashmi De Silva, Ananda Chandrasekara
Abstract:
This study seeks to comprehensively examine the influence of fast-food consumption and physical activity levels on the body weight of young employees within the private sector of Sri Lanka. The escalating popularity of fast food has raised concerns about its nutritional content and associated health ramifications. To investigate this phenomenon, a cohort of 100 individuals aged between 25 and 45, employed in Sri Lanka's private sector, participated in this research. These participants provided socio-demographic data through a standardized questionnaire, enabling the characterization of their backgrounds. Additionally, participants disclosed their frequency of fast-food consumption and engagement in physical activities, utilizing validated assessment tools. The collected data were compiled into an Excel spreadsheet and subjected to rigorous statistical analysis. Descriptive statistics, such as percentages and proportions, were employed to delineate the body weight status of the participants. Employing chi-square tests, our study identified significant associations between fast-food consumption, levels of physical activity, and body weight categories. Furthermore, through binary logistic regression analysis, potential risk factors contributing to overweight and obesity within the young employee cohort were elucidated. Our findings revealed a disconcerting trend, with 6% of participants classified as underweight, 32% within the normal weight range, and a substantial 62% categorized as overweight or obese. These outcomes underscore the alarming prevalence of overweight and obesity among young private-sector employees, particularly within the bustling urban landscape of Colombo, Sri Lanka. The data strongly imply a robust correlation between fast-food consumption, sedentary behaviors, and higher body weight categories, reflective of the evolving lifestyle patterns associated with the nation's economic growth. 
This study emphasizes the urgent need for effective interventions to counter the detrimental effects of fast-food consumption. Awareness campaigns elucidating the adverse health consequences of fast food, coupled with comprehensive nutritional education, can empower individuals to make informed dietary choices. Workplace interventions, including the provision of healthier meal alternatives and the facilitation of physical activity opportunities, are essential to fostering a healthier workforce and mitigating the escalating burden of overweight and obesity in Sri Lanka. Keywords: fast food consumption, obese, overweight, physical activity level
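The chi-square test of association used in the analysis above can be sketched in plain Python for a 2x2 table. The counts below are hypothetical, not the study's data; in practice a library routine such as scipy.stats.chi2_contingency would also return the p-value:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = frequent / infrequent fast-food consumers,
# columns = overweight-or-obese / not overweight
table = [[40, 10],
         [22, 28]]
print(round(chi_square_statistic(table), 2))  # 13.75
```

A statistic of 13.75 exceeds the 3.84 critical value for one degree of freedom at alpha = 0.05, so an association of this size would be judged significant.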
Procedia PDF Downloads 50
873 The Potential of Edaphic Algae for Bioremediation of the Diesel-Contaminated Soil
Authors: C. J. Tien, C. S. Chen, S. F. Huang, Z. X. Wang
Abstract:
Algae in soil ecosystems can produce organic matter and oxygen by photosynthesis. Heterocyst-forming cyanobacteria can fix nitrogen to increase soil nitrogen contents. Secretion of mucilage by some algae increases soil water content and soil aggregation. These actions improve soil quality and fertility and further increase the abundance and diversity of soil microorganisms. In addition, some mixotrophic and heterotrophic algae are able to degrade petroleum hydrocarbons. Therefore, the objectives of this study were to analyze the effects of algal addition on the degradation of total petroleum hydrocarbons (TPH) and on the diversity and activity of bacteria and algae in diesel-contaminated soil under different nutrient contents and frequencies of plowing and irrigation, in order to assess the potential of a bioremediation technique using edaphic algae. A known amount of diesel was added to the farmland soil. This diesel-contaminated soil was subjected to five settings: experiment-1 with algal addition plus plowing and irrigation every two weeks, experiment-2 with algal addition plus plowing and irrigation every four weeks, experiment-3 with algal and nutrient addition plus plowing and irrigation every two weeks, experiment-4 with algal and nutrient addition plus plowing and irrigation every four weeks, and the control without algal addition. Soil samples were taken every two weeks to analyze TPH concentrations, the diversity of bacteria and algae, and catabolic genes encoding functional degrading enzymes. The results show that the TPH removal rates of the five settings after the two-month experimental period were in the order: experiment-2 > experiment-4 > experiment-3 > experiment-1 > control. This indicated that algal addition enhanced the degradation of TPH in the diesel-contaminated soil, whereas nutrient addition did not. Plowing and irrigation every four weeks resulted in more TPH removal than every two weeks. 
The banding patterns of denaturing gradient gel electrophoresis (DGGE) revealed an increase in the diversity of bacteria and algae after algal addition. Three petroleum hydrocarbon-degrading algae (Anabaena sp., Oscillatoria sp. and Nostoc sp.) and the two added algal strains (Leptolyngbya sp. and Synechococcus sp.) were identified by sequencing prominent DGGE bands. The four hydrocarbon-degrading bacteria Gordonia sp., Mycobacterium sp., Rhodococcus sp. and Alcanivorax sp. were abundant in the treated soils. These results suggested that the growth of indigenous bacteria and algae was improved by adding edaphic algae. Real-time polymerase chain reaction results showed that four catabolic genes encoding catechol 2,3-dioxygenase, toluene monooxygenase, xylene monooxygenase and phenol monooxygenase were present and expressed in the treated soil. The addition of algae increased the expression of these genes at the end of the experiments, promoting the biodegradation of petroleum hydrocarbons. This study demonstrated that edaphic algae are suitable biomaterials for bioremediating diesel-contaminated soils when combined with plowing and irrigation every four weeks. Keywords: catabolic gene, diesel, diversity, edaphic algae
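The abstract above reports TPH removal as a rate over the two-month period. One common way to summarize such data, though not stated in the study itself, is a percent-removal figure together with a fitted first-order degradation constant. A sketch with invented concentrations:

```python
import math

def removal_percent(c0, ct):
    """Percent of TPH removed between initial and final sampling."""
    return 100.0 * (c0 - ct) / c0

def first_order_k(c0, ct, days):
    """First-order degradation constant k (per day), assuming C(t) = C0 * exp(-k*t)."""
    return math.log(c0 / ct) / days

c0, ct = 5000.0, 1500.0                     # hypothetical mg TPH per kg soil
print(round(removal_percent(c0, ct), 1))    # 70.0
print(round(first_order_k(c0, ct, 60), 4))  # 0.0201 per day
```

Comparing fitted k values across the five settings would give a single number per treatment, matching the ordering the study reports.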
Procedia PDF Downloads 280
872 Postharvest Losses and Handling Improvement of Organic Pak-Choi and Choy Sum
Authors: Pichaya Poonlarp, Danai Boonyakiat, C. Chuamuangphan, M. Chanta
Abstract:
Current consumer behavior trends have shifted towards greater health awareness, the well-being of society, and interest in nature and the environment. The Royal Project Foundation therefore pays close attention to organic agriculture. The project focuses on using only natural products and on utilizing the highland's biological merits to increase the produce's resistance to diseases and insects. The project also brought in basic knowledge from a variety of available research, including, but not limited to, improvement of soil fertility and control of insect pests with biological methods, in order to lay a foundation for developing and promoting farmers to grow quality produce with high health safety. This will ultimately lead to sustainability for future highland agriculture and a decrease in chemical use in the highland area, which is a source of natural watersheds. However, there are still shortcomings in postharvest management in terms of quality and losses, such as bruising, rottenness, wilting, and yellowing leaves. These losses negatively affect the maintenance and shelf life of organic vegetables. Therefore, it is important that a research study of appropriate and effective postharvest management be conducted for each organic vegetable to minimize product loss and to find the root causes of postharvest losses, which would contribute to future postharvest management best practices. This can be achieved through surveys and data collection from postharvest processes, followed by an analysis of the causes of postharvest losses of organic pak-choi, baby pak-choi, and choy sum. Consequently, postharvest loss reduction strategies for organic vegetables can be achieved. 
In this study, postharvest losses of organic pak-choi, baby pak-choi, and choy sum were determined at each stage of the supply chain, starting from the field after harvesting, then at the Development Center packinghouse, the Chiang Mai packinghouse, the Bangkok packinghouse, and the Royal Project retail shop in Chiang Mai. The results showed that postharvest losses of organic pak-choi, baby pak-choi, and choy sum were 86.05, 89.05, and 59.03 percent, respectively. The main factors contributing to the losses were mechanical damage and underutilized parts and/or produce falling short of the minimum quality standard. Good practices were developed after the causes of the losses were identified. With appropriate postharvest handling and management, for example temperature control, hygienic cleaning, and a shorter supply chain, postharvest losses of all the organic vegetables should be remarkably reduced. Keywords: postharvest losses, organic vegetables, handling improvement, shelf life, supply chain
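Losses measured at successive stages compound multiplicatively along the chain, which is how stage-level figures roll up into overall totals such as the 86.05 percent reported above. A minimal sketch; the stage values below are hypothetical, not the study's measurements:

```python
def cumulative_loss_percent(stage_losses):
    """Total loss (%) when each stage removes a fraction of what remains."""
    remaining = 1.0
    for loss in stage_losses:
        remaining *= 1.0 - loss / 100.0
    return 100.0 * (1.0 - remaining)

# Hypothetical stage losses (%): field, packinghouses, transport, retail
stages = [30.0, 25.0, 20.0, 15.0]
print(round(cumulative_loss_percent(stages), 2))  # 64.3
```

The compounding means that trimming any single stage, for instance by shortening the supply chain, reduces the overall total disproportionately.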
Procedia PDF Downloads 475
871 Transport Mode Selection under Lead Time Variability and Emissions Constraint
Authors: Chiranjit Das, Sanjay Jharkharia
Abstract:
This study focuses on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since logistics cost reduction and emissions reduction can conflict with each other. Another important aspect of the transportation decision is lead time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive mathematical analytical model for deciding transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. In order to account for lead time variability in the model, two identically normally distributed random variables are incorporated: unit lead time variability and lead time demand variability. Therefore, this study addresses the following questions: How are transport mode selection decisions affected by lead time variability? How does lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed, including unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and penalty cost for stock outs due to lead time variability. A set of modes is available between each pair of nodes; in this paper, we consider only four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static. Each mode has a different emissions level depending on the distance and product characteristics. 
Emissions cost is indirectly affected by lead time variability if there is any switching from a lower-emissions transport mode to a higher-emissions one in order to reduce the penalty cost. We provide a numerical analysis in order to study the effectiveness of the mathematical model. We found that the chance of a stock out during lead time is higher under higher variability of lead time and lead time demand. The numerical results show that the penalty cost of the air transport mode is negligible, meaning the chance of a stock out is practically zero, but it carries higher holding and emissions costs. Therefore, the air transport mode is selected only when there is an emergency order that justifies reducing the penalty cost; otherwise, rail and road transport are the most preferred modes of transportation. Thus, this paper contributes to the literature with a novel approach to deciding the transport mode under emissions cost and lead time variability. This model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, a full truck load strategy, and a demand consolidation strategy. Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection
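The cost structure described above can be sketched with normally distributed lead time demand, where the expected stock-out quantity comes from the standard normal loss function. Every parameter value below is hypothetical, not taken from the paper:

```python
import math

def normal_pdf(z):
    """Standard normal density."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_shortage(mu, sigma, reorder_point):
    """E[max(D - R, 0)] for lead time demand D ~ Normal(mu, sigma)."""
    z = (reorder_point - mu) / sigma
    return sigma * (normal_pdf(z) - z * (1.0 - normal_cdf(z)))

def total_cost(demand, unit_cost, transport_cost, emission_cost,
               holding_rate, penalty, mu_ltd, sigma_ltd, reorder_point):
    """Purchasing + transport + emissions + lead-time holding + stock-out penalty."""
    shortage = expected_shortage(mu_ltd, sigma_ltd, reorder_point)
    return (demand * (unit_cost + transport_cost + emission_cost)
            + holding_rate * mu_ltd
            + penalty * shortage)

# Hypothetical parameters: air has a short, stable lead time but costly
# transport and emissions; rail is cheap but slow and variable.
air = total_cost(1000, 10.0, 3.0, 0.8, 0.2, 50.0,
                 mu_ltd=100.0, sigma_ltd=10.0, reorder_point=130.0)
rail = total_cost(1000, 10.0, 1.0, 0.3, 0.2, 50.0,
                  mu_ltd=400.0, sigma_ltd=80.0, reorder_point=430.0)
print(rail < air)  # True: rail stays cheaper despite its stock-out exposure
```

Under these invented numbers the slower mode remains cheaper overall, mirroring the finding that air is justified only for emergency orders.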
Procedia PDF Downloads 434
870 Dietary Anion-Cation Balance of Grass and Net Acid-Base Excretion in Urine of Suckler Cows
Authors: H. Scholz, P. Kuehne, G. Heckenberger
Abstract:
The Dietary Anion-Cation Balance (DCAB) of grass in grazing systems under German conditions tends to decrease from May until September, and values lower than 100 meq per kg dry matter are often measured. A low DCAB in a grass feeding system can change the metabolic status of suckler cows and often results in an acidotic metabolism. Measurement of acid-base excretion in dairy cows has proved to be a method for evaluating acid-base status. The hypothesis was that metabolic imbalances in suckler cows could be identified by urine measurement. The farm study was conducted during the grazing seasons of 2017 and 2018 and involved 7 suckler cow farms in Germany. The suckler cows grazed during the whole time of the investigation and had no access to other feed components. The cows had free access to water, a salt block, and loose minerals. The dry matter of the grass was determined at 60 °C, and the samples were then analysed for energy and nutrient content and for the DCAB. Urine was collected in 50 ml glasses and analysed in the laboratory for net acid-base excretion (NSBA) and the concentrations of creatinine and urea. Statistical analysis was performed with ANOVA, with the fixed effects of farm (1-7), month (May until September), and number of lactations (1, 2, and ≥ 3), using SPSS version 25.0 for Windows. An alpha of 0.05 was used for all statistical tests. During the grazing periods of 2017 and 2018, an average DCAB of 167 meq per kg DM was observed in the grass, with a very wide variation from -42 meq/kg to +439 meq/kg. Reference values for the DCAB are described as between 150 meq and 400 meq per kg DM. It was found that a high chlorine content combined with a reduced potassium level led to this reduction in the DCAB at the end of the grazing period. 
Between the DCAB of the grass and the NSBA in the urine of the suckler cows, a correlation of r = 0.478 (Pearson, p ≤ 0.001) and r = 0.601 (Spearman, p ≤ 0.001) was observed. For monitoring the urine values of grazing suckler cows, the wide spread of the values poses a challenge for interpretation, especially since the DCAB is unknown. The influence of several feed components such as chlorine, sulfur, potassium, and sodium (the ions of the DCAB) and of dry matter feed intake during the grazing period of suckler cows should be taken into account in further research. The results obtained show that a decrease in the DCAB is related to a decrease in the NSBA in the urine of suckler cows. Monitoring of metabolic disturbances should include analyses of urine, blood, milk, and ruminal fluid. Keywords: dietary anion-cation balance, DCAB, net acid-base excretion, NSBA, suckler cow, grazing period
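The DCAB itself is the milliequivalent difference between the named cations and anions. A minimal sketch of the usual computation; the forage contents below are hypothetical, chosen only to reproduce the late-season pattern of high chlorine and reduced potassium:

```python
# Milliequivalents per gram of each element: valence * 1000 / atomic mass
MEQ_PER_G = {"Na": 1000.0 / 23.0, "K": 1000.0 / 39.1,
             "Cl": 1000.0 / 35.45, "S": 2.0 * 1000.0 / 32.06}

def dcab(na, k, cl, s):
    """DCAB in meq/kg DM from mineral contents given in g/kg DM:
    (Na + K) - (Cl + S), with each content converted to milliequivalents."""
    return (na * MEQ_PER_G["Na"] + k * MEQ_PER_G["K"]
            - cl * MEQ_PER_G["Cl"] - s * MEQ_PER_G["S"])

# Hypothetical late-season grass: high chlorine, reduced potassium (g/kg DM)
print(round(dcab(na=2.0, k=18.0, cl=12.0, s=2.5), 1))  # 52.9, below 100 meq/kg DM
```

With these invented contents the result falls below the 100 meq per kg DM mark, illustrating how the chlorine and potassium shifts described above depress the balance.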
Procedia PDF Downloads 151
869 Genetic Structure Analysis through Pedigree Information in a Closed Herd of the New Zealand White Rabbits
Authors: M. Sakthivel, A. Devaki, D. Balasubramanyam, P. Kumarasamy, A. Raja, R. Anilkumar, H. Gopi
Abstract:
The New Zealand White is one of the most commonly used and well-adapted exotic rabbit breeds in India. Earlier studies were limited to analyzing the environmental factors affecting growth and reproductive performance. In the present study, a population of New Zealand White rabbits in a closed herd was evaluated for its genetic structure. Pedigree information (n=2508) covering 18 years (1995-2012) was utilized for the study. Pedigree analysis and estimation of population genetic parameters based on gene origin probabilities were performed using the software program ENDOG (version 4.8). The analysis revealed that the mean values of the generation interval, the coefficient of inbreeding, and equivalent inbreeding were 1.489 years, 13.233 percent, and 17.585 percent, respectively. The proportion of the population that was inbred was 100 percent. The estimated mean values of average relatedness and the individual increase in inbreeding were 22.727 and 3.004 percent, respectively. The percent increase in inbreeding over generations was 1.94, 3.06, and 3.98 when estimated through maximum generations, equivalent generations, and complete generations, respectively. The number of ancestors contributing 50% of the genes (fₐ₅₀) to the gene pool of the reference population was 4, which might have led to the reduction in genetic variability and the increased inbreeding. The extent of the genetic bottleneck, assessed by calculating the effective number of founders (fₑ) and the effective number of ancestors (fₐ) and expressed as the fₑ/fₐ ratio, was 1.1, which is indicative of the absence of stringent bottlenecks. Up to the 5th generation, 71.29 percent of the pedigree was complete, reflecting well-maintained pedigree records. The maximum number of known generations was 15, with an average of 7.9, and the average number of equivalent generations traced was 5.6, indicating a fairly good pedigree depth. 
The realized effective population size was 14.93, which is very critical, and with the increasing trend of inbreeding the situation is expected to worsen in the future. The proportion of animals with a genetic conservation index (GCI) greater than 9 was 39.10 percent; animals with a higher GCI can be favoured in breeding to maintain a balanced contribution from the founders. From the study, it was evident that the herd was completely inbred, with a very high inbreeding coefficient, and that the effective population size was critical. Recommendations were made to reduce the probability of deleterious effects of inbreeding and to improve the genetic variability in the herd. The present study can serve as a model for similar studies aimed at meeting the demand for animal protein in developing countries. Keywords: effective population size, genetic structure, pedigree analysis, rabbit genetics
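The realized effective population size relates to the per-generation increase in inbreeding through the standard formula Ne = 1/(2ΔF). A small sketch back-computing the rate implied by the reported Ne of 14.93; the 50-animal threshold mentioned in the comment is a commonly cited conservation rule of thumb, not a figure from the study:

```python
def effective_population_size(delta_f):
    """Realized Ne from the per-generation rate of inbreeding: Ne = 1/(2*dF)."""
    return 1.0 / (2.0 * delta_f)

def delta_f_from_ne(ne):
    """Inverse relation: inbreeding rate implied by a given Ne."""
    return 1.0 / (2.0 * ne)

# The reported Ne of 14.93 implies roughly a 3.35% rise in inbreeding per
# generation; an Ne this far below the often-cited minimum of about 50
# is why the herd's situation is described as critical.
print(round(100.0 * delta_f_from_ne(14.93), 2))      # 3.35
print(round(effective_population_size(0.0335), 2))   # 14.93
```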
Procedia PDF Downloads 293
868 Evaluation of an Integrated Supersonic System for Inertial Extraction of CO₂ in Post-Combustion Streams of Fossil Fuel Operating Power Plants
Authors: Zarina Chokparova, Ighor Uzhinsky
Abstract:
Carbon dioxide emissions resulting from the burning of fossil fuels on large scales, such as in the oil industry or power plants, lead to several severe implications, including global temperature rise, air pollution, and other adverse impacts on the environment. Besides some precarious and costly ways of alleviating the detriment of CO₂ emissions at industrial scales (such as liquefaction of CO₂ and its deep-water treatment, or the application of adsorbents and membranes, which require careful consideration of their drawbacks and mitigation), one physically and commercially viable technology for CO₂ capture and disposal is a supersonic system for the inertial extraction of CO₂ from post-combustion streams. Because the flue gas emitted from the combustion system has a carbon dioxide concentration of only 10-15 volume percent, the waste stream is rather dilute and at low pressure. The supersonic system expands the flue gas mixture through a converging-diverging nozzle; the flow velocity increases into the supersonic range, resulting in a rapid drop in temperature and pressure. This conversion of potential energy into kinetic energy causes desublimation of the CO₂. The solidified carbon dioxide can then be sent to a separate vessel for further disposal. The major advantages of this solution are its economic efficiency, physical stability, and the compactness of the system, as well as the fact that no additional chemical media are needed. However, several challenges remain to be addressed in order to optimize the system: increasing the size of the separated CO₂ particles (their effective diameters are on the micrometer scale), reducing the amount of concomitant gas separated together with the carbon dioxide, and ensuring the purity of the CO₂ downstream flow. Moreover, determining the thermodynamic conditions of the vapor-solid mixture, including the specification of a valid and accurate equation of state, remains an essential goal. 
Due to the high speeds and temperatures reached during the process, the influence of the emitted heat should be considered, and an applicable solution model for the compressible flow needs to be determined. In this report, a brief overview of the current technology status will be presented, and a program for further evaluation of this approach will be proposed. Keywords: CO₂ sequestration, converging diverging nozzle, fossil fuel power plant emissions, inertial CO₂ extraction, supersonic post-combustion carbon dioxide capture
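The rapid temperature and pressure drop described above follows from the isentropic flow relations for a converging-diverging nozzle. A minimal sketch, assuming a heat capacity ratio of about 1.3 for flue gas and a hypothetical 300 K, 1 atm stagnation state (neither value is from the report):

```python
def static_temperature(t0, mach, gamma=1.3):
    """Isentropic relation T = T0 / (1 + (gamma - 1)/2 * M^2)."""
    return t0 / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

def static_pressure(p0, mach, gamma=1.3):
    """Isentropic relation P = P0 / (1 + (gamma - 1)/2 * M^2)^(gamma/(gamma-1))."""
    ratio = 1.0 + 0.5 * (gamma - 1.0) * mach ** 2
    return p0 / ratio ** (gamma / (gamma - 1.0))

# Flue gas expanded to Mach 3 from a 300 K, 1 atm stagnation state
t = static_temperature(300.0, mach=3.0)
p = static_pressure(101325.0, mach=3.0)
print(round(t, 1))  # 127.7 K, well below the ~195 K frost point of CO2 at 1 atm
```

These ideal-gas relations ignore the heat released by desublimation itself, which is exactly why the abstract calls for a more complete compressible-flow solution model.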
Procedia PDF Downloads 141