Search results for: homeownership rates
2378 A Qualitative Exploration of How Brazilian Immigrant Mothers Living in the United States Obtain Information about Physical Activity and Screen-Viewing for Their Young Children
Authors: Ana Cristina Lindsay, Mary L. Greaney
Abstract:
Background: Racial/ethnic minority children of low-income immigrant families remain at increased risk of obesity. Consistent with the high rates of childhood obesity among racial/ethnic minority children are high rates of physical inactivity and increased levels of sedentary behaviors (e.g., TV and other screen viewing). Brazilians comprise a fast-growing immigrant population group in the US, yet little research has focused on the health issues affecting Brazilian immigrant children. The purpose of this qualitative study was to explore how Brazilian-born immigrant mothers living in the United States obtain information about physical activity and screen-time for their young children. Methods: Qualitative research including focus groups with Brazilian immigrant mothers of preschool-age children living in the U.S. Results: Results revealed that Brazilian immigrant mothers obtain information on young children’s physical activity and screen-time from a variety of sources, including interpersonal communication, television and magazines, government health care programs (the WIC program), and professionals (e.g., nurses and pediatricians). A noteworthy finding is the significant role of foreign information sources (Brazilian TV shows and magazines) in mothers’ access to information about these early behaviors. Future research is needed to quantify and better understand Brazilian parents’ access to accurate and sound information related to young children’s physical activity and screen-viewing behaviors. Conclusions: To our knowledge, no existing research has examined how Brazilian immigrant mothers living in the United States obtain information about these behaviors. This information is crucial for the design of culturally appropriate early childhood obesity prevention interventions tailored to the specific needs of this ethnic group.
Keywords: physical activity, screen-time, information, immigrant, mothers, Brazilian, United States
Procedia PDF Downloads 275
2377 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction
Authors: C. S. Subhashini, H. L. Premaratne
Abstract:
Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in significant loss of life, material damage, and distress. A solution for preparedness and mitigation is required to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of Artificial Neural Networks and Hidden Markov Models in landslide prediction and the possibility of applying these technologies to predict landslides in a prominent geographical area of Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the factors influencing landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps together with historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), were extracted from the landslide database. These factors are used to estimate the likelihood of landslide occurrence with an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. With the landslide-related factors as inputs, the models are trained to predict three classes: ‘landslide occurs’, ‘landslide does not occur’, and ‘landslide likely to occur’. Once trained, the models can predict the most likely class for the prevailing data.
Finally, the two models were compared with regard to prediction accuracy, False Acceptance Rate, and False Rejection Rate. This research indicates that the Artificial Neural Network predicts landslides more efficiently and effectively than the Hidden Markov Model and could be used as a strong decision support system.
Keywords: landslides, influencing factors, neural network model, hidden Markov model
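As an illustration of the comparison metrics named in this abstract, the following is a minimal sketch (not the study's code; the labels and predictions below are invented) of computing False Acceptance and False Rejection Rates for a three-class landslide predictor:

```python
# Hypothetical sketch of the FAR/FRR comparison described in the abstract.
# "Accept" here means predicting the positive class 'landslide occurs';
# labels and predictions are illustrative, not data from the study.

def far_frr(y_true, y_pred, positive="landslide occurs"):
    """FAR: fraction of non-landslide cases predicted as landslides.
    FRR: fraction of landslide cases predicted as non-landslides."""
    false_accepts = sum(1 for t, p in zip(y_true, y_pred)
                        if t != positive and p == positive)
    false_rejects = sum(1 for t, p in zip(y_true, y_pred)
                        if t == positive and p != positive)
    negatives = sum(1 for t in y_true if t != positive)
    positives = sum(1 for t in y_true if t == positive)
    return false_accepts / negatives, false_rejects / positives

y_true = ["landslide occurs", "landslide does not occur", "landslide occurs",
          "landslide likely to occur", "landslide does not occur"]
ann_pred = ["landslide occurs", "landslide does not occur", "landslide occurs",
            "landslide likely to occur", "landslide occurs"]
far, frr = far_frr(y_true, ann_pred)
```

Given the class predictions from the trained ANN and HMM, the same two rates can be computed for each model and compared alongside overall accuracy.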
Procedia PDF Downloads 384
2376 Symbiotic Functioning, Photosynthetic Induction and Characterisation of Rhizobia Associated with Groundnut, Jack Bean and Soybean from Eswatini
Authors: Zanele D. Ngwenya, Mustapha Mohammed, Felix D. Dakora
Abstract:
Legumes are a major source of biological nitrogen and therefore play a crucial role in maintaining soil productivity in smallholder agriculture in southern Africa. Through their ability to fix atmospheric nitrogen in root nodules, legumes are a better option for sustainable nitrogen supply in cropping systems than chemical fertilisers. For decades, farmers have been highly receptive to the use of rhizobial inoculants as a source of nitrogen, due mainly to the availability of elite rhizobial strains at a much lower cost compared to chemical fertilisers. Improving the efficiency of the legume-rhizobia symbiosis in African soils would require the use of highly effective rhizobia capable of nodulating a wide range of host plants. This study assessed the morphogenetic diversity, photosynthetic functioning, and relative symbiotic effectiveness (RSE) of groundnut, jack bean, and soybean microsymbionts in Eswatini soils as a first step to identifying superior isolates for inoculant production. Rhizobial isolates were cultured in yeast-mannitol (YM) broth until the late log phase, and the bacterial genomic DNA was extracted using the GenElute bacterial genomic DNA kit according to the manufacturer's instructions. The extracted DNA was subjected to enterobacterial repetitive intergenic consensus PCR (ERIC-PCR), and a dendrogram was constructed from the band patterns to assess rhizobial diversity. To assess the N2-fixing efficiency of the authenticated rhizobia, photosynthetic rates (A), stomatal conductance (gs), and transpiration rates (E) were measured at flowering for plants inoculated with the test isolates. The plants were then harvested for nodulation assessment and measurement of plant growth as shoot biomass. The results of ERIC-PCR fingerprinting revealed high genetic diversity among the microsymbionts nodulating each of the three test legumes, with many of them showing less than 70% ERIC-PCR relatedness.
The dendrogram generated from the ERIC-PCR profiles grouped the groundnut isolates into 5 major clusters, while the jack bean and soybean isolates were grouped into 6 and 7 major clusters, respectively. Furthermore, the isolates elicited variable nodule number per plant, nodule dry matter, shoot biomass, and photosynthetic rates in their respective host plants under glasshouse conditions. Of the groundnut isolates tested, 38% recorded high relative symbiotic effectiveness (RSE > 80), while 55% of the jack bean isolates and 93% of the soybean isolates recorded high RSE (> 80) compared to the commercial Bradyrhizobium strains. About 13%, 27%, and 83% of the top N₂-fixing groundnut, jack bean, and soybean isolates, respectively, elicited much higher RSE than the commercial strain, suggesting their potential for use in inoculant production after field testing. There was a tendency for both low and high N₂-fixing isolates to group together in the dendrogram from the ERIC-PCR profiles, which suggests that RSE can differ significantly among closely related microsymbionts.
Keywords: genetic diversity, relative symbiotic effectiveness, inoculant, N₂-fixing
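The RSE > 80 threshold used in this abstract can be made concrete with a small sketch. It assumes the common definition of RSE as shoot dry matter with a test isolate relative to shoot dry matter with the commercial reference strain, expressed as a percentage; the isolate names and biomass values below are hypothetical, not study data.

```python
# Hedged sketch: relative symbiotic effectiveness (RSE) from shoot biomass,
# flagging isolates above the RSE > 80 threshold used in the abstract.
# All names and values are illustrative, not study data.

def rse(shoot_dm_isolate, shoot_dm_reference):
    """RSE (%) relative to the commercial reference strain."""
    return 100.0 * shoot_dm_isolate / shoot_dm_reference

reference_dm = 2.50  # g shoot dry matter per plant, commercial strain (hypothetical)
isolate_dm = {"isolate_A": 2.90, "isolate_B": 2.10, "isolate_C": 1.80}

high_rse = [name for name, dm in sorted(isolate_dm.items())
            if rse(dm, reference_dm) > 80]
```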
Procedia PDF Downloads 221
2375 How Much the Role of Fertilizers Management and Wheat Planting Methods on Its Yield Improvement?
Authors: Ebrahim Izadi-Darbandi, Masoud Azad, Masumeh Dehghan
Abstract:
In order to study the effects of nitrogen and phosphorus management and wheat sowing method on wheat yield, two experiments were performed as factorials based on a completely randomized design with three replications at the Research Farm, Faculty of Agriculture, Ferdowsi University of Mashhad, Iran, in 2009. In the first experiment, nitrogen application rates (100, 200, and 300 kg ha-1), phosphorus application rates (100 and 200 kg ha-1), and two application methods (broadcast and band) were studied. The second experiment's treatments included wheat sowing methods (single rows 30 cm apart and twin rows on 60 cm wide ridges) as main plots and nitrogen and phosphorus application methods (broadcast and band, 150 kg ha-1) as subplots. The phosphorus and nitrogen sources for fertilization in both experiments were, respectively, superphosphate, applied before wheat sowing and incorporated with the soil, and urea, applied in two phases (50% pre-plant and 50% near wheat shooting). Results from the first experiment showed that the fertilizer application methods had a significant effect (p≤0.01) on increasing wheat yield. Band application of phosphorus and nitrogen increased the biomass and seed yield of wheat by 9% and 15%, respectively, compared to broadcast application. The interaction between application rates and application methods showed that band application of fertilizers at 200 kg/ha phosphorus and 300 kg/ha nitrogen was the best combination for improving wheat yield. The second experiment likewise showed that the wheat sowing method and fertilizer application methods had significant effects (p≤0.01) on wheat seed and biomass yield. The twin-row sowing method on 60 cm wide ridges increased biomass and seed yield by 22% and 30%, respectively, compared to single rows 30 cm apart.
The interaction of sowing method and fertilizer application method indicated that band application of fertilizers combined with twin-row sowing on 60 cm wide ridges was the best treatment for improving winter wheat yield. In conclusion, these results indicate that nitrogen and phosphorus management in wheat and modifying the wheat sowing method play important roles in increasing fertilizer use efficiency.
Keywords: band application, broadcast application, rate of fertilizer application, wheat seed yield, wheat biomass yield
Procedia PDF Downloads 464
2374 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control
Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol
Abstract:
Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermophysical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex because several biological processes take place in wastewater treatment plants at varying time and spatial scales. A dynamic process model that incorporates the complex activated sludge kinetics was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both influent and effluent plant data to reconcile and rectify the forecasted results from the BioWin model. The amount of mixed liquor suspended solids in the oxidation ditch, aeration rates, and recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time for extended periods. The lumped dynamic model development was coupled with Computational Fluid Dynamics (CFD) modeling of key units such as the oxidation ditches. Several CFD models that incorporate nitrification-denitrification kinetics as well as hydrodynamics were developed and are being tested using the ANSYS Fluent software platform. These realistic and verified models, developed using BioWin and ANSYS, were used to plan operating policies and control strategies for the biological wastewater plant that allow regulatory compliance at minimum operational cost. With modest tuning, these models can be used for other biological wastewater treatment plants as well.
The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the plant's discharge limits. With the help of this model, we were also able to identify the key kinetic and stoichiometric parameters that matter most for modeling biological wastewater treatment plants. Another important finding from this model was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentrations of parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model revealed, for example, that the formation of dead zones increases along the length of the oxidation ditches compared to regions near the aerators. These profiles were also very useful in studying mixing patterns and the effects of aerator speed and baffles, which in turn helps in optimizing plant performance.
Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics
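As a toy stand-in for the lumped dynamic model described in this abstract (not BioWin itself), a single well-mixed reactor with first-order removal illustrates how effluent concentration responds to influent load; every parameter value here is illustrative, not plant data.

```python
# Hedged sketch: a single CSTR approximation of an oxidation ditch,
#   dC/dt = (Q/V) * (C_in - C) - k * C
# integrated with explicit Euler steps. All values are illustrative.

def simulate_cstr(c0, c_in, Q, V, k, dt, steps):
    """Return the effluent concentration after `steps` Euler steps."""
    c = c0
    dilution = Q / V  # hydraulic dilution rate, 1/time
    for _ in range(steps):
        c += dt * (dilution * (c_in - c) - k * c)
    return c

# The long-run value approaches the steady state C_in * (Q/V) / (Q/V + k).
c_end = simulate_cstr(c0=0.0, c_in=30.0, Q=1000.0, V=5000.0, k=0.8,
                      dt=0.001, steps=20000)
steady = 30.0 * (1000.0 / 5000.0) / (1000.0 / 5000.0 + 0.8)
```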
Procedia PDF Downloads 208
2373 “CheckPrivate”: Artificial Intelligence Powered Mobile Application to Enhance the Well-Being of Sexually Transmitted Disease Patients in Sri Lanka under Cultural Barriers
Authors: Warnakulasuriya Arachichige Malisha Ann Rosary Fernando, Udalamatta Gamage Omila Chalanka Jinadasa, Bihini Pabasara Amandi Amarasinghe, Manul Thisuraka Mandalawatta, Uthpala Samarakoon, Manori Gamage
Abstract:
The surge in sexually transmitted diseases (STDs) has become a critical public health crisis demanding urgent attention and action. Like many other nations, Sri Lanka is grappling with a significant increase in STDs due to a lack of education and awareness regarding their dangers. Presently, the available applications for tracking and managing STDs cover only a limited number of easily detectable infections, resulting in a significant gap in effectively controlling their spread. To address this gap and combat the rising STD rates, it is essential to leverage technology and data. Employing technology to enhance the tracking and management of STDs is vital to prevent their further propagation and to enable early intervention and treatment. This requires a comprehensive approach that involves raising public awareness about the perils of STDs, improving access to affordable healthcare services for early detection and treatment, and utilizing advanced technology and data analysis. The proposed mobile application aims to cater to a broad range of users, including STD patients, recovered individuals, and those unaware of their STD status. By harnessing technologies such as image-based detection, symptom-based identification, prevention guidance, doctor and clinic recommendations, and a virtual counselor chat, the application offers a holistic approach to STD management. In conclusion, the escalating STD rates in Sri Lanka and across the globe require immediate action. The integration of technology-driven solutions, along with comprehensive education and healthcare accessibility, is the key to curbing the spread of STDs and promoting better overall public health.
Keywords: STD, machine learning, NLP, artificial intelligence
Procedia PDF Downloads 81
2372 Heterogeneous Catalytic Ozonation of Diethyl Phthalate
Authors: Chedly Tizaoui, Hussain Mohammed, Lobna Mansouri, Nidal Hilal, Latifa Bousselmi
Abstract:
The degradation of diethyl phthalate (DEP) was studied using heterogeneous catalytic ozonation, with activated carbon as the catalyst. The degradation of DEP by ozone alone was slow, while catalytic ozonation increased degradation rates. Second-order reaction kinetics described the experimental data, with rate constants of 1.19 and 3.94 M⁻¹s⁻¹ for ozone alone and ozone/activated carbon, respectively.
Keywords: ozone, heterogeneous catalytic ozonation, diethyl phthalate, endocrine disrupting chemicals
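With the two rate constants reported in this abstract, the effect of the catalyst can be sketched numerically. The snippet assumes second-order kinetics that are first order in each of DEP and ozone, and a constant dissolved ozone concentration whose value is purely illustrative, not taken from the study:

```python
# Hedged sketch: DEP decay under the second-order rate law
#   d[DEP]/dt = -k [O3][DEP],
# assuming constant dissolved ozone so the decay is pseudo-first-order.
import math

K_O3 = 1.19    # M^-1 s^-1, ozone alone (from abstract)
K_CAT = 3.94   # M^-1 s^-1, ozone + activated carbon (from abstract)
O3 = 1.0e-4    # M, assumed constant dissolved ozone (illustrative)

def dep_remaining(k, o3, t, c0=1.0):
    """Fraction of DEP remaining after t seconds."""
    return c0 * math.exp(-k * o3 * t)

t = 3600.0  # one hour
frac_o3 = dep_remaining(K_O3, O3, t)    # ozone alone
frac_cat = dep_remaining(K_CAT, O3, t)  # catalytic ozonation
```

Under these assumptions the catalytic system removes noticeably more DEP over the same hour, mirroring the roughly threefold ratio of the reported rate constants.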
Procedia PDF Downloads 347
2371 Numerical Optimization of Cooling System Parameters for Multilayer Lithium Ion Cell and Battery Packs
Authors: Mohammad Alipour, Ekin Esen, Riza Kizilel
Abstract:
Lithium-ion batteries are a commonly used type of rechargeable battery because of their high specific energy and specific power. With the growing popularity of electric and hybrid electric vehicles, increasing attention has been paid to rechargeable lithium-ion batteries. However, safety problems, high cost, and poor performance at low ambient temperatures and high current rates are major obstacles to the commercial utilization of these batteries. With proper thermal management, most of these limitations can be eliminated. The temperature profile of Li-ion cells plays a significant role in the performance, safety, and cycle life of the battery, so even a small temperature gradient can lead to a large loss in battery pack performance. In recent years, numerous researchers have been working on new techniques for better thermal management of Li-ion batteries. Keeping the battery cells within an optimum temperature range is the main objective of battery thermal management. Commercial Li-ion cells are composed of several electrochemical layers, each consisting of a negative current collector, negative electrode, separator, positive electrode, and positive current collector. Many researchers, however, have adopted a single-layer cell model to save computing time, on the hypothesis that the thermal conductivity of the layer elements is high and the heat transfer rate is fast, so the cell can be modeled as one thick layer instead of several thin ones. In previous work, we showed that a single-layer model is insufficient to simulate the thermal behavior and temperature non-uniformity of high-capacity Li-ion cells, and we studied the effects of the number of layers on the thermal behavior of Li-ion batteries. In this work, the thermal and electrochemical behavior of a LiFePO₄ battery is first modeled with a 3D multilayer cell. The model is validated against experimental measurements at different current rates and ambient temperatures.
The real-time heat generation rate is also studied at different discharge rates. Results showed a non-uniform temperature distribution along the cell, which requires a thermal management system. Therefore, aluminum plates with a mini-channel system were designed to control temperature uniformity. Design parameters such as channel number and width, inlet flow rate, and cooling fluid are optimized; water and air are compared as cooling fluids. Pressure drops and velocity profiles inside the channels are illustrated. Both surface and internal temperature profiles of single cells and battery packs are investigated with and without cooling systems. Our results show that the optimized mini-channel cooling plates effectively control the temperature rise and uniformity of single cells and battery packs. With increasing inlet flow rate, a cooling efficiency of up to 60% could be reached.
Keywords: lithium ion battery, 3D multilayer model, mini-channel cooling plates, thermal management
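The role of the cooling design parameters can be illustrated with a lumped (single-node) energy balance, a drastic simplification of the 3D multilayer model described in this abstract; the heat generation, heat transfer coefficient, and cell properties below are illustrative, not measured values.

```python
# Hedged sketch: lumped thermal model of one cell with channel cooling,
#   dT/dt = (Q_gen - h*A*(T - T_cool)) / (m*cp),
# integrated with explicit Euler steps. All values are illustrative.

def cell_temperature(T0, Q_gen, h, A, T_cool, m, cp, dt, steps):
    """Return the cell temperature after `steps` Euler steps."""
    T = T0
    for _ in range(steps):
        T += dt * (Q_gen - h * A * (T - T_cool)) / (m * cp)
    return T

# Steady state is T_cool + Q_gen / (h*A); raising h (e.g., water instead of
# air, or a higher inlet flow rate) lowers the temperature rise.
T_end = cell_temperature(T0=25.0, Q_gen=5.0, h=100.0, A=0.01, T_cool=25.0,
                         m=0.5, cp=1000.0, dt=1.0, steps=20000)
```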
Procedia PDF Downloads 164
2370 Characterising Rates of Renal Dysfunction and Sarcoidosis in Patients with Elevated Serum Angiotensin-Converting Enzyme
Authors: Fergal Fouhy, Alan O’Keeffe, Sean Costelloe, Michael Clarkson
Abstract:
Background: Sarcoidosis is a systemic, non-infectious disease of unknown aetiology, characterized by non-caseating granulomatous inflammation. The lung is most often affected (90%); however, the condition can affect all organs, including the kidneys. There is limited evidence describing the incidence and characteristics of renal involvement in sarcoidosis. Serum angiotensin-converting enzyme (ACE) is a recognised biomarker used in the diagnosis and monitoring of sarcoidosis. Methods: A single-centre, retrospective cohort study of patients presenting to Cork University Hospital (CUH) in 2015 with first-time elevations of serum ACE was performed. This included an initial database review of ACE and other biochemistry results, followed by a medical chart review to confirm the presence or absence of sarcoidosis and its management. Acute kidney injury (AKI) was staged using the AKIN criteria, and chronic kidney disease (CKD) was staged using the KDIGO criteria. Follow-up was assessed over five years, tracking serum creatinine, serum calcium, and estimated glomerular filtration rates (eGFR). Results: 119 patients were identified as having a first raised serum ACE in 2015: seventy-nine male and forty female. The mean age of the patients was 47 years. 11% had CKD at baseline. 18% developed an AKI at least once within the next five years, and a further 6% developed CKD during this period. 13% developed hypercalcemia. Patients within the lowest quartile of serum ACE had an incidence of sarcoidosis of 5%; none of this group developed hypercalcemia, 23% developed AKI, and 7% developed CKD. Of the patients with serum ACE in the highest quartile, almost all had documented diagnoses of sarcoidosis, with an incidence of 96%; 3% of this group developed hypercalcemia, 13% AKI, and 3% CKD. Conclusions: There was an unexpectedly high incidence of AKI in patients with a raised serum ACE.
Not all patients with a raised serum ACE had a confirmed diagnosis of sarcoidosis, and there does not appear to be a relationship between increased serum ACE levels and increased incidence of hypercalcemia, AKI, or CKD. Ideally, all patients would have biopsy-proven sarcoidosis. This is an initial study that should be replicated with larger numbers and multiple centres.
Keywords: sarcoidosis, acute kidney injury, chronic kidney disease, hypercalcemia
Procedia PDF Downloads 103
2369 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility
Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha
Abstract:
Making research data openly accessible has been mandated by most funders over the last 5 years, as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always clear, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are reused over the years, focusing on biodiversity, since research data are frequently reused in this field. Metadata for 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source because it provides citation counts for datasets, a feature not commonly available in most repositories. Analysis of dataset types, citation counts, and creation and update times suggests that citation rates vary for different types of datasets: occurrence datasets, which hold more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of datasets from the earliest year, 2007, were updated after 11 years, and no dataset remained un-updated since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlation, datasets 3 to 4 years old show a weak correlation, with datasets that were updated more recently receiving more citations. The results suggest that it takes several years to accumulate citations for research datasets.
However, this investigation found that when the same datasets are searched on Google Scholar or Scopus, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF, to evaluate its reliability and whether it is applicable to other fields of study as well.
Keywords: data citation, data reuse, research data sharing, webometrics
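The update-time/citation correlation described in this abstract can be sketched as follows; the helper is a plain Pearson correlation, and the data points are invented for illustration (the study's own analysis is not reproduced here):

```python
# Hedged sketch: Pearson correlation between "years since last update" and
# citation count, mirroring the kind of correlation analysis the abstract
# describes. The data points are illustrative, not GBIF records.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

years_since_update = [0.5, 1.0, 2.0, 3.0, 4.0]  # illustrative
citations = [40, 32, 20, 12, 5]                  # illustrative
r = pearson(years_since_update, citations)       # strongly negative here
```

A negative r in this sketch corresponds to the abstract's observation that more recently updated datasets received more citations.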
Procedia PDF Downloads 178
2368 Optimizing Foaming Agents by Air Compression to Unload a Liquid Loaded Gas Well
Authors: Mhenga Agneta, Li Zhaomin, Zhang Chao
Abstract:
When the velocity is high enough, gas can entrain liquid and carry it to the surface, but as time passes, the velocity drops to a critical point where fluids start to hold up in the tubing and cause liquid loading, which prevents gas production and may lead to the death of the well. Foam injection is widely used as one of the methods to unload liquid. Since wells have different characteristics, it is not guaranteed that foam can be applied in all of them and bring successful results. This research presents a technique to optimize the efficiency of foam in unloading liquid by air compression. Two methods are used to examine the optimization: (i) mathematical formulas are used to show how density and critical velocity are reduced when air is compressed into foaming agents, and to relate flow rate to the pressure increase that boosts the bottomhole pressure and raises the velocity available to lift liquid to the surface; (ii) experiments to test foam carryover capacity and stability as functions of time and surfactant concentration, in which three surfactants were probed: anionic sodium dodecyl sulfate (SDS), nonionic Triton 100, and cationic hexadecyltrimethylammonium bromide (HDTAB). The best foaming agents were injected to lift liquid loaded in a vertical well model consisting of steel tubing 2.5 cm in diameter and 390 cm high, covered by a transparent glass casing 5 cm in diameter and 450 cm high. The results show that, after injecting foaming agents, liquid unloading was 75% successful; moreover, the efficiency of the foaming agents in unloading liquid increased by 10% with the addition of compressed air at a ratio of 1:1. Measured and calculated values agreed to within about ±3%, which is acceptable.
The successful application of this technique indicates that engineers and stakeholders could bring water-flooded gas wells back to production with optimized results by paying attention first to the type of surfactants (foaming agents) used, then to the surfactant concentration and the flow rate of the injected surfactants, and finally by compressing air into the foaming agents at a proper ratio.
Keywords: air compression, foaming agents, gas well, liquid loading
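The density argument in method (i) can be illustrated with a Turner-type critical-velocity expression, v_c ∝ [σ(ρ_l − ρ_g)/ρ_g²]^(1/4): foaming the liquid lowers its effective density and surface tension, and hence the gas velocity needed to lift it. The prefactor and all property values below are illustrative, not the paper's correlation or data.

```python
# Hedged sketch: a Turner-type critical unloading velocity,
#   v_c = C * (sigma * (rho_l - rho_g) / rho_g**2) ** 0.25,
# used only to show that foaming (lower effective liquid density and
# surface tension) lowers v_c. C and all values are illustrative.

def critical_velocity(sigma, rho_l, rho_g, C=1.0):
    """Critical gas velocity to lift liquid droplets (arbitrary units)."""
    return C * (sigma * (rho_l - rho_g) / rho_g**2) ** 0.25

v_water = critical_velocity(sigma=0.06, rho_l=1000.0, rho_g=50.0)  # unfoamed liquid
v_foam = critical_velocity(sigma=0.03, rho_l=400.0, rho_g=50.0)    # foamed liquid
```

In this sketch, the foamed case needs a lower gas velocity than the unfoamed case, which is the mechanism the abstract exploits.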
Procedia PDF Downloads 135
2367 Typification and Determination of Antibiotic Susceptibility Profiles with E Test Methods of Anaerobic Gram Negative Bacilli Isolated from Various Clinical Specimen
Authors: Cengiz Demir, Recep Keşli, Gülşah Aşık
Abstract:
Objective: This study was carried out to identify, using the E test method, and to determine the antibiotic resistance profiles of Gram-negative anaerobic bacilli isolated from various clinical specimens obtained from patients with suspected anaerobic infections and referred to the Medical Microbiology Laboratory of Afyon Kocatepe University, ANS Application and Research Hospital. Methods: Two hundred and seventy-eight clinical specimens were examined for isolation of anaerobic bacteria in the Medical Microbiology Laboratory between 1 November 2014 and 30 October 2015. Specimens were cultivated using Schaedler agar supplemented with 5% defibrinated sheep blood, and Schaedler broth. The isolated anaerobic Gram-negative bacilli were identified by conventional methods and Vitek 2 (ANC ID Card, bioMerieux, France) cards. Antibiotic resistance rates against penicillin G, clindamycin, cefoxitin, metronidazole, moxifloxacin, imipenem, meropenem, ertapenem, and doripenem were determined with the E test method for each isolate. Results: Of the twenty-eight anaerobic Gram-negative bacilli isolated, fourteen were identified as the B. fragilis group, nine as the Prevotella group, and five as the Fusobacterium group. The highest resistance rate was found against penicillin (78.5%), and resistance rates against clindamycin and cefoxitin were 17.8% and 21.4%, respectively. No resistance was found against metronidazole, moxifloxacin, imipenem, meropenem, ertapenem, or doripenem. Conclusion: Since a high resistance rate against penicillin was detected in this study, penicillin should not be preferred for empirical treatment. Cefoxitin can be preferred for empirical treatment; however, carrying out antibiotic sensitivity testing will be more proper and beneficial.
No resistance was observed against the carbapenem group of antibiotics or metronidazole; for that reason, these antibiotics should be reserved for the treatment of infections caused by resistant strains in the future.
Keywords: anaerobic gram-negative bacilli, anaerobe, antibiotics and resistance profiles, e-test method
Procedia PDF Downloads 305
2366 Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors
Authors: Andres Gonzalez, Adam Boies, Jacob Swanson, David Kittelson
Abstract:
The concern of the indoor air quality (IAQ) has been increasing due to its risk to human health. The smoking, sweeping, and stove and stovetop use are the activities that have a major contribution to the indoor air pollution. Outdoor air pollution also affects IAQ. The most important factors over IAQ from cooking activities are the materials, fuels, foods, and ventilation. The low-cost, mobile air quality monitoring (LCMAQM) sensors, is reachable technology to assess the IAQ. This is because of the lower cost of LCMAQM compared to conventional instruments. The IAQ was assessed, using LCMAQM, during cooking activities in a University of Minnesota graduate-housing evaluating different ventilation systems. The gases measured are carbon monoxide (CO) and carbon dioxide (CO2). The particles measured are particle matter (PM) 2.5 micrometer (µm) and lung deposited surface area (LDSA). The measurements are being conducted during April 2019 in Como Student Community Cooperative (CSCC) that is a graduate housing at the University of Minnesota. The measurements are conducted using an electric stove for cooking. The amount and type of food and oil using for cooking are the same for each measurement. There are six measurements: two experiments measure air quality without any ventilation, two using an extractor as mechanical ventilation, and two using the extractor and windows open as mechanical and natural ventilation. 3The results of experiments show that natural ventilation is most efficient system to control particles and CO2. The natural ventilation reduces the concentration in 79% for LDSA and 55% for PM2.5, compared to the no ventilation. In the same way, CO2 reduces its concentration in 35%. A well-mixed vessel model was implemented to assess particle the formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower compared to the natural ventilation. 
There was significant day-to-day variation in particle concentrations under nominally identical conditions. This may be related to the fat content of the food. Further research is needed to assess the impact of fat content in food on particle generation.
Keywords: cooking, indoor air quality, low-cost sensor, ventilation
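A minimal numerical sketch of the reported percent reductions and the well-mixed-vessel decay analysis described above. The concentrations, the 30-minute window, and the function names are illustrative assumptions, not the study's data:

```python
import math

def percent_reduction(no_vent, with_vent):
    """Percent reduction of a peak concentration relative to the
    no-ventilation case."""
    return 100.0 * (no_vent - with_vent) / no_vent

def decay_rate(c0, c1, dt_hours):
    """First-order loss rate k (1/h) from two readings taken after the
    source is off, assuming a well-mixed vessel: C(t) = C0 * exp(-k*t)."""
    return math.log(c0 / c1) / dt_hours

# Hypothetical PM2.5 peaks (ug/m3): no ventilation vs. natural ventilation
print(percent_reduction(200.0, 90.0))          # -> 55.0, matching the reported 55%
print(round(decay_rate(200.0, 90.0, 0.5), 2))  # total loss rate over a 30-min window
```

The same decay-rate helper would apply to LDSA readings, since the well-mixed model treats any conserved-then-removed pollutant identically.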
Procedia PDF Downloads 113
2365 A Meta-Analysis of the Academic Achievement of Students With Emotional/Behavioral Disorders in Traditional Public Schools in the United States
Authors: Dana Page, Erica McClure, Kate Snider, Jenni Pollard, Tim Landrum, Jeff Valentine
Abstract:
Extensive research has been conducted on students with emotional and behavioral disorders (EBD) and their rates of challenging behavior. In the past, however, less attention has been given to their academic achievement and outcomes. Recent research examining outcomes for students with EBD has indicated that these students receive lower grades, are less likely to pass classes, and experience higher rates of school dropout than students without disabilities and students with other high incidence disabilities. Given that between 2% and 20% of the school-age population is likely to have EBD (though many may not be identified as such), this is no small problem. Despite the need for increased examination of this population’s academic achievement, research on the actual performance of students with EBD has been minimal. This study reports the results of a meta-analysis of the limited research examining academic achievement of students with EBD, including effect sizes of assessment scores and discussion of moderators potentially impacting academic outcomes. Researchers conducted a thorough literature search to identify potentially relevant documents before screening studies for inclusion in the systematic review. Screening identified 35 studies that reported results of academic assessment scores for students with EBD. These studies were then coded to extract descriptive data across multiple domains, including placement of students, participant demographics, and academic assessment scores. Results indicated possible collinearity between EBD disability status and lower academic assessment scores, despite a lack of association between EBD eligibility and lower cognitive ability. Quantitative analysis of assessment results yielded effect sizes for academic achievement of student participants, indicating lower performance levels and potential moderators (e.g., race, socioeconomic status, and gender) impacting student academic performance. 
In addition to the results of the meta-analysis, implications and areas for future research, policy, and practice are discussed.
Keywords: students with emotional behavioral disorders, academic achievement, systematic review, meta-analysis
Procedia PDF Downloads 69
2364 Effects of Hydraulic Loading Rates and Porous Matrix in Constructed Wetlands for Wastewater Treatment
Authors: Li-Jun Ren, Wei Pan, Li-Li Xu, Shu-Qing An
Abstract:
This study evaluated whether different matrix composition volume ratios can improve water quality. The mechanism and adsorption capability of the wetland matrixes (oyster shell, coarse slag, and volcanic rock), and of their different volume ratios in group configuration, were tested during pollutant removal processes. With other conditions unchanged, the hydraulic residence time affected treatment performance. The average TN removal efficiencies of the four matrix volume ratios were 62.76%, 61.54%, 64.13%, and 55.89%, respectively.
Keywords: hydraulic residence time, matrix composition, removal efficiency, volume ratio
Procedia PDF Downloads 329
2363 Is Materiality Determination the Key to Integrating Corporate Sustainability and Maximising Value?
Authors: Ruth Hegarty, Noel Connaughton
Abstract:
Sustainability reporting has become a priority for many global multinational companies. This is associated with ever-increasing expectations from key stakeholders for companies to be transparent about their strategies, activities and management with regard to sustainability issues. The Global Reporting Initiative (GRI) encourages reporters to only provide information on the issues that are really critical in order to achieve the organisation’s goals for sustainability and manage its impact on the environment and society. A key challenge for most reporting organisations is how to identify relevant issues for sustainability reporting and prioritise those material issues in accordance with company and stakeholder needs. A recent study indicates that most of the largest companies listed on the world’s stock exchanges are failing to provide data on key sustainability indicators such as employee turnover, energy, greenhouse gas emissions (GHGs), injury rate, pay equity, waste and water. This paper takes an in-depth look at the approaches used by a select number of international sustainability leader corporates to identify key sustainability issues. The research methodology involves performing a detailed analysis of the sustainability report content of up to 50 companies listed on the 2014 Dow Jones Sustainability Indices (DJSI). The most recent sustainability report content found on the GRI Sustainability Disclosure Database is then compared with 91 GRI Specific Standard Disclosures and a small number of GRI Standard Disclosures. Preliminary research indicates significant gaps in the information disclosed in corporate sustainability reports versus the indicator content specified in the GRI Content Index. The following outlines some of the key findings to date: Most companies made a partial disclosure with regard to the Economic indicators of climate change risks and infrastructure investments, but did not focus on the associated negative impacts.
The top Environmental indicators disclosed were energy consumption and reductions, GHG emissions, water withdrawals, waste and compliance. The lowest rates of indicator disclosure included biodiversity, water discharge, mitigation of environmental impacts of products and services, transport, environmental investments, screening of new suppliers and supply chain impacts. The top Social indicators disclosed were new employee hires, rates of injury, freedom of association in operations, child labour and forced labour. Lesser disclosure rates were reported for employee training, composition of governance bodies and employees, political contributions, corruption and fines for non-compliance. The reporting on most other Social indicators was found to be poor. In addition, most companies give only a brief explanation on how material issues are defined, identified and ranked. Data on the identification of key stakeholders and the degree and nature of engagement for determining issues and their weightings is also lacking. Generally, little to no data is provided on the algorithms used to score an issue. Research indicates that most companies lack a rigorous and thorough methodology to systematically determine the material issues of sustainability reporting in accordance with company and stakeholder needs.
Keywords: identification of key stakeholders, material issues, sustainability reporting, transparency
Procedia PDF Downloads 306
2362 Ergonomic Adaptations in Visually Impaired Workers - A Literature Review
Authors: Kamila Troper, Pedro Mestre, Maria Lurdes Menano, Joana Mendonça, Maria João Costa, Sandra Demel
Abstract:
Introduction: Visual impairment is a problem that influences hundreds of thousands of people all over the world. Although it is possible for a visually impaired person to do most jobs, the right training, technological assistance, and emotional support are essential. Ergonomics can solve many of these problems with relative ease through the positioning, lighting, and design of the workplace. A little forethought can make a tremendous difference to the ease with which a person with an impairment functions. Objectives: To review the main ergonomic adaptation measures reported in the literature in order to promote better working conditions and safety measures for the visually impaired. Methodology: This was an exploratory-descriptive, qualitative systematic literature review. The main databases used were: PubMed, BIREME, LILACS, with articles and studies published between 2000 and 2021. Results: Based on the principles of the theoretical references of ergonomic analysis of work, the main restructurings of the physical space of the workstations were: accessibility facilities and assistive technologies; a screen reader that captures information from a computer and sends it in real time to a speech synthesizer or Braille terminal; installation of software with voice recognition; monitors with enlarged screens; magnification software; adequate lighting; magnifying lenses; in addition to recommendations regarding signage and clearance of the places where the visually impaired pass through. Conclusions: Employability rates for people with visual impairments (both those who are blind and those who have low vision) are low and continue to be a concern to the world and for researchers as a topic of international interest.
Although numerous authors have identified barriers to employment and proposed strategies to remediate or circumvent those barriers, people with visual impairments continue to experience high rates of unemployment.
Keywords: ergonomic adaptations, visual impairments, ergonomic analysis of work, systematic review
Procedia PDF Downloads 182
2361 Untangling the Greek Seafood Market: Authentication of Crustacean Products Using DNA-Barcoding Methodologies
Authors: Z. Giagkazoglou, D. Loukovitis, C. Gubili, A. Imsiridou
Abstract:
Along with the increase in human population, demand for seafood has increased. Despite the strict labeling regulations that exist for most marketed species in the European Union, seafood substitution remains a persistent global issue. Food fraud occurs when food products are traded in a false or misleading way. Mislabeling occurs when one species is substituted and traded under the name of another, and it can be intentional or unintentional. Crustaceans are among the most regularly consumed seafoods in Greece. Shrimps, prawns, lobsters, crayfish, and crabs are considered a delicacy and can be encountered in a variety of market presentations (fresh, frozen, pre-cooked, peeled, etc.). With most of the external traits removed, such products are susceptible to species substitution. DNA barcoding has proven to be the most accurate method for the detection of fraudulent seafood products. To the best of our knowledge, this study applies the DNA barcoding methodology for the first time in Greece in order to investigate the labeling practices for crustacean products available in the market. A total of 100 tissue samples were collected from various retailers and markets across four Greek cities. In an effort to cover the widest range of products possible, different market presentations were targeted (fresh, frozen and cooked). Genomic DNA was extracted using the DNeasy Blood & Tissue Kit, according to the manufacturer's instructions. The mitochondrial gene selected as the target region of the analysis was cytochrome c oxidase subunit I (COI). PCR products were purified and sequenced using an ABI 3500 Genetic Analyzer. Sequences were manually checked and edited using BioEdit software and compared against those available in the GenBank and BOLD databases. Statistical analyses were conducted in R and PAST software. For most samples, COI amplification was successful, and species-level identification was possible.
The preliminary results estimate moderate mislabeling rates (25%) in the identified samples. Mislabeling was most commonly detected in fresh products, with 50% of the samples in this category labeled incorrectly. Overall, the mislabeling rates detected by our study probably relate to some degree of unintentional misidentification and a lack of knowledge surrounding the legal designations by both retailers and consumers. For some species of crustaceans (e.g., Squilla mantis), the mislabeling appears to be also affected by local labeling practices. Across Greece, S. mantis is sold in the market under two common names, but only one is recognized by the country's legislation, and therefore any mislabeling is probably not profit-motivated. However, the substitution of the speckled shrimp (Metapenaeus monoceros) for the distinct, giant river prawn (Macrobrachium rosenbergii) is a clear example of deliberate fraudulent substitution aiming for profit. To the best of our knowledge, no scientific study investigating substitution and mislabeling rates in crustaceans has been conducted in Greece. For a better understanding of Greece's seafood market, similar DNA barcoding studies in other regions of increased touristic importance (e.g., the Greek islands) should be conducted. Regardless, the expansion of the list of species-specific designations for crustaceans in the country is advised.
Keywords: COI gene, food fraud, labelling control, molecular identification
Procedia PDF Downloads 67
2360 Risk-Sharing Financing of Islamic Banks: Better Shielded against Interest Rate Risk
Authors: Mirzet SeHo, Alaa Alaabed, Mansur Masih
Abstract:
In theory, risk-sharing-based financing (RSF) is considered a cornerstone of Islamic finance. It is argued to render Islamic banks more resilient to shocks. In practice, however, this feature of Islamic financial products is almost negligible. Instead, debt-based instruments, with conventional-like features, have overwhelmed the nascent industry. In addition, the framework of present-day economic, regulatory and financial reality inevitably exposes Islamic banks in dual banking systems to the problems of conventional banks. This includes, but is not limited to, interest rate risk. Empirical evidence has, thus far, confirmed such exposures, despite Islamic banks’ interest-free operations. This study applies system GMM in modeling the determinants of RSF and finds that RSF is insensitive to changes in interest rates. Hence, our results provide support to the “stability” view of risk-sharing-based financing. This suggests RSF as the way forward for risk management at Islamic banks, in the absence of widely acceptable Shariah-compliant hedging instruments. Further support to the stability view is given by evidence of counter-cyclicality. Unlike debt-based lending, which inflates artificial asset bubbles through credit expansion during the upswing of business cycles, RSF is negatively related to GDP growth. Our results also imply a significantly strong relationship between risk-sharing deposits and RSF. However, the pass-through of these deposits to RSF is economically low. Only about 40% of risk-sharing deposits are channeled to risk-sharing financing. This raises questions about the validity of the industry’s claim that depositors accustomed to conventional banking shun risk sharing, and signals potential for better balance sheet management at Islamic banks. Overall, our findings suggest that, on the one hand, Islamic banks can gain ‘independence’ from conventional banks and interest rates through risk-sharing products, the potential for which is enormous.
On the other hand, RSF could enable policy makers to improve systemic stability and restrain excessive credit expansion through its countercyclical features.
Keywords: Islamic banks, risk-sharing, financing, interest rate, dynamic system GMM
Procedia PDF Downloads 316
2359 The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures
Authors: Nicky Wilson, Graeme Ralph
Abstract:
Legacy aerostructure assembly is defined by large components, low build rates, and manual assembly methods. With an increasing demand for commercial aircraft and emerging markets such as the eVTOL (electric vertical take-off and landing) market, current methods of manufacturing are not capable of efficiently hitting these higher-rate demands. This project will look at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study will focus on how data flows both up and down the supply chain to create a digital thread specific to each part and assembly, while enabling machine learning through real-time, closed-loop feedback systems. The study will also develop a bespoke architecture to enable connectivity both within the factory and the wider PLM (product lifecycle management) system, moving away from the traditional point-to-point systems used to connect IO devices towards a hub-and-spoke architecture that will exploit report-by-exception principles. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like from adopting Industry 4 principles. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain and proposes a standardised communications protocol to enable a scalable solution to connect IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments, while also seeing build rates continue to grow.
It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.
Keywords: Industry 4, digital transformation, IoT, PLM, automated assembly, connected factories
Procedia PDF Downloads 78
2358 Study on Control Techniques for Adaptive Impact Mitigation
Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty
Abstract:
Progress in the field of sensors, electronics and computing results in more and more frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezo-electric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion applied for the evacuation of people from heights during a fire. For both systems, it is possible to ensure adaptive performance, but the realization of the system’s adaptation is completely different. The reason for this lies in the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with a minimal change of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas measured in thousands of square centimetres. Because of these facts, the two shock-absorbing systems are controlled based on completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided using a prediction of the entire impact mitigation process. Similarities of both approaches, including the applied models, algorithms and equipment, are discussed.
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber
Procedia PDF Downloads 90
2357 Maximizing Giant Prawn Resource Utilization in Banjar Regency, Indonesia: A CPUE and MSY Analysis
Authors: Ahmadi, Iriansyah, Raihana Yahman
Abstract:
The giant freshwater prawn (Macrobrachium rosenbergii de Man, 1879) is a valuable species for fisheries and aquaculture, especially in Southeast Asia, including Indonesia, due to its high market demand and export potential. The growing demand for prawns is straining the sustainability of the Banjar Regency fishery. To ensure the long-term sustainability and economic viability of prawn fishing in this region, it is imperative to implement evidence-based management practices. This requires comprehensive data on the Catch per Unit Effort (CPUE), the Maximum Sustainable Yield (MSY) and the current rate of prawn resource exploitation. This study analyzed five years of prawn catch data (2019-2023) obtained from the South Kalimantan Marine and Fisheries Services. Fishing gears (hook and line, and cast net) were first standardized with the Fishing Power Index, and effort and MSY were then calculated. The intercept (a) and slope (b) values of the regression curve were used to estimate the catch-maximum sustainable yield (CMSY) and optimal fishing effort (Fopt) levels within the framework of the Surplus Production Model. The estimated rates of resource utilization were then compared to the criteria of The National Commission of Marine Fish Stock Assessment. The findings showed that the CPUE value peaked in 2019 at 33.48 kg/trip, while the lowest value was observed in 2022 at 5.12 kg/trip. The CMSY value was estimated to be 17,396 kg/year, corresponding to an Fopt level of 1,636 trips/year. The highest utilization rate was 56.90%, recorded in 2020, while the lowest rate was observed in 2021 at 46.16%. The annual utilization rates were classified as “medium”, suggesting that increasing fishing effort by 45% could potentially maximize prawn catches at an optimum level. These findings provide a baseline for sustainable fisheries management in the region.
Keywords: giant prawns, CPUE, fishing power index, sustainable potential, utilization rate
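The Surplus Production (Schaefer) calculation described above, fitting CPUE against standardized effort and deriving Fopt and CMSY from the regression intercept (a) and slope (b), can be sketched as follows. The effort and CPUE figures are invented for illustration and do not reproduce the study's estimates:

```python
# Schaefer surplus-production sketch: CPUE = a + b*E, with
# Fopt = -a / (2b) and CMSY = -a^2 / (4b) for b < 0.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a + b*x, returned as (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# FPI-standardized effort (trips/year) and CPUE (kg/trip) -- hypothetical
effort = [400, 800, 1200, 1600, 2000]
cpue   = [30.0, 25.0, 18.0, 12.0, 6.0]

a, b = linear_fit(effort, cpue)   # b < 0: CPUE falls as effort rises
f_opt = -a / (2 * b)              # effort giving the maximum sustainable yield
c_msy = -a ** 2 / (4 * b)         # maximum sustainable yield (kg/year)
print(round(f_opt), round(c_msy))
```

The utilization rate in a given year would then be that year's catch divided by c_msy.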
Procedia PDF Downloads 16
2356 Epidemiology of Congenital Heart Defects in Kazakhstan: Data from Unified National Electronic Healthcare System 2014-2020
Authors: Dmitriy Syssoyev, Aslan Seitkamzin, Natalya Lim, Kamilla Mussina, Abduzhappar Gaipov, Dimitri Poddighe, Dinara Galiyeva
Abstract:
Background: Data on the epidemiology of congenital heart defects (CHD) in Kazakhstan is scarce. Therefore, the aim of this study was to describe the incidence, prevalence and all-cause mortality of patients with CHD in Kazakhstan, using national large-scale registry data from the Unified National Electronic Healthcare System (UNEHS) for the period of 2014-2020. Methods: In this retrospective cohort study, the included data pertained to all patients diagnosed with CHD in Kazakhstan and registered in UNEHS between January 2014 and December 2020. CHD was defined based on International Classification of Diseases 10th Revision (ICD-10) codes Q20-Q26. Incidence, prevalence, and all-cause mortality rates were calculated per 100,000 population. Survival analysis was performed using Cox proportional hazards regression modeling and the Kaplan-Meier method. Results: In total, 66,512 patients were identified. Among them, 59,534 (89.5%) were diagnosed with a single CHD, while 6,978 (10.5%) had multiple CHDs. The median age at diagnosis was 0.08 years (interquartile range (IQR) 0.01 – 0.66) for people with multiple CHD types and 0.39 years (IQR 0.04 – 8.38) for those with a single CHD type. The most common CHD types were atrial septal defect (ASD) and ventricular septal defect (VSD), accounting for 25.8% and 21.2% of single CHD cases, respectively. The most common multiple types of CHD were ASD with VSD (23.4%), ASD with patent ductus arteriosus (PDA) (19.5%), and VSD with PDA (17.7%). The incidence rate of CHD decreased from 64.6 to 47.1 cases per 100,000 population among men and from 68.7 to 42.4 among women. The prevalence rose from 66.1 to 334.1 cases per 100,000 population among men and from 70.8 to 328.7 among women. Mortality rates showed a slight increase from 3.5 to 4.7 deaths per 100,000 in men and from 2.9 to 3.7 in women. Median follow-up was 5.21 years (IQR 2.47 – 11.69).
Male sex (HR 1.60, 95% CI 1.45 - 1.77), having multiple CHDs (HR 2.45, 95% CI 2.01 - 2.97), and living in a rural area (HR 1.32, 95% CI 1.19 - 1.47) were associated with a higher risk of all-cause mortality. Conclusion: The incidence of CHD in Kazakhstan has shown a moderate decrease between 2014 and 2020, while prevalence and mortality have increased. Male sex, multiple CHD types, and rural residence were significantly associated with a higher risk of all-cause mortality.
Keywords: congenital heart defects (CHD), epidemiology, incidence, Kazakhstan, mortality, prevalence
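The per-100,000 figures reported above follow the standard crude-rate formula (events divided by population, scaled). The counts and population below are illustrative assumptions, not UNEHS figures:

```python
def rate_per_100k(events, population):
    """Crude rate per 100,000 population, as used for the CHD
    incidence, prevalence and mortality figures above."""
    return 100_000 * events / population

# Hypothetical: 4,500 new diagnoses in a population of 9,554,000
# (both numbers invented; chosen only to illustrate the arithmetic)
print(round(rate_per_100k(4500, 9_554_000), 1))
```

The same helper applied to prevalent cases or deaths in a year yields the prevalence and mortality rates, respectively.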
Procedia PDF Downloads 94
2355 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, that is, a residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero strength layer are given. Thus, designers often apply the adopted 7 mm rule also for parametric fire exposure. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a larger number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed by a recently developed advanced two-step computational model.
The first step comprises a hygro-thermal model, which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Moreover, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved values of charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and also experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
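The reduced cross-section method that the study refines can be sketched arithmetically for one-sided standard fire exposure: the notional char depth plus the 7 mm zero-strength layer is stripped from the section. The beam depth, charring rate, and exposure time below are illustrative, not the study's cases:

```python
def effective_depth(depth_mm, beta_n, t_min, k0=1.0, d0=7.0):
    """Reduced cross-section method (EN 1995-1-2): residual depth of a
    timber section after t_min minutes of fire exposure on one side.
    beta_n is the notional charring rate (mm/min); d0 is the zero
    strength layer (fixed at 7 mm for standard fire exposure)."""
    d_char = beta_n * t_min    # notional char depth
    d_ef = d_char + k0 * d0    # effective charring depth
    return depth_mm - d_ef

# Softwood beam 200 mm deep, beta_n = 0.7 mm/min, 30 min of exposure
print(round(effective_depth(200.0, 0.7, 30.0), 1))  # -> 172.0 mm remaining
```

The study's point is precisely that d0 and beta_n need different values under parametric fire curves, which this fixed-parameter sketch does not capture.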
Procedia PDF Downloads 168
2354 Ultrasound-Mediated Separation of Ethanol, Methanol, and Butanol from Their Aqueous Solutions
Authors: Ozan Kahraman, Hao Feng
Abstract:
Ultrasonic atomization (UA) is a useful technique for producing a liquid spray for various processes, such as spray drying. Ultrasound generates small droplets (a few microns in diameter) by disintegration of the liquid via cavitation and/or capillary waves, with low droplet velocity and a narrow droplet size distribution. In recent years, UA has been investigated as an alternative for enabling or enhancing ultrasound-mediated unit operations, such as evaporation, separation, and purification. Previous studies on the UA separation of a solvent from a bulk solution were limited to ethanol-water systems. More investigations into ultrasound-mediated separation of other liquid systems are needed to elucidate the separation mechanism. This study was undertaken to investigate the effects of the operational parameters on the ultrasound-mediated separation of three miscible liquid pairs: ethanol-, methanol-, and butanol-water. A 2.4 MHz ultrasonic mister with a diameter of 18 mm and a rated power of 24 W was installed at the bottom of a custom-designed cylindrical separation unit. Air was supplied to the unit (3 to 4 L/min) as a carrier gas to collect the mist. The effects of the initial alcohol concentration, viscosity, and temperature (10, 30 and 50°C) on the atomization rates were evaluated. The alcohol concentration in the collected mist was measured with high-performance liquid chromatography and a refractometer. The viscosity of the solutions was determined using a Brookfield digital viscometer. The alcohol concentration of the atomized mist was dependent on the feed concentration, feed rate, viscosity, and temperature. Increasing the temperature of the alcohol-water mixtures from 10 to 50°C increased the vapor pressure of both the alcohols and water, resulting in an increase in the atomization rates but a decrease in the separation efficiency. The alcohol concentration in the mist was higher than that of the alcohol-water equilibrium at all three temperatures.
More importantly, for ethanol, the ethanol concentration in the mist went beyond the azeotropic point, which cannot be achieved by conventional distillation. Ultrasound-mediated separation is a promising non-equilibrium method for separating and purifying alcohols, which may result in significant energy reductions and process intensification.
Keywords: azeotropic mixtures, distillation, evaporation, purification, separation, ultrasonic atomization
Procedia PDF Downloads 180
2353 Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance
Authors: Soheila Sadeghi
Abstract:
Infrastructure projects often encounter contract variations that can significantly deviate from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research aims to quantitatively assess the impact of changes in contract variations on project performance by conducting an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project. The dataset includes tender budget, contract quantities, rates, claims, and revenue data, providing a unique opportunity to investigate the effects of variations on project outcomes. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology involves establishing a baseline for the project's planned cost and scope by examining the tender budget and contract quantities. Each variation is then analyzed in detail, comparing the actual quantities and rates against the tender estimates to determine their impact on project cost and schedule. The claims data is utilized to track the progress of work and identify deviations from the planned schedule. The study employs statistical analysis using R to examine the dataset, including tender budget, contract quantities, rates, claims, and revenue data. Time series analysis is applied to the claims data to track progress and detect variations from the planned schedule. Regression analysis is utilized to investigate the relationship between variations and project performance indicators, such as cost overruns and schedule delays. The research findings highlight the significance of effective variation management in construction projects. The analysis reveals that variations can have a substantial impact on project cost, schedule, and financial outcomes. 
The study identifies specific variations that had the most significant influence on the Regional Airport Car Park project's performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project's revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact. The research suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes of this research contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents. This constraint restricts the depth of analysis possible in investigating the root causes and full extent of variations' impact on the project. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
Keywords: contract variation impact, quantitative analysis, project performance, claims analysis
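The regression step described above, relating variations to cost overruns, can be illustrated with a minimal sketch. The line items, costs, and variation flags below are hypothetical stand-ins, not figures from the Regional Airport Car Park dataset, and the study itself uses R rather than Python:

```python
import numpy as np

# Hypothetical line items: tender estimate vs. actual claimed cost.
# A flag marks items affected by a contract variation (e.g. PV03, PV06, PV07).
tender = np.array([100.0, 250.0, 80.0, 400.0, 150.0, 60.0])
actual = np.array([110.0, 310.0, 82.0, 520.0, 155.0, 95.0])
varied = np.array([0, 1, 0, 1, 0, 1])  # 1 = item touched by a variation

overrun_pct = (actual - tender) / tender * 100.0

# Simple OLS of percentage overrun on the variation flag: the slope
# estimates the mean extra overrun attributable to varied items.
X = np.column_stack([np.ones_like(overrun_pct), varied])
coef, *_ = np.linalg.lstsq(X, overrun_pct, rcond=None)
baseline, variation_effect = coef

print(f"baseline overrun: {baseline:.1f}%")
print(f"extra overrun on varied items: {variation_effect:.1f}%")
```

With a single dummy regressor, the intercept is the mean overrun of unvaried items and the slope is the difference in group means; the same idea extends to multiple variation codes as separate dummies.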
Procedia PDF Downloads 40
2352 Supercritical Hydrothermal and Subcritical Glycolysis Conversion of Biomass Waste to Produce Biofuel and High-Value Products
Authors: Chiu-Hsuan Lee, Min-Hao Yuan, Kun-Cheng Lin, Qiao-Yin Tsai, Yun-Jie Lu, Yi-Jhen Wang, Hsin-Yi Lin, Chih-Hua Hsu, Jia-Rong Jhou, Si-Ying Li, Yi-Hung Chen, Je-Lueng Shie
Abstract:
Raw food waste has a high water content, so incinerating it increases the cost of treatment; composting or energy recovery is usually used instead. Composting technologies for food waste are mature, but odor, wastewater, and other problems are serious, and the output of compost products is limited. Bakelite, mainly used in the manufacturing of integrated circuit boards, is hard to recycle and reuse directly because of its rigid structure and is also difficult to incinerate, as incomplete incineration produces air pollutants. In this study, supercritical hydrothermal and subcritical glycolysis thermal conversion technology is used to convert biomass wastes of bakelite and raw kitchen waste into carbon materials and biofuels. Batch carbonization tests are performed under high-temperature, high-pressure solvent conditions and different operating conditions, including wet- and dry-base mixed biomass. This study can be divided into two parts. In the first part, bakelite waste is used as a dry-based industrial waste, and in the second part, raw kitchen wastes (lemon, banana, watermelon, and pineapple peel) are used as wet-based biomass wastes. The parameters include reaction temperature, reaction time, mass-to-solvent ratio, and volume filling rate. The yield, conversion, and recovery rates of the products (solid, gas, and liquid) are evaluated and discussed. The results explore the benefits of synergistic effects in thermal glycolysis dehydration and carbonization on the yield and recovery rate of solid products. The purpose is to obtain the optimum operating conditions. This technology is a biomass-negative carbon technology (BNCT); if it is combined with carbon capture and storage (BECCS), it can provide a new direction for 2050 net-zero carbon dioxide emissions (NZCDE).
Keywords: biochar, raw food waste, bakelite, supercritical hydrothermal, subcritical glycolysis, biofuels
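The yield, conversion, and recovery rates evaluated above follow directly from a batch mass balance. The sketch below uses assumed masses and the common literature definitions; the abstract does not state the exact formulas or values used in the study:

```python
# Hypothetical mass balance for one batch carbonization run (values assumed).
feed_dry_mass = 100.0   # g of dry-base biomass fed to the reactor
solid_mass = 35.0       # g of recovered char (biochar)
liquid_mass = 40.0      # g of condensed liquid product
gas_mass = feed_dry_mass - solid_mass - liquid_mass  # gas taken by difference

# Common definitions: yield on a feed basis, conversion as feed consumed,
# recovery as the total accounted-for product mass.
solid_yield = 100.0 * solid_mass / feed_dry_mass
conversion = 100.0 * (feed_dry_mass - solid_mass) / feed_dry_mass
recovery = 100.0 * (solid_mass + liquid_mass + gas_mass) / feed_dry_mass

print(f"solid yield: {solid_yield:.1f}%")
print(f"conversion: {conversion:.1f}%")
print(f"recovery: {recovery:.1f}%")
```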
Procedia PDF Downloads 179
2351 Parametric Inference of Elliptical and Archimedean Family of Copulas
Authors: Alam Ali, Ashok Kumar Pathak
Abstract:
Nowadays, copulas have attracted significant attention for modeling multivariate observations, and the foremost feature of copula functions is that they give us the liberty to study the univariate marginal distributions and their joint behavior separately. The copula parameter captures the intrinsic dependence among the marginal variables, and it can be estimated using parametric, semiparametric, or nonparametric techniques. This work aims to compare the coverage rates between an Elliptical and an Archimedean family of copulas via a fully parametric estimation technique.
Keywords: elliptical copula, Archimedean copula, estimation, coverage rate
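For the elliptical family, one standard parametric route is inversion of Kendall's tau, using the known link τ = (2/π) arcsin(ρ) for the Gaussian copula. The sketch below simulates data with a known parameter and recovers it; the simulation settings are illustrative and are not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate from a bivariate Gaussian copula with known parameter rho.
# Kendall's tau is rank-invariant, so the normal scores can be used
# directly without transforming the margins to uniforms.
rho_true = 0.6
cov = np.array([[1.0, rho_true], [rho_true, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=2000)

def kendall_tau(x, y):
    """Naive O(n^2) Kendall's tau: concordant minus discordant pair share."""
    n = len(x)
    s = 0.0
    for i in range(n):
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return 2.0 * s / (n * (n - 1))

# Inversion estimator for the Gaussian (elliptical) copula parameter:
# tau = (2/pi) * arcsin(rho)  =>  rho = sin(pi * tau / 2).
tau_hat = kendall_tau(z[:, 0], z[:, 1])
rho_hat = np.sin(np.pi * tau_hat / 2.0)
print(f"tau_hat = {tau_hat:.3f}, rho_hat = {rho_hat:.3f}")
```

A coverage-rate study would repeat this over many simulated samples, build a confidence interval for ρ each time, and count how often the interval contains the true value.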
Procedia PDF Downloads 64
2350 Indigenous Pre-Service Teacher Education: Developing, Facilitating, and Maintaining Opportunities for Retention and Graduation
Authors: Karen Trimmer, Raelene Ward, Linda Wondunna-Foley
Abstract:
Within Australian tertiary institutions, the subject of Aboriginal and Torres Strait Islander education has been a major concern for many years. Aboriginal and Torres Strait Islander teachers are significantly under-represented in Australian schools and universities. High attrition rates in teacher education and in the teaching industry have contributed to a minimal growth rate in the numbers of Aboriginal and Torres Strait Islander teachers in previous years. There was an increase of 500 Indigenous teachers between 2001 and 2008, but these numbers still account for only one percent of teaching staff in government schools who identified as Aboriginal and Torres Strait Islander Australians (Ministerial Council for Education, Early Childhood Development and Youth Affairs 2010). Aboriginal and Torres Strait Islander teachers are paramount in fostering student engagement and improving educational outcomes for Indigenous students. Increasing the numbers of Aboriginal and Torres Strait Islander teachers is also a key factor in enabling all students to develop understanding of and respect for Aboriginal and Torres Strait Islander histories, cultures, and languages. An ambitious reform agenda to improve the recruitment and retention of Aboriginal and Torres Strait Islander teachers will be effective only through national collaborative action and co-investment by schools and school authorities, university schools of education, professional associations, and Indigenous leaders and community networks. Whilst the University of Southern Queensland currently attracts Indigenous students to its teacher education programs (61 students in 2013, with an average of 48 enrollments each year since 2010), there is significant attrition during pre-service training. The annual rate of exiting before graduation remains high at 22% in 2012 and was 39% for the previous two years. These participation and retention rates are consistent with other universities across Australia.
Whilst there are aspirations for a growing number of Indigenous people to be trained as teachers, there is a significant loss of students during their pre-service training and within the first five years of employment as a teacher. These trends also reflect the situation where Aboriginal and Torres Strait Islander teachers are significantly under-represented, making up less than 1% of teachers in schools across Australia. Through a project conducted as part of the nationally funded More Aboriginal and Torres Strait Islander Teachers Initiative (MATSITI), we aim to gain an insight into the reasons that impact Aboriginal and Torres Strait Islander students' decisions to exit their program. Through the conduct of focus groups and interviews with two graduating cohorts of self-identified Aboriginal and Torres Strait Islander students, rich data have been gathered to gain an understanding of the barriers and enhancers to the completion of pre-service qualification and the transition to teaching. Having a greater understanding of these reasons then allows the development of collaborative processes and procedures to increase retention and completion rates of new Indigenous teachers. Analysis of factors impacting on exit decisions and transitions has provided evidence to support change of practice, redesign and enhancement of relevant courses, and development of policy/procedures to address identified issues.
Keywords: graduation, indigenous, pre-service teacher education, retention
Procedia PDF Downloads 468
2349 Performance of Different Spray Nozzles in the Application of Defoliant on Cotton Plants (Gossypium hirsutum L.)
Authors: Mohamud Ali Ibrahim, Ali Bayat, Ali Bolat
Abstract:
Defoliant spraying is an important link in the mechanized cotton harvest because adequate and uniform spraying can improve defoliation quality and reduce cotton trash content. In defoliant application, application volume and spraying technology are extremely important. In this study, the effectiveness of defoliant application to cotton plants ready for harvest was determined at two application volumes and with three types of nozzles on a standard field crop sprayer. Experiments were carried out in two phases: field trials and laboratory analysis. Application rates were 250 L/ha and 400 L/ha, and the spray nozzles were (1) a standard flat fan nozzle (TP8006), (2) an air induction nozzle (AI 11002-VS), and (3) a dual pattern nozzle (AI307003VP). A tracer (BSF) and defoliant were applied to mature cotton with approximately 60% open bolls, and sampling for BSF deposition and spray coverage was done at two plant heights (upper and lower layers). Before and after spraying, the open boll and leaf rates on the cotton plants were calculated; filter papers were used to detect BSF deposition, and water sensitive papers (WSP) were used to measure the coverage rate of the spraying methods. A spectrofluorophotometer was used to detect the amount of tracer deposited on targets, and an image-processing computer programme was used to measure the coverage rate on the WSP. The analysis showed that the air induction nozzle (AI 11002-VS) achieved better results than the dual pattern and standard flat fan nozzles in terms of deposition, coverage, leaf defoliation, and boll opening rates. AI nozzles operating at a 250 L/ha application rate provided the highest deposition and coverage rates in applications of the defoliant; in addition, BSF, used as an indicator of the defoliant, reached the undersides of leaves only with this spray nozzle.
After defoliation, the boll opening rate was 85% on the 7th and 12th days after spraying, and the leaf falling rate was 76% at an application rate of 250 L/ha with the air induction (AI 11002) nozzle.
Keywords: cotton defoliant, air induction nozzle, dual pattern nozzle, standard flat fan nozzle, coverage rate, spray deposition, boll opening rate, leaves falling rate
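The WSP coverage-rate measurement described above amounts to thresholding a scanned image and counting stained pixels. The sketch below uses a synthetic grayscale array and an assumed threshold value; it is not the actual image-processing programme used in the study:

```python
import numpy as np

# Hypothetical 8-bit grayscale scan of a water-sensitive paper: droplet
# stains read darker than the bright yellow background.
rng = np.random.default_rng(42)
scan = rng.integers(180, 256, size=(100, 100))            # bright background
scan[20:40, 30:60] = rng.integers(0, 80, size=(20, 30))   # dark spray deposit

threshold = 128  # pixels darker than this count as sprayed (assumed cutoff)
covered = scan < threshold
coverage_rate = 100.0 * covered.mean()
print(f"coverage rate: {coverage_rate:.1f}%")
```

Real WSP analysis would also correct for droplet spread factors and scanner calibration, but the coverage rate itself is just this stained-pixel fraction.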
Procedia PDF Downloads 196