Search results for: single carbon bioconversions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7276

706 Assessment of Soil Quality Indicators in Rice Soil of Tamil Nadu

Authors: Kaleeswari R. K., Seevagan L.

Abstract:

Soil quality in an agroecosystem is influenced by the cropping system and by water and soil fertility management. A valid soil quality index would help to assess soil and crop management practices for desired productivity and soil health. Soil quality indices also provide an early indication of soil degradation and of needed remedial and rehabilitation measures. Imbalanced fertilization and inadequate organic carbon dynamics deteriorate soil quality in an intensive cropping system. The rice soil ecosystem differs from other arable systems since rice is grown under submergence, which requires a different set of key soil attributes for enhancing soil quality and productivity. Assessment of a soil quality index involves indicator selection, indicator scoring and combination of the scores into one comprehensive index. The most appropriate indicators for evaluating soil quality can be selected by establishing a minimum data set, which can be screened by linear and multiple regression, factor analysis and score functions. This investigation was carried out in the intensive rice-cultivating regions (having >1.0 lakh hectares) of Tamil Nadu, viz., Thanjavur, Thiruvarur, Nagapattinam, Villupuram, Thiruvannamalai, Cuddalore and Ramanathapuram districts. In each district, an intensive rice-growing block was identified. In each block, two sampling grids (10 x 10 sq. km) were used with a sampling depth of 10–15 cm. Using GIS coordinates, soil sampling was carried out at various locations in the study area. The numbers of soil sampling points were 41, 28, 28, 32, 37, 29 and 29 in Thanjavur, Thiruvarur, Nagapattinam, Cuddalore, Villupuram, Thiruvannamalai and Ramanathapuram districts, respectively. Principal Component Analysis is a data reduction tool used to select potential indicators. A Principal Component is a linear combination of different variables that represents the maximum variance of the dataset.
Principal Components with eigenvalues equal to or higher than 1.0 were taken for the minimum data set. Principal Component Analysis was used to select representative soil quality indicators in rice soils based on factor loading values and contribution percent values. Variables having significant differences within the production system were used for the preparation of the minimum data set. Each Principal Component explained a certain amount of variation (%) in the total dataset; this percentage provided the weight for its variables. The final Principal Component Analysis based soil quality equation is SQI = ∑ᵢ (Wᵢ × Sᵢ), where Sᵢ is the score for the subscripted variable and Wᵢ is the weighing factor derived from PCA. Higher index scores mean better soil quality. Soil respiration, soil available nitrogen and potentially mineralizable nitrogen were assessed as soil quality indicators in rice soils of the Cauvery Delta zone covering Thanjavur, Thiruvarur and Nagapattinam districts. Soil available phosphorus could be used as a soil quality indicator of rice soils in the Cuddalore district. In rain-fed rice ecosystems of coastal sandy soil, DTPA-Zn could be used as an effective soil quality indicator. Among the soil parameters selected from Principal Component Analysis, microbial biomass nitrogen could be used as a quality indicator for rice soils of the Villupuram district. The Cauvery Delta zone has a better SQI compared with the other intensive rice-growing zones of Tamil Nadu.
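The weighted additive index SQI = ∑ᵢ (Wᵢ × Sᵢ) can be sketched in a few lines; the indicator names, scores and PC-derived weights below are illustrative placeholders, not the study's measured values.

```python
# Sketch of the PCA-weighted soil quality index SQI = sum(W_i * S_i).
# Scores are indicator values transformed to a 0-1 scale; each weight is
# that indicator's Principal Component share of explained variance,
# normalized to sum to 1. All numbers here are hypothetical.

def soil_quality_index(scores, weights):
    """Weighted additive index; weights should sum to ~1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in scores)

scores = {  # scored indicators from a minimum data set (hypothetical)
    "soil_respiration": 0.82,
    "available_N": 0.74,
    "mineralizable_N": 0.69,
}
weights = {  # normalized variance explained by each indicator's PC
    "soil_respiration": 0.45,
    "available_N": 0.35,
    "mineralizable_N": 0.20,
}

sqi = soil_quality_index(scores, weights)
print(round(sqi, 3))
```

A higher SQI simply means a higher weighted sum of the scored indicators, matching the "higher index scores mean better soil quality" interpretation above.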

Keywords: soil quality index, soil attributes, soil mapping, and rice soil

Procedia PDF Downloads 65
705 Strategies by a Teaching Assistant to Support the Classroom Talk of a Child with Communication and Interaction Difficulties in Italy: A Case for Promoting Social Scaffolding Training

Authors: Lorenzo Ciletti, Ed Baines, Matt Somerville

Abstract:

Internationally, support staff with limited training (teaching assistants, TAs) have played a critical role in the education of children with special educational needs and/or disabilities (SEND). Researchers have notably illustrated that TAs support the children’s classroom tasks while teachers manage the whole class. Rarely have researchers investigated TAs’ support for children’s participation in whole-class or peer-group talk, despite this type of “social support” playing a significant role in children’s whole-class integration and engagement with the classroom curriculum and learning. Social support seems particularly crucial for a large proportion of children with SEND, namely those with communication and interaction difficulties (e.g., autism spectrum conditions and speech impairments). This study explored TA practice and, particularly, TA social support in a rarely examined context (Italy). The Italian case was also selected because it provides TAs, known nationally as “support teachers,” with the most comprehensive training worldwide, thus potentially echoing (effective) nuanced practice internationally. Twelve hours of video recordings of a single TA and a child with communication and interaction difficulties (CID) were made. Video data were converted into frequencies of TA multidimensional support strategies, including TA social support and pedagogical assistance. TA-pupil talk oriented to the child's participation in classroom talk was also analysed into thematic patterns. These multi-method analyses were informed by social scaffolding principles: in particular, the extent to which the TA designs instruction contingently on the child’s communication and interaction difficulties and how their social support fosters the child’s highest responsibility in dealing with whole-class or peer-group talk by supplying the least help. The findings showed that the TA rarely supported the group or whole-class participation of the child with CID.
When doing so, the TA seemed to tightly control the content and timing of the child’s contributions to the classroom talk by (a) interrupting the teacher’s whole-class or group conversation to start an interaction between themselves and the child, (b) reassuring the child about the correctness of their talk in private conversations and prompting them to raise their hand and intervene in the whole-class talk, or (c) stopping the child from contributing to the whole-class or peer-group talk when incorrect. The findings are interpreted in terms of their theoretical relation to scaffolding. They have significant implications for promoting social scaffolding in TA training in Italy and elsewhere.

Keywords: children with communication and interaction difficulties, children with special educational needs and/or disabilities, social scaffolding, teaching assistants, teaching practice, whole-class talk participation

Procedia PDF Downloads 73
704 Dry Modifications of PCL/Chitosan/PCL Tissue Scaffolds

Authors: Ozan Ozkan, Hilal Turkoglu Sasmazel

Abstract:

Natural polymers are widely used in tissue engineering applications because of their biocompatibility, biodegradability and solubility in the physiological medium. Synthetic polymers are also widely utilized in tissue engineering applications because they carry no risk of infectious disease and do not cause immune system reactions. However, the disadvantages of both polymer types prevent their efficient individual use as tissue scaffolds. Therefore, the idea of using natural and synthetic polymers together as a single 3D hybrid scaffold, which has the advantages of both and the disadvantages of neither, has entered the literature. Even though these hybrid structures support cell adhesion and/or proliferation, various surface modification techniques are applied to their surfaces to create topographical changes and to obtain the reactive functional groups required for the immobilization of biomolecules, especially on the surfaces of the synthetic polymers, in order to improve cell adhesion and proliferation. The study presented here aimed to improve the surface functionality and topography of layer-by-layer electrospun 3D poly-epsilon-caprolactone/chitosan/poly-epsilon-caprolactone hybrid tissue scaffolds by using atmospheric pressure plasma methods, and thus to improve the cell adhesion and proliferation of these tissue scaffolds. The formation of functional hydroxyl and amine groups and of topographical changes on the scaffold surfaces was realized by using two different atmospheric pressure plasma systems (nozzle type and dielectric barrier discharge (DBD) type) operated under different gas media (air, Ar+O2, Ar+N2). The plasma modification time and distance for the nozzle type plasma system, as well as the plasma modification time and gas flow rate for the DBD type plasma system, were optimized by monitoring the changes in surface hydrophilicity with contact angle measurements.
The topographical and chemical characterizations of these modified biomaterials’ surfaces were carried out with SEM and ESCA, respectively. The results showed that the atmospheric pressure plasma modifications carried out with both the nozzle type and the DBD plasma caused topographical and functionality changes on the surfaces of the layer-by-layer electrospun tissue scaffolds. However, the shelf life studies indicated that the hydrophilicity introduced to the surfaces was mainly due to the functionality changes. According to the optimized results, samples treated with nozzle type air plasma modification applied for 9 minutes from a distance of 17 cm and with Ar+O2 DBD plasma modification applied for 1 minute under a 70 cm3/min O2 flow rate were found to have the highest hydrophilicity compared to pristine samples.

Keywords: biomaterial, chitosan, hybrid, plasma

Procedia PDF Downloads 260
703 Accelerated Carbonation of Construction Materials by Using Slag from Steel and Metal Production as Substitute for Conventional Raw Materials

Authors: Karen Fuchs, Michael Prokein, Nils Mölders, Manfred Renner, Eckhard Weidner

Abstract:

Because of its high CO₂ emissions, the energy consumption for the production of sand-lime bricks is of great concern. In particular, the production of quicklime from limestone and the energy consumption for hydrothermal curing contribute to high CO₂ emissions. Hydrothermal curing is carried out under a saturated steam atmosphere at about 15 bar and 200°C for 12 hours. Therefore, we are investigating the opportunity to replace quicklime and sand in the production of building materials with different types of slag, a calcium-rich waste from steel production. We are also investigating the possibility of substituting CO₂ curing for conventional hydrothermal curing. Six different slags (Linz-Donawitz (LD), ferrochrome (FeCr), ladle (LS), stainless steel (SS), ladle furnace (LF), electric arc furnace (EAF)) provided by "thyssenkrupp MillServices & Systems GmbH" were ground at "Loesche GmbH". Cylindrical blocks with a diameter of 100 mm were pressed at 12 MPa. The composition of the blocks varied between pure slag and mixtures of slag and sand. The effects of pressure, temperature, and time on the CO₂ curing process were studied in a 2-liter high-pressure autoclave. Pressures between 0.1 and 5 MPa, temperatures between 25 and 140°C, and curing times between 1 and 100 hours were considered. The quality of the CO₂-cured blocks was determined by measuring the compressive strength at "Ruhrbaustoffwerke GmbH & Co. KG". The degree of carbonation was determined by total inorganic carbon (TIC) and X-ray diffraction (XRD) measurements. The pH trends in the cross-section of the blocks were monitored using phenolphthalein as a liquid pH indicator. The parameter set that yielded the best performing material was tested on all slag types. In addition, the method was scaled up to steel slag-based building blocks (240 mm x 115 mm x 60 mm) provided by "Ruhrbaustoffwerke GmbH & Co. KG" and CO₂-cured in a 20-liter high-pressure autoclave.
The results show that CO₂ curing of building blocks consisting of pure wetted LD slag leads to severe cracking of the cylindrical specimens: the high CO₂ uptake causes the specimens to expand. However, if LD slag is used to replace quicklime completely but sand only proportionally, dimensionally stable bricks with high compressive strength are produced. The tests to determine the optimum pressure and temperature identify 2 MPa and 50°C as promising parameters for the CO₂ curing process. At these parameters and after 3 h, the compressive strength of LD slag blocks reaches the highest average value of almost 50 N/mm², more than double that of conventional sand-lime bricks. Longer CO₂ curing times do not result in higher compressive strengths. XRD and TIC measurements confirmed the formation of carbonates. All tested slag-based bricks show higher compressive strengths than conventional sand-lime bricks; however, the type of slag has a significant influence on the compressive strength values. The results of the tests in the 20-liter plant agreed well with those of the 2-liter tests. With its comparatively moderate operating conditions, the CO₂ curing process has a high potential for saving CO₂ emissions.

Keywords: CO₂ curing, carbonation, CCU, steel slag

Procedia PDF Downloads 92
702 Adaptation Measures as a Response to Climate Change Impacts and Associated Financial Implications for Construction Businesses by the Application of a Mixed Methods Approach

Authors: Luisa Kynast

Abstract:

Buildings and infrastructure are obviously highly impacted by climate change (CC). Both the design and the materials of buildings need to be resilient to weather events in order to shelter humans, animals, or goods. Just as buildings and infrastructure are exposed to weather events, the construction process itself is generally carried out outdoors without protection from extreme temperatures, heavy rain, or storms. The production process is restricted by technical limitations on processing materials with machines and by the physical limitations of human beings ("outdoor workers"). Due to CC, average weather patterns are expected to change and extreme weather events are expected to occur more frequently and more intensely, and therefore to have a greater impact on production processes and on construction businesses themselves. This research examines this impact by analyzing the association between responses to CC and the financial performance of businesses within the construction industry. After embedding the above-depicted field of research in resource dependency theory, a literature review was conducted to expound the state of research concerning a contingent relation between climate change adaptation measures (CCAM) and corporate financial performance for construction businesses. The examined studies show that this field is rarely investigated, especially for construction businesses. Therefore, reports of the Carbon Disclosure Project (CDP) were analyzed by applying content analysis using the software tool MAXQDA. 58 construction companies located worldwide were examined. To proceed even more systematically, a coding scheme analogous to findings in the literature was adopted. The qualitative analysis was then quantified, and a regression analysis incorporating corporate financial data was conducted.
The results highlight adaptation measures as a crucial response for handling climate change impacts (CCI) by mitigating risks and exploiting opportunities. In the CDP reports, the majority of answers stated increasing costs/expenses as a result of implemented measures; a link to sales/revenue was rarely drawn, although where it was, CCAM were connected to increasing sales/revenues. This presumption is supported by the results of the regression analysis, in which a positive effect of implemented CCAM on construction businesses' short-run financial performance was ascertained. These findings refer to appropriate responses in terms of the number of CCAM implemented. Nevertheless, businesses still show a reluctant attitude toward implementing CCAM, which was confirmed by findings in the literature as well as in the CDP reports. Businesses mainly associate CCAM with costs and expenses rather than with an effect on their corporate financial performance. Most companies underrate the effect of CCI, overrate the costs and expenditures for the implementation of CCAM, and completely neglect the pay-off. This research shall therefore create a basis for bringing CC to the (financial) attention of corporate decision-makers, especially within the construction industry.

Keywords: climate change adaptation measures, construction businesses, financial implication, resource dependency theory

Procedia PDF Downloads 127
701 Superparamagnetic Core Shell Catalysts for the Environmental Production of Fuels from Renewable Lignin

Authors: Cristina Opris, Bogdan Cojocaru, Madalina Tudorache, Simona M. Coman, Vasile I. Parvulescu, Camelia Bala, Bahir Duraki, Jeroen A. Van Bokhoven

Abstract:

The tremendous achievements in the development of society, concretized in ever more sophisticated materials and systems, are largely based on non-renewable resources. Consequently, after more than two centuries of intensive development, we are faced with, among other problems, the decrease of fossil fuel reserves, an increased impact of greenhouse gases on the environment, and economic effects caused by fluctuations in oil and mineral resource prices. The use of biomass may solve part of these problems, and recent analyses have demonstrated that, from the perspective of reducing carbon dioxide emissions, its valorization may bring important advantages provided that genetically modified fast-growing trees or wastes are used as primary sources. In this context, the abundance and complex structure of lignin may offer various possibilities of exploitation. However, its transformation into fuels or chemicals requires complex chemistry involving the cleavage of C-O and C-C bonds and alteration of the functional groups. Chemistry has offered various solutions in this sense; however, despite intense work, there are still many drawbacks limiting industrial application. The proposed technologies have mainly considered homogeneous catalysts, meaning expensive noble-metal-based systems that are hard to recover at the end of the reaction, and the reactions were carried out in organic solvents that are no longer acceptable from an environmental point of view. To avoid these problems, the concept of this work was to investigate the synthesis of superparamagnetic core shell catalysts for the fragmentation of lignin directly in the aqueous phase. The magnetic nanoparticles were covered with a nanoshell of an oxide (niobia) with a double role: to protect the magnetic nanoparticles and to generate a proper (acidic) catalytic function. On this composite, cobalt nanoparticles were deposited in order to catalyze C-C bond splitting.
With this purpose, we developed a protocol to prepare multifunctional, magnetically separable nano-composite Co@Nb2O5@Fe3O4 catalysts. We also established an analytic protocol for the identification and quantification of the fragments resulting from lignin depolymerization in both the liquid and the solid phase. The fragmentation of various lignins occurred on the prepared materials in high yields and with very good selectivity toward the desired fragments. Optimization of the catalyst composition indicated a cobalt loading of 4 wt% as optimal. Working at 180 °C and 10 atm H2, this catalyst allowed a lignin conversion of up to 60%, leading to a mixture containing over 96% C20-C28 and C29-C37 fragments that were then completely fragmented to C12-C16 in a second stage. The investigated catalysts were completely recyclable, and no leaching of the elements included in the composition was detected by inductively coupled plasma optical emission spectrometry (ICP-OES).

Keywords: superparamagnetic core-shell catalysts, environmental production of fuels, renewable lignin, recyclable catalysts

Procedia PDF Downloads 319
700 Understanding the Qualitative Nature of Product Reviews by Integrating Text Processing Algorithm and Usability Feature Extraction

Authors: Cherry Yieng Siang Ling, Joong Hee Lee, Myung Hwan Yun

Abstract:

Usability has become a basic product requirement from the consumer's perspective, and a product that fails this requirement ends up not being used. Identifying usability issues by analyzing the quantitative and qualitative data collected from usability testing and evaluation activities aids the process of product design, yet the lack of studies of analysis methodologies for qualitative text data in the usability field inhibits the potential of these data for more useful applications. Meanwhile, the possibility of analyzing qualitative text data has grown with the rapid development of data analysis fields such as natural language processing, which helps computers understand human language, and machine learning, which provides predictive models and clustering tools. This research therefore aims to study the capability of text processing algorithms in the analysis of qualitative text data collected from usability activities. It utilized datasets collected from an LG neckband headset usability experiment, consisting of headset survey text data, subject data and product physical data. The analysis procedure, integrated with the text processing algorithm, includes embedding the comments into a vector space, labeling them with the subject and product physical feature data, and clustering to validate the resulting comment vector clusters. The results show 'volume and music control button' as the usability feature that matches best with the clusters of comment vectors: the centroid comments of one cluster emphasized button positions, while the centroid comments of the other cluster emphasized button interface issues. When the volume and music control buttons were designed separately, the participants experienced less confusion, and thus the comments mentioned only the buttons' positions.
When the volume and music control buttons were designed as a single button, however, the participants experienced interface issues with the buttons, such as unclear operating methods for functions and confusion between the functions' buttons. The relevance of the cluster centroid comments to the extracted feature demonstrates the capability of text processing algorithms to analyze qualitative text data from usability testing and evaluations.
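The embed-then-cluster step can be pictured with a toy bag-of-words k-means. The four comments, the vocabulary and the two seed centroids below are invented, and the study's actual vectorization method is not specified here; this only shows how comment vectors separate into a "position" cluster and an "interface" cluster.

```python
# Toy sketch of comment clustering: bag-of-words vectors grouped by a
# minimal two-cluster k-means. Comments and seeds are hypothetical; the
# seeds are chosen so that neither cluster ever becomes empty.
from collections import Counter
import math

comments = [
    "volume button position too low",
    "music button position awkward",
    "single button confusing functions",
    "button interface confusing operation",
]

vocab = sorted({w for c in comments for w in c.split()})

def vec(text):
    """Bag-of-words count vector over the shared vocabulary."""
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

def dist(a, b):
    return math.dist(a, b)

vectors = [vec(c) for c in comments]
centroids = [vectors[0][:], vectors[2][:]]  # seed on comments 1 and 3
for _ in range(10):
    groups = [[], []]
    for v in vectors:
        groups[0 if dist(v, centroids[0]) <= dist(v, centroids[1]) else 1].append(v)
    centroids = [[sum(col) / len(g) for col in zip(*g)] for g in groups]

labels = [0 if dist(v, centroids[0]) <= dist(v, centroids[1]) else 1 for v in vectors]
print(labels)  # position comments vs. interface comments
```

The comments nearest each final centroid play the role of the "centroid comments" the abstract uses to interpret each cluster.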

Keywords: usability, qualitative data, text-processing algorithm, natural language processing

Procedia PDF Downloads 270
699 Totally Implantable Venous Access Device for Long Term Parenteral Nutrition in a Patient with High Output Enterocutaneous Fistula Due to Advanced Malignancy

Authors: Puneet Goyal, Aarti Agarwal

Abstract:

Background and Objective: Nutritional support is an integral, though frequently neglected, part of the palliative care of patients with advanced non-resectable abdominal malignancy. Non-healing high-output entero-cutaneous fistulas sometimes require long-term parenteral nutrition to counter catabolism and replace nutrients. We present a case of inoperable pancreatic malignancy with a high-output entero-cutaneous fistula, in which parenteral nutritional support was provided through a totally implantable venous access device (TIVAD). Method and Results: A 55-year-old man diagnosed with carcinoma of the pancreas had developed a high-output entero-cutaneous fistula. His tumor was found to be inoperable, and he was on total parenteral nutrition (TPN) through a routine central line. This line was difficult to maintain as he required long-term TPN, so he was scheduled to undergo TIVAD implantation. An 8 Fr single-lumen catheter with a Groshong non-return valve (Bard Access Systems, Inc., USA) was inserted through the right internal jugular vein under fluoroscopic guidance. The catheter was tunneled subcutaneously, brought toward an infraclavicular pocket, cut to the appropriate length, connected to the port and locked. The port was sutured to the floor of the pocket. Free flow of blood was confirmed on aspiration, and the system was flushed with heparinized saline. No kink was observed along the entire length of the catheter under fluoroscopy. The skin over the infraclavicular pocket was sutured. Long-term catheter care and the associated risks were explained to the patient and his relatives. The patient continued to receive total parenteral nutrition as well as other supportive therapy through the TIVAD for the next 6 weeks, until his demise. Conclusion: TIVADs are the standard of care for long-term venous access in cancer patients requiring chemotherapy. In this case, we extended their use to providing parenteral nutrition and other supportive therapy.
TIVADs can be implanted in advanced cancer patients to provide the venous access required for various palliative treatments and medications. This will help improve quality of life and satisfaction among terminally ill cancer patients.

Keywords: parenteral nutrition, totally implantable venous access device, long term venous access, interventions in anesthesiology

Procedia PDF Downloads 223
698 Flow Field Optimization for Proton Exchange Membrane Fuel Cells

Authors: Xiao-Dong Wang, Wei-Mon Yan

Abstract:

The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to look for an optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case with all channel heights and widths set at 1 mm yields Pcell = 7260 W m⁻². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m⁻², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase sub-rib convection, effectively removing liquid water and enhancing oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured without a large loss in cell performance was also tested.
The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final, diverging channel has a greater impact on cell performance than fine adjustment of the channel widths under the simulation conditions studied herein.

Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection

Procedia PDF Downloads 284
697 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster responsive approach. A potential solution involves the use of electrical segmentation, which involves creating coherence zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical loss, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design robust zones under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. 
The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that uses a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Our experiments show when robust electrical segmentation is beneficial and in which contexts.
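One crude way to picture the flatten-then-cluster idea: on a small invented multiplex graph, keep only edges present in every layer (a stand-in for penalizing the unified representation) and take the connected components of the flattened graph as the K robust zones. The layers, vertices and the simple intersection rule are illustrative simplifications, not the paper's model.

```python
# Sketch of flattening a multiplex graph into robust zones. Each layer is
# a set of edges over the shared vertices 0..5, representing one grid
# situation; all three layers here are invented.

layers = [
    {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)},
    {(0, 1), (1, 2), (3, 4), (4, 5)},
    {(0, 1), (1, 2), (0, 2), (3, 4), (4, 5)},
]

# Penalize edges absent in any layer: keep only the common (robust) edges.
robust_edges = set.intersection(*layers)

def components(n, edges):
    """Connected components of an undirected graph, by depth-first search."""
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    seen, comps = set(), []
    for v in range(n):
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            seen.add(u)
            stack.extend(adj[u])
        comps.append(sorted(comp))
    return comps

zones = components(6, robust_edges)
print(zones)  # two robust zones: buses {0,1,2} and {3,4,5}
```

The edge (2, 3), present in only one layer, is exactly the kind of situation-dependent link that should not glue two zones together.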

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 59
696 Development and Validation of a Semi-Quantitative Food Frequency Questionnaire for Use in Urban and Rural Communities of Rwanda

Authors: Phenias Nsabimana, Jérôme W. Some, Hilda Vasanthakaalam, Stefaan De Henauw, Souheila Abbeddou

Abstract:

Tools for dietary assessment in adults are limited in low- and middle-income settings. The objective of this study was to develop a semi-quantitative food frequency questionnaire (FFQ) and validate it against the multiple-pass 24-h recall tool for use in urban and rural Rwanda. A total of 212 adults (154 females and 58 males), aged 18-49 years, including 105 urban and 107 rural residents from the four regions of Rwanda, were recruited into the present study. A multiple-pass 24-h recall technique was used to collect dietary data in both urban and rural areas in four different rounds, on different days (one weekday and one weekend day), separated by periods of three months, from November 2020 to October 2021. The details of all the foods and beverages consumed over the 24-h period of the day prior to the interview were collected during face-to-face interviews. A list of foods, beverages and commonly consumed recipes was developed by the study researchers and ten research assistants from the different regions of Rwanda. Non-standard recipes were collected when the information was available. A single semi-quantitative FFQ was also developed in the same group discussion prior to the beginning of data collection. The FFQ was administered at the beginning and the end of the data collection period. Data were collected digitally. The amounts of energy and macro-nutrients contributed by each food, recipe and beverage will be computed based on the nutrient composition reported in food composition tables and the weight consumed. Median energy and nutrient contents of food intakes from the FFQ and 24-hour recalls, and the median differences (24-hour recall - FFQ), will be calculated. Kappa, Spearman, Wilcoxon and Bland-Altman plot statistics will be used to evaluate the agreement between the estimated nutrient and energy intakes found by the two methods. Differences will be tested for significance, and all analyses will be done with STATA 11.
Data collection was completed in November 2021. Data cleaning is ongoing, and data analysis is expected to be completed by July 2022. A developed and validated semi-quantitative FFQ will then be available for use in dietary assessment. The developed FFQ will help researchers collect reliable data to support policymakers in planning appropriate dietary change interventions in Rwanda.
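As a minimal sketch of the planned agreement analysis, the Bland-Altman bias and 95% limits of agreement between the two methods can be computed as below. The intake values are hypothetical, and this stands in for only one of the several statistics (Kappa, Spearman, Wilcoxon) the authors plan to run:

```python
from statistics import mean, stdev

def bland_altman_limits(ffq, recall):
    """Bias and 95% limits of agreement between two dietary-assessment
    methods (Bland-Altman): mean difference +/- 1.96 * SD of differences."""
    diffs = [r - f for f, r in zip(ffq, recall)]  # 24-hour recall minus FFQ
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical per-subject energy intakes (kcal/day) from the two methods
ffq    = [1900, 2100, 1750, 2300, 2050]
recall = [1850, 2000, 1800, 2250, 2100]
bias, lo, hi = bland_altman_limits(ffq, recall)
```

If most paired differences fall within (lo, hi), the FFQ can be considered to agree acceptably with the 24-hour recall at the group level.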

Keywords: food frequency questionnaire, reproducibility, 24-H recall questionnaire, validation

Procedia PDF Downloads 122
695 Cluster-Based Exploration of System Readiness Levels: Mathematical Properties of Interfaces

Authors: Justin Fu, Thomas Mazzuchi, Shahram Sarkani

Abstract:

A key factor in technological immaturity in defense weapons acquisition is a lack of understanding of critical integrations at the subsystem and component levels. To address this shortfall, recent research combines integration readiness level (IRL) with technology readiness level (TRL) to form a system readiness level (SRL). SRL can be enriched with more robust quantitative methods to give the program manager a useful tool prior to committing to major weapons acquisition programs. This research harnesses previous mathematical models based on graph theory, Petri nets, and tropical algebra, and proposes a modification of the desired SRL mathematical properties such that a tightly integrated (multitude of interfaces) subsystem can display a lower SRL than an inherently less coupled subsystem. The synthesis of these methods informs an improved decision tool for the program manager committing to expensive technology development. This research also ties the separately developed manufacturing readiness level (MRL) into the network representation of the system and addresses shortfalls in previous frameworks, including the lack of integration weighting and the over-importance of a single extremely immature component. Tropical algebra (based on the minimum of a set of TRLs or IRLs) allows one low IRL or TRL value to diminish the SRL of the entire system, which may not reflect actuality if that component is not critical or tightly coupled. Integration connections are therefore weighted according to importance, and readiness levels are modified to a cardinal scale (based on an analytic hierarchy process). An integration arc's importance depends on the connected nodes and the additional integration arcs connected to those nodes. Lack of integration is represented not by zero but by a perfect integration maturity value; naturally, the importance (or weight) of such an arc is zero. 
To further explore the impact of grouping subsystems, a multi-objective genetic algorithm is then used to find clusters or communities that can be optimized for the most representative subsystem SRL. This novel calculation is then benchmarked through simulation and past defense acquisition program data, focusing on the newly introduced Middle Tier of Acquisition (rapid prototyping and fielding). The model remains a relatively simple, accessible tool, now at higher fidelity and validated against past data, for the program manager deciding major defense acquisition program milestones.
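The weighting idea described above can be sketched as follows. This is an illustrative toy, not the authors' exact formulation: each arc's effective readiness is the tropical (min) combination of its two component TRLs and the arc IRL, and the subsystem score is the importance-weighted mean of arc readiness, so a low-readiness but low-weight arc cannot dominate. A missing integration is modeled, per the abstract, as perfectly mature (IRL = 9) with weight 0:

```python
def arc_readiness(trl_a, trl_b, irl):
    """Tropical-algebra (minimum) combination of component and interface maturity."""
    return min(trl_a, trl_b, irl)

def subsystem_srl(arcs):
    """arcs: list of (trl_a, trl_b, irl, weight) tuples; weighted mean of
    arc readiness, so weight reflects how critical each interface is."""
    num = sum(w * arc_readiness(a, b, i) for a, b, i, w in arcs)
    den = sum(w for *_, w in arcs)
    return num / den if den else 9.0  # no weighted arcs -> treat as fully mature

arcs = [
    (7, 8, 6, 0.5),  # critical, tightly coupled interface
    (3, 9, 4, 0.1),  # immature but low-importance component
    (9, 9, 9, 0.0),  # "missing" integration: perfect IRL, zero weight
]
srl = subsystem_srl(arcs)
```

A pure tropical score would collapse to min(3, 4) = 3 here; the weighted form keeps the immature, low-importance arc from dominating the subsystem score.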

Keywords: readiness, maturity, system, integration

Procedia PDF Downloads 72
694 Beware the Trolldom: Speculative Interests and Policy Implications behind the Circulation of Damage Claims

Authors: Antonio Davola

Abstract:

Moving from the evaluations made by Richard Posner in his judgment in Carhart v. Halaska, the paper analyses the so-called 'litigation troll' phenomenon and the development of a damage claims market, i.e., a market in which the right to propose claims is voluntarily exchangeable for money and can be asserted by private buyers. The aim of our study is to assess whether the implementation of a 'damage claims market' might represent a resource for victims or if, on the contrary, it might operate solely as a speculation tool for private investors. The analysis will move from the US experience and will then focus on the EU framework. Firstly, the paper will analyse the relation between the litigation troll phenomenon and patent troll activity: even though these activities are considered similar by Posner, a comparative study shows how these practices differ significantly in their impact on the market and on consumer protection, even when starting from similar economic perspectives. The second part of the paper will focus on the main concerns specific to litigation trolling. The main issues addressed are the risk that the circulation of damage claims might spur non-meritorious litigation and the implications of the misalignment between the victim of a tort and the actual plaintiff in court arising from the sale of a claim. In its third part, the paper will then focus on the opportunities and benefits that the introduction and regulation of a claims market might imply both for potential claims sellers and buyers, in order to ultimately assess whether such a solution might actually increase individuals' legal empowerment. Through the damage claims market, compensation would be granted more quickly and easily to consumers who had suffered harm: tort victims would, in fact, be compensated instantly upon the sale of their claims without any burden of proof. 
On the other hand, claim-buyers would profit from the gap between the amount that a consumer would accept for an immediate refund and the compensation awarded in court. In the fourth part of the paper, the analysis will focus on the legal legitimacy of litigation trolling in the US and EU frameworks. Even though there is no express provision that forbids the sale of the right to pursue a claim in court - or that deems such a right non-transferable - the procedural laws of individual States (especially in the EU panorama) must be taken into account in evaluating this aspect. The fifth and final part of the paper will summarize the data collected in order to evaluate whether, and through which normative solutions, litigation trolling might benefit competition, and what its overall effect on consumer protection would be.

Keywords: competition, claims, consumer's protection, litigation

Procedia PDF Downloads 220
693 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs can run nonlinear analyses in the time domain. They treat the whole structure as nonlinear, even if there is only one nonlinear degree of freedom (DOF) among thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the Modal Database is extracted and the linear response is calculated. Second, the nonlinear response is obtained through the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. 
The first one is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE Model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE Model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
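The 'updating' idea can be illustrated on a single DOF. The toy below (not the authors' NL SDMM code, and the parameter values are hypothetical) updates a linear frequency response with an amplitude-dependent equivalent stiffness for a softening cubic spring, k_eq = k + (3/4)·alpha·A² with alpha < 0, iterating at one frequency until the amplitude converges:

```python
import math

def nl_amplitude(omega, F=1.0, m=1.0, c=0.05, k=1.0, alpha=-0.05):
    """Steady-state amplitude of a SDOF oscillator with a cubic spring,
    via fixed-point iteration on the equivalent linear stiffness.
    Note: convergence is not guaranteed very close to resonance."""
    A = F / math.hypot(k - m * omega**2, c * omega)  # linear FRF as a start
    for _ in range(200):
        k_eq = k + 0.75 * alpha * A**2               # equivalent stiffness
        A_new = F / math.hypot(k_eq - m * omega**2, c * omega)
        if abs(A_new - A) < 1e-10:
            break
        A = A_new
    return A
```

With alpha = 0 the loop returns the linear amplitude unchanged; a softening spring (alpha < 0) raises the response below resonance, the kind of local correction the NL SDMM applies to the underlying LFRF.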

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 254
692 Consensus Reaching Process and False Consensus Effect in a Problem of Portfolio Selection

Authors: Viviana Ventre, Giacomo Di Tollo, Roberta Martino

Abstract:

The portfolio selection problem includes the evaluation of many criteria that are difficult to compare directly and is characterized by uncertain elements. The portfolio selection problem can be modeled as a group decision problem in which several experts are invited to present their assessments. In this context, it is important to study and analyze the process of reaching a consensus among group members. Indeed, due to the various diversities among experts, reaching consensus is not necessarily simple or easily achievable. Moreover, the concept of consensus is accompanied by the concept of false consensus, which is particularly interesting in the dynamics of group decision-making processes. False consensus can alter the evaluation and selection phase of the alternatives and is the consequence of the decision maker's inability to recognize that their preferences are conditioned by subjective structures. The present work aims to investigate the dynamics of consensus attainment in a group decision problem in which equivalent portfolios are proposed. In particular, the study aims to analyze the impact of the subjective structure of the decision-maker during the evaluation and selection phase of the alternatives. Therefore, the experimental framework is divided into three phases. In the first phase, experts are asked to evaluate the characteristics of all portfolios individually, without peer comparison, arriving independently at the selection of the preferred portfolio. The experts' evaluations are used to obtain individual Analytic Hierarchy Processes that define the weight each expert gives to every criterion with respect to the proposed alternatives. This step provides insight into how the decision maker's decision process develops, step by step, from goal analysis to alternative selection. The second phase includes the description of the decision maker's state through Markov chains. 
In fact, the individual weights obtained in the first phase can be reinterpreted as transition weights from one state to another. Thus, with the construction of the individual transition matrices, the expert's possible next state is determined from the individual weights at the end of the first phase. Finally, the experts meet, and the process of reaching consensus is analyzed by considering the individual states obtained at the previous stage and the false consensus bias. The work contributes to the study of the impact of subjective structures, quantified through the Analytic Hierarchy Process, and how they combine with the false consensus bias in group decision-making dynamics and the consensus reaching process in problems involving the selection of equivalent portfolios.
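The Markov step described above can be sketched as follows. The transition weights are hypothetical stand-ins for the AHP-derived weights of one expert; the expert's state distribution after one revision step is the current distribution multiplied by the row-stochastic transition matrix:

```python
def next_state(dist, P):
    """One Markov step: new distribution = dist . P, with P row-stochastic
    (each row of transition weights sums to 1)."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# hypothetical transition weights for one expert over three preference states
P = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
]
state = [1.0, 0.0, 0.0]          # expert currently committed to state 1
state = next_state(state, P)     # possible next state after one revision
```

Iterating `next_state` traces how an expert's preferences could drift across meetings, which is the quantity the consensus analysis then compares against the false consensus bias.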

Keywords: analytic hierarchy process, consensus building, false consensus effect, Markov chains, portfolio selection problem

Procedia PDF Downloads 81
691 Affects Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge which can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting affect associations in a corpus of emergency phone calls. We also attempted to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), (3) intensity (low, typical, alternating, high). Additional information that gives a clue to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words) and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item is a single previously annotated piece of information). To generate rules in the form of an implication X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids performing needless computations. Then, two basic measures (Support and Confidence) and several additional symmetric and asymmetric objective measures (e.g. Laplace, Conviction, Interest Factor, Cosine, correlation coefficient) were calculated for each rule. Each applied interestingness measure revealed different rules - we selected some top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions. There are, though, strong rules including neutral or even positive emotions. 
Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone calls corpus. The acquired knowledge can be used for prediction, to fill in the emotional profile of a new caller. Furthermore, analysis of a rule's possible context may offer a clue to the situation a caller is in.
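The two basic measures named above can be sketched in a few lines. The annotated calls below are hypothetical miniature transactions, not the authors' corpus; support is the fraction of recordings containing an itemset, and the confidence of X → Y is support(X ∪ Y) / support(X):

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(X, Y, transactions):
    """Confidence of the rule X -> Y."""
    return support(X | Y, transactions) / support(X, transactions)

# toy annotated recordings: each transaction is the set of affects in one call
calls = [
    {"sadness", "anxiety"},
    {"sadness", "anxiety", "stress"},
    {"anger", "stress"},
    {"sadness", "weariness"},
]
s = support({"sadness", "anxiety"}, calls)       # joint support
c = confidence({"sadness"}, {"anxiety"}, calls)  # rule {sadness} -> {anxiety}
```

Apriori's contribution is pruning: an itemset is only counted if all of its subsets are already frequent, which avoids the exponential enumeration mentioned above.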

Keywords: data mining, emergency phone calls, emotional profiles, rules

Procedia PDF Downloads 395
690 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) can overcome these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent's environment, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. 
From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper shows the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and demonstrates its credibility and advantages for strategic decision-making.
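The continuous-action environment step described above can be sketched as follows. This is the general shape of the POMDP transition with transaction costs, not the authors' code; the fee rate, prices, and target weights are hypothetical:

```python
def step(cash, shares, prices, target_weights, fee=0.001):
    """Rebalance the portfolio toward target weights (the agent's continuous
    action), charging a proportional transaction cost on each trade."""
    value = cash + sum(s * p for s, p in zip(shares, prices))
    new_shares, cost = [], 0.0
    for w, p, s in zip(target_weights, prices, shares):
        target = w * value / p              # share count implied by the weight
        cost += fee * abs(target - s) * p   # proportional transaction cost
        new_shares.append(target)
    new_cash = value - sum(t * p for t, p in zip(new_shares, prices)) - cost
    return new_cash, new_shares

cash, shares = 10_000.0, [0.0, 0.0]
prices = [100.0, 50.0]
cash, shares = step(cash, shares, prices, [0.5, 0.3])  # allocate 50% / 30%
```

In a full TD3 setup, the reward at each step would be the change in total portfolio value, and the state would bundle prices with the technical indicators and sentiment features mentioned above.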

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 166
689 Tuning of Indirect Exchange Coupling in FePt/Al₂O₃/Fe₃Pt System

Authors: Rajan Goyal, S. Lamba, S. Annapoorni

Abstract:

The indirect exchange coupled system consists of two ferromagnetic layers separated by a non-magnetic spacer layer. The exchange coupling may be either ferromagnetic or antiferromagnetic, depending on the thickness of the spacer layer. In the present work, the strength of exchange coupling in FePt/Al₂O₃/Fe₃Pt has been investigated by varying the thickness of the spacer layer Al₂O₃. The FePt/Al₂O₃/Fe₃Pt trilayer structure is fabricated on a Si <100> single crystal substrate using a sputtering technique. The thicknesses of FePt and Fe₃Pt are fixed at 60 nm and 2 nm, respectively. The thickness of the spacer layer Al₂O₃ was varied from 0 to 16 nm. The normalized hysteresis loops recorded at room temperature in both the in-plane and out-of-plane configurations reveal that the easy axis lies along the plane of the film. It is observed that the hysteresis loop for ts = 0 nm does not exhibit any knee around H = 0, indicating that the hard FePt layer and soft Fe₃Pt layer are strongly exchange coupled. However, the insertion of an Al₂O₃ spacer layer of thickness ts = 0.7 nm results in the appearance of a minor knee around H = 0, suggesting the weakening of exchange coupling between FePt and Fe₃Pt. The disappearance of the knee with further increase in spacer thickness up to 8 nm suggests the co-existence of ferromagnetic (FM) and antiferromagnetic (AFM) exchange interactions between FePt and Fe₃Pt. In addition, the out-of-plane hysteresis loop shows an asymmetry around H = 0. The exchange field Hex = (Hc↑-Hc↓)/2, where Hc↑ and Hc↓ are the coercivities estimated from the lower and upper branches of the hysteresis loop, increases from ~150 Oe to ~700 Oe. This behavior may be attributed to uncompensated moments in the hard FePt layer and soft Fe₃Pt layer at the interface. The variation in indirect exchange coupling was further investigated using recoil curves. 
It is observed that almost closed recoil curves are obtained for ts = 0 nm up to a reverse field of ~5 kOe. On the other hand, the appearance of appreciably open recoil curves at a lower reverse field of ~4 kOe for ts = 0.7 nm indicates that the uncoupled soft phase undergoes irreversible magnetization reversal at lower reverse fields, suggesting the weakening of exchange coupling. The openness of the recoil curves decreases with increasing spacer thickness up to 8 nm. This behavior may be attributed to the competition between FM and AFM exchange interactions. The FM exchange coupling between FePt and Fe₃Pt due to the porous nature of Al₂O₃ decreases much more slowly than the weak AFM coupling due to interaction between Fe ions of FePt and Fe₃Pt via O ions of Al₂O₃. The hysteresis loop has been simulated using a Monte Carlo method based on the Metropolis algorithm to investigate the variation in strength of exchange coupling in the FePt/Al₂O₃/Fe₃Pt trilayer system.
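The exchange field defined above is a direct computation from the two branch coercivities. The coercivity values below are hypothetical, chosen only to reproduce the ~150 Oe and ~700 Oe endpoints quoted in the abstract:

```python
def exchange_field(hc_up, hc_down):
    """Exchange bias field of an asymmetric hysteresis loop, in Oe:
    Hex = (Hc_up - Hc_down) / 2, with Hc_up and Hc_down the coercivities
    estimated from the lower and upper loop branches."""
    return (hc_up - hc_down) / 2

# hypothetical branch coercivities (Oe) for two spacer thicknesses
hex_small = exchange_field(1150, 850)   # ~150 Oe
hex_large = exchange_field(2200, 800)   # ~700 Oe
```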

Keywords: indirect exchange coupling, MH loop, Monte Carlo simulation, recoil curve

Procedia PDF Downloads 176
688 Investigation of Clusters of MRSA Cases in a Hospital in Western Kenya

Authors: Lillian Musila, Valerie Oundo, Daniel Erwin, Willie Sang

Abstract:

Staphylococcus aureus infections are a major cause of nosocomial infections in Kenya. Methicillin-resistant S. aureus (MRSA) infections are a significant burden to public health and are associated with considerable morbidity and mortality. At a hospital in Western Kenya, two clusters of MRSA cases emerged within short periods of time. In this study, we explored whether these clusters represented a nosocomial outbreak by characterizing the isolates using phenotypic and molecular assays and examining epidemiological data to identify possible transmission patterns. Specimens from the subjects' sites of infection were collected and cultured, and S. aureus isolates were identified phenotypically and confirmed by APIStaph™. MRSA were identified by cefoxitin disk screening per CLSI guidelines. MRSA were further characterized by their antibiotic susceptibility patterns and spa gene typing. Characteristics of cases with MRSA isolates were compared with those with MSSA isolated around the same time period. Two cases of MRSA infection were identified in the two-week period between 21 April and 4 May 2015. Two further MRSA isolates were identified on the same day, 7 September 2015. The antibiotic resistance patterns of the two MRSA isolates in the 1st cluster of cases were different, suggesting that these were distinct isolates. One isolate had spa type t2029 and the other a novel spa type. The two isolates were obtained from urine and an open skin wound. In the 2nd cluster of MRSA isolates, the antibiotic susceptibility patterns were similar, but the isolates had different spa types: one was t037 and the other a novel spa type different from the novel MRSA spa type in the first cluster. Both cases in the second cluster were admitted to the hospital, but one infection was community- and the other hospital-acquired. Only one of the four MRSA cases was classified as a hospital-acquired infection, acquired post-operatively. When compared to other S. 
aureus strains isolated within the same time period from the same hospital, only one spa type, t2029, was found in both MRSA and non-MRSA strains. None of the cases infected with MRSA in the two clusters shared any common epidemiological characteristic such as age, sex or known risk factors for MRSA such as prolonged hospitalization or institutionalization. These data suggest that the observed MRSA clusters were multi-strain clusters and not an outbreak of a single strain. There was no clear relationship between the isolates by spa type, suggesting that no transmission was occurring within the hospital between these cluster cases, but rather that the majority of the MRSA strains were circulating in the community. There was a high diversity of spa types among the MRSA strains, with none of the isolates sharing spa types. Identification of disease clusters in space and time is critical for immediate infection control action and patient management. Spa gene typing is a rapid way of confirming or ruling out MRSA outbreaks so that costly interventions are applied only when necessary.

Keywords: cluster, Kenya, MRSA, spa typing

Procedia PDF Downloads 310
687 Reducing Pressure Drop in Microscale Channel Using Constructal Theory

Authors: K. X. Cheng, A. L. Goh, K. T. Ooi

Abstract:

The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to bring microscale heat transfer effects into macro geometries while overcoming cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by an undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance of the Tree insert and Constructal fin (Cfin) insert were studied experimentally, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve at least comparable increments in both heat transfer coefficient and pressure drop, if not a higher increment in the former. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement dropped to less than 5 percent at high Reynolds numbers. On the other hand, the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert performs better in the low Reynolds number region. 
More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the heat transfer enhancement was more than 30 percent, with a 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increments in both heat transfer coefficient and pressure drop were observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the pressure drop increment relative to the heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a potential future study for achieving a bigger stride in energy efficiency at much lower cost.

Keywords: constructal theory, enhanced heat transfer, microchannel, pressure drop

Procedia PDF Downloads 320
686 Transport of Reactive Carbo-Iron Composite Particles for in situ Groundwater Remediation Investigated at Laboratory and Field Scale

Authors: Sascha E. Oswald, Jan Busch

Abstract:

The in-situ dechlorination of contamination by chlorinated solvents in groundwater via nanoscale zero-valent iron (nZVI) is potentially an efficient and prompt remediation method. A key requirement is that nZVI be introduced into the subsurface in such a way that substantial quantities of the contaminants are actually brought into direct contact with the nZVI in the aquifer. Thus, it could be a more flexible and precise alternative to permeable reactive barrier techniques using granular iron. However, nZVI applications are often limited by fast agglomeration and sedimentation in colloidal suspensions, even more so in aquifer sediments, which is a handicap for treating source zones or contaminant plumes. Colloid-supported nZVI show promising characteristics to overcome these limitations, and Carbo-Iron Colloids are a newly developed composite material aiming for that. The nZVI is built onto finely ground activated carbon, about a micrometer in diameter, which acts as a carrier. The Carbo-Iron Colloids are often suspended with a polyanionic stabilizer, and carboxymethyl cellulose is one with good properties for that purpose. We have investigated the transport behavior of Carbo-Iron Colloids (CIC) on different scales and under different conditions to assess its mobility in aquifer sediments, a key property for making its application feasible. The transport properties were tested in one-dimensional laboratory columns, a two-dimensional model aquifer and an injection experiment in the field. These experiments were accompanied by non-invasive tomographic investigations of the transport and filtration processes of CIC suspensions. The laboratory experiments showed that a large part of the CIC can travel at least on the scale of meters under favorable but realistic conditions. In part, this transport is even similar to that of a dissolved tracer. 
Under less favorable conditions, this distance can be much smaller, and in all cases a fraction of the injected CIC is retained, mainly shortly after entering the porous medium. As a field experiment, a horizontal flow field was established between two wells 5 meters apart in a confined, shallow aquifer at a contaminated site in the North German lowlands. First, a tracer test was performed and a basic model was set up to design the CIC injection experiment. Then, CIC suspension was introduced into the aquifer at the injection well while the second well was pumped, and samples were taken there to observe the breakthrough of CIC. This was based on direct visual inspection and on total particle and iron concentrations of water samples analyzed later in the laboratory. It could be concluded that at least 12% of the injected CIC mass reached the extraction well in due course, some of it traveling distances larger than 10 meters in the non-uniform dipole flow field. This demonstrates that these CIC particles have substantial mobility for reaching larger volumes of a contaminated aquifer and for interacting there, by their reactivity, with dissolved contaminants in the pore space. They therefore seem well suited for groundwater remediation by in-situ formation of reactive barriers for chlorinated solvent plumes, or even for source removal.
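The recovered fraction quoted above comes from integrating concentration times pumping rate over the breakthrough curve. A minimal sketch of that mass balance, with entirely hypothetical sample times, concentrations and rates (the abstract does not report these numbers):

```python
def recovered_fraction(times_h, conc_mg_l, pump_rate_l_h, injected_mg):
    """Fraction of injected particle mass recovered at the pumped well:
    trapezoidal integration of concentration * pumping rate over time."""
    mass = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times_h, conc_mg_l),
                                  zip(times_h[1:], conc_mg_l[1:])):
        mass += 0.5 * (c0 + c1) * (t1 - t0) * pump_rate_l_h
    return mass / injected_mg

times = [0, 10, 20, 30, 40]        # hours since injection (hypothetical)
conc  = [0.0, 2.0, 5.0, 3.0, 1.0]  # mg/L CIC at the pumped well (hypothetical)
frac = recovered_fraction(times, conc, pump_rate_l_h=500, injected_mg=1_000_000)
```

Truncating the integration while the tail of the breakthrough curve is still nonzero underestimates recovery, which is why the abstract reports the 12% figure as a lower bound ("at least").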

Keywords: carbo-iron colloids, chlorinated solvents, in-situ remediation, particle transport, plume treatment

Procedia PDF Downloads 232
685 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve

Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick

Abstract:

Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a newer imaging technique based on time-varying information of radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. Acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF values estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001). The corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF was significantly lower for angiographically abnormal territories than for normal ones (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, P < .001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
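The abstract defines CFR as the ratio of stress to rest MBF. A minimal sketch of that per-territory computation follows; the flow values below are hypothetical illustrations chosen within the 0.44–3.81 ml/min/g range reported, not data from the study:

```python
def coronary_flow_reserve(stress_mbf, rest_mbf):
    """CFR = stress MBF / rest MBF (both in ml/min/g)."""
    if rest_mbf <= 0:
        raise ValueError("rest MBF must be positive")
    return stress_mbf / rest_mbf

# Hypothetical territorial MBF values (ml/min/g)
rest = {"LAD": 0.80, "LCX": 0.95, "RCA": 0.70}
stress = {"LAD": 2.40, "LCX": 1.90, "RCA": 1.05}
cfr = {t: coronary_flow_reserve(stress[t], rest[t]) for t in rest}
```

A CFR well below 2 in a territory (here the hypothetical RCA) is the kind of quantitative finding the abstract argues static visual assessment cannot provide.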

Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-Tetrofosmin

Procedia PDF Downloads 138
684 Preoperative Smoking Cessation Audit: A Single Centre Experience from Metropolitan Melbourne

Authors: Ya-Chu May Tsai, Ibrahim Yacoub, Eoin Casey

Abstract:

The Australian and New Zealand College of Anaesthetists (ANZCA) advises that smoking should not be permitted within 12 hours of surgery. There is little information in the medical literature regarding patients' awareness of perioperative smoking cessation recommendations or their appreciation of how smoking might negatively impact their perioperative course. The aim of the study is to assess the prevalence of current smokers presenting to Werribee Mercy Hospital (WMH) and to evaluate whether pre-operative provision of both written and verbal advice was: 1) effective in improving patient awareness of the benefits of pre-operative smoking cessation; 2) associated with an increase in the number of elective surgical patients who stop smoking at least 12 hours pre-operatively. Methods: The initial survey included all patients who presented to WMH for elective surgical procedures from 19 – 30 September 2016, using a standardized questionnaire focused on patients' smoking history and their awareness of preoperative smoking cessation. The intervention consisted of a standard pre-operative phone call to all patients advising them of the increased perioperative risks associated with smoking and advising them to cease smoking at least 12 hours prior to surgery. In addition, written information on smoking cessation strategies was mailed to all patients at least 1 week prior to the planned procedure date. A post-intervention questionnaire study was conducted on the day of the elective procedure from 10 – 21 October 2016 inclusive. Primary outcomes measured were patients' awareness of smoking cessation recommendations and the proportion of smokers who quit >12 hours pre-operatively, considered a clinically meaningful duration to reduce anaesthetic complications. Comparison of pre- and post-intervention results was made using SPSS 21.0. Results: In the pre-intervention group (n=156), 36 (22.4%) patients were current smokers, 46 were ex-smokers (29.5%) and 74 were non-smokers (48.1%).
Of the smokers, 12 (33%) reported having been informed of smoking cessation prior to the operation and 8 (22%) were aware of the increased intra- and perioperative adverse events associated with smoking. In the post-intervention group (n=177), 38 (21.5%) patients were current smokers, 39 were ex-smokers (22.0%) and 100 were non-smokers (56.5%). Of the smokers, 32 (88.9%) reported having been informed of smoking cessation prior to the operation and 35 (97.2%) reported being aware of the increased intra- and perioperative adverse events associated with smoking. The median time since the last cigarette in the pre-intervention group was 5.5 hours (Q1-Q3 = 2-14) compared with 13 hours (Q1-Q3 = 5-24) in the post-intervention group. Amongst the smokers, smoking cessation at least 12 hours prior to surgery significantly increased from 27.8% pre-intervention to 52.6% post-intervention (P=0.03). Conclusion: A standard preoperative phone call and written instructions on smoking cessation at the time of waitlist placement increased preoperative smoking cessation rates almost 2-fold.
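The cessation-rate comparison (27.8% vs 52.6%, P=0.03) was run in SPSS; an equivalent pooled two-proportion z-test can be sketched with the standard library alone. The quitter counts below (10 of 36 pre, 20 of 38 post) are inferred from the reported percentages and group sizes, so treat them as an assumption:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p from the standard normal CDF, Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# 27.8% of 36 smokers pre vs 52.6% of 38 smokers post (counts inferred)
z, p = two_proportion_z(10, 36, 20, 38)
```

With these inferred counts the test reproduces a p-value of about 0.03, consistent with the abstract.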

Keywords: anaesthesia, audit, perioperative medicine, smoking cessation

Procedia PDF Downloads 287
683 Assessing the Severity of Traffic Related Air Pollution in South-East London to School Pupils

Authors: Ho Yin Wickson Cheung, Liora Malki-Epshtein

Abstract:

Outdoor air pollution presents a significant challenge for public health globally, especially in urban areas, where road traffic is the primary contributor to air pollution. Several studies have documented the adverse relationship between traffic-related air pollution (TRAP) and health, especially for vulnerable groups of the population, particularly young pupils. TRAP can damage the developing brain, restricting children's ability to learn and, more importantly, causing detrimental respiratory issues in later life. But little is known about the specific exposure of children at school during the school day and the impact this may have on their overall exposure to pollution at a crucial time in their development. This project set out to examine the air quality across primary schools in South-East London and to assess the variability of the data based on the schools' geographic location and surroundings. Nitrogen dioxide, PM contaminants, and carbon dioxide were measured with diffusion tubes and portable monitoring equipment at eight schools across three local areas: Greenwich, Lewisham, and Tower Hamlets. This study first examines the geographical features of the schools' surroundings (e.g., coverage of urban road structure and green infrastructure), then utilizes three different methods to capture pollutant data. Moreover, the obtained results are compared with existing data from monitoring stations to understand the differences in air quality before and during the pandemic. Furthermore, most studies in this field have neglected direct human exposure to pollutants, calculating it instead from fixed monitoring station values. Therefore, this paper introduces an alternative approach by calculating human exposure to air pollution from real-time data obtained while commuting within the study areas (driving routes and field walking).
It is found that schools located closest to motorways do not necessarily suffer the most from air pollution contaminants; schools near the most congested routes may also experience poor air quality. Monitored results also indicate that annual air pollution values decreased slightly during the pandemic; however, the majority of the data still exceed the WHO guidelines. Finally, the total human exposure to NO₂ during commuting was calculated for the two selected routes. Results showed the total exposure for route 1 was 21,730 μg/m³ and 28,378.32 μg/m³, and for route 2 was 30,672 μg/m³ and 16,473 μg/m³. The variance might be due to differences in traffic volume, which requires further research. Exposure to NO₂ during commuting was plotted with detailed timesteps, showing that peaks usually occurred while commuting. These findings support the initial assumption about the severity of TRAP. To conclude, this paper yields significant benefits for understanding air quality across schools in London through a new approach to capturing human exposure (driving routes), confirming the severity of air pollution and underlining the need for policymakers to consider environmental sustainability in decision-making to protect society's future pillars.
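The paper's exposure approach integrates real-time concentration readings over a commute. One common formulation, sketched below, is a time-weighted (trapezoidal) integral of concentration; the timestamps and NO₂ values are hypothetical, and the resulting units are μg·min/m³ rather than the bare μg/m³ quoted in the abstract:

```python
def time_weighted_exposure(times_min, conc_ug_m3):
    """Trapezoidal integral of concentration over time (μg·min/m³)."""
    total = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        total += 0.5 * (conc_ug_m3[i] + conc_ug_m3[i - 1]) * dt
    return total

# Hypothetical 30-minute commute sampled every 10 minutes
exposure = time_weighted_exposure([0, 10, 20, 30], [40, 80, 120, 60])
```

The mid-commute peak (120 μg/m³ here) dominates the integral, matching the abstract's observation that exposure peaks occur while commuting.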

Keywords: air pollution, schools, pupils, congestion

Procedia PDF Downloads 104
682 Welfare and Sustainability in Beef Cattle Production on Tropical Pasture

Authors: Andre Pastori D'Aurea, Lauriston Bertelli Feranades, Luis Eduardo Ferreira, Leandro Dias Pinto, Fabiana Ayumi Shiozaki

Abstract:

The aim of this study was to improve the production of beef cattle on tropical pasture without harming the environment. On tropical pastures, cattle's live weight gain is lower than in feedlots, and forage production is seasonal, changing from season to season. Thus, concerned with sustainable livestock production, the Premix Company has developed strategies to improve the production of beef cattle on tropical pasture while ensuring both welfare and productivity. There are two important principles in this production system: 1) increasing individual gains through better supplementation, and 2) increasing productivity per unit area with better forage quality, such as corn silage or other forms of forage conservation (currently used only in winter), together with natural additives in the diet. This production system was applied from June 2017 to May 2018 at the Research Center of the Premix Company, Patrocínio Paulista, São Paulo State, Brazil. The area comprised 9 hectares of Brachiaria brizantha pasture. Thirty-six Nellore steers were evaluated for one year; the initial weight was 253 kg. The parameters used were average daily gain and gain per area. These indicated the corrections to be made and helped design future fertilization. In this case, we fertilized the pasture with 30 kg of nitrogen per animal, divided into two applications. The diet was pasture plus a protein-energy supplement (0.4% of live weight). The supplement included the natural additive Fator P® (Premix Company). Fator P® is an additive composed of amino acids (lysine, methionine and tyrosine at 16,400, 2,980 and 3,000 mg.kg⁻¹, respectively), minerals, probiotics (Saccharomyces cerevisiae, 7 x 10⁸ CFU.kg⁻¹) and essential fatty acids (linoleic and oleic acids at 108.9 and 99 g.kg⁻¹, respectively). Due to seasonal changes, in the winter we supplemented the diet by increasing the forage offer with maize silage.
Animals were offered corn silage at 1% of live weight and the protein-energy supplement with the additive Fator P® at 0.4% of live weight. At the end of the period, productivity was calculated by summing the individual gains over the area used. The average daily gain of the animals was 693 grams per day, and 1,005 kg/hectare/year was produced. This production is about 8 times higher than the average of Brazilian national meat production. For this system to succeed, it is necessary to increase the gain per area, and hence the stocking capacity per area. Pasture management is very important to the project's success because dietary decisions were taken based on the quantity and quality of the forage. We therefore recommend the use of animals in the growth phase, because the response to supplementation is greater in that phase and more animals can be allocated per area. This system's carbon footprint represents a 61.2 percent reduction in emissions compared to the Brazilian average. This beef cattle production system can thus be efficient and environmentally friendly, and the cattle benefit from their natural environment without competing with or impacting human food production.
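The gain-per-area figure follows directly from the reported average daily gain, herd size and pasture area. A minimal arithmetic sketch (assuming a full 365-day occupation, which slightly overshoots the reported 1,005 kg/ha/year, plausibly because the actual trial period was a little shorter):

```python
def gain_per_hectare(adg_kg, n_animals, days, area_ha):
    """Live-weight gain produced per hectare over the period (kg/ha)."""
    return adg_kg * n_animals * days / area_ha

# 693 g/day average daily gain, 36 Nellore steers, 9 ha, one year
yearly = gain_per_hectare(0.693, 36, 365, 9)  # ≈ 1,012 kg/ha/year
```

The same function applied per season would show how winter silage supplementation sustains the annual total despite seasonal forage gaps.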

Keywords: cattle production, environment, pasture, sustainability

Procedia PDF Downloads 127
681 Barrier Analysis of Sustainable Development of Small Towns: A Perspective of Southwest China

Authors: Yitian Ren, Liyin Shen, Tao Zhou, Xiao Li

Abstract:

The past urbanization process in China has brought about a series of problems; the Chinese government has therefore positioned small towns in essential roles for implementing the strategy 'The National New-type Urbanization Plan (2014-2020)'. As the connector and transfer station between cities and countryside, small towns are an important force for narrowing the gap between urban and rural areas and for achieving the mission of new-type urbanization in China. The sustainable development of small towns plays a crucial role because cities are not capable of absorbing the entire surplus rural population. Nevertheless, various types of barriers hinder the sustainable development of small towns, which has limited their development and presented a bottleneck in the Chinese urbanization process. Therefore, this paper develops a deep understanding of these barriers so that effective actions can be taken to address them. The paper takes the perspective of Southwest China (Sichuan province, Yunnan province, Guizhou province, Chongqing Municipality and the Tibet Autonomous Region), because the urbanization rate in Southwest China lags far behind the national average, small towns there account for a great proportion of those in mainland China, and the characteristics of small towns in Southwest China are distinct. This paper investigates the barriers to the sustainable development of small towns located in Southwest China using the content analysis method, combined with fieldwork and interviews in sample small towns, and identifies 18 barriers grouped into four dimensions: institutional, economic, social and ecological.
Based on the research above, a questionnaire survey and data analysis were implemented, and the key barriers hindering the sustainable development of small towns in Southwest China were identified using fuzzy set theory. These barriers are: lack of independent financial power, lack of construction land index, limited financing channels, single industrial structure, and topographic variety and complexity, which mainly belong to the institutional and economic dimensions. In conclusion, policy suggestions are put forward to improve the political and institutional environment of small-town development; market mechanisms should also be introduced into the development process of small towns, which can effectively overcome the economic barriers, promote the sustainable development of small towns, accelerate in-situ urbanization by absorbing peasants from nearby villages, and achieve the mission of people-oriented new-type urbanization in China.
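The abstract ranks barriers from questionnaire data via fuzzy set theory. One simplified form of fuzzy synthetic evaluation assigns a membership grade to each point of a 5-point response scale and scores each barrier by its frequency-weighted grade; the sketch below uses that form with hypothetical response counts and grades, not the paper's actual survey data or membership functions:

```python
def fuzzy_score(freqs, grades=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Frequency-weighted fuzzy membership score for one barrier.

    freqs: response counts on a 5-point scale (strongly disagree ... strongly agree)
    grades: assumed membership grade of each scale point in the fuzzy set 'key barrier'
    """
    total = sum(freqs)
    return sum(f * g for f, g in zip(freqs, grades)) / total

# Hypothetical response counts for two of the identified barriers
barriers = {
    "lack of independent financial power": (2, 5, 10, 30, 53),
    "single industrial structure": (5, 10, 25, 35, 25),
}
ranked = sorted(barriers, key=lambda b: fuzzy_score(barriers[b]), reverse=True)
```

Barriers whose score exceeds a chosen threshold (e.g. 0.7) would then be flagged as key barriers, mirroring the paper's shortlist.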

Keywords: barrier analysis, sustainable development, small town, Southwest China

Procedia PDF Downloads 327
680 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak

Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi

Abstract:

This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer processes occurring in the vehicle underhood region during the vehicle thermal soak phase. Heat retention from the soak period benefits the subsequent cold start through reduced friction loss for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits for both CO₂ emission reduction and fuel economy. When a vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is key to the thermal simulation of the engine bay, to obtaining accurate fluid and metal temperature cool-down trajectories, and to predicting the temperatures at the end of the soak period. Method development was investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient modelling method for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved inherently transiently by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behaviour on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle testing data for the key fluid (coolant, oil) and metal temperatures.
The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data, and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and further embedded in a holistic study of vehicle energy modelling and thermal management. It is also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and the flow development during this stage is vital for accurately predicting the heat transfer coefficients for heat retention modelling. The developed method has demonstrated software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations to be integrated for improving fuel consumption and reducing CO₂ emissions in a timely and robust manner, aiding the development of low-carbon transport technologies.
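The reduced-order modelling the abstract anticipates often starts from a lumped-capacitance (Newton cooling) fit of the simulated cool-down trajectories. The sketch below is such a generic fit form, not the authors' model; the start temperature, 14 °C ambient (matching the WLTP ATCT condition) and time constant are hypothetical:

```python
import math

def lumped_cooldown(t_hr, T0, T_amb, tau_hr):
    """Lumped-capacitance decay: T(t) = T_amb + (T0 - T_amb) * exp(-t / tau)."""
    return T_amb + (T0 - T_amb) * math.exp(-t_hr / tau_hr)

# Hypothetical coolant trajectory: 90 °C at key-off, 14 °C ambient, 2.5 h time constant
temps = [lumped_cooldown(t, 90.0, 14.0, 2.5) for t in range(10)]  # 9-hour soak, hourly
```

Fitting tau per component to the PowerFlow/PowerTHERM trajectories would give the fast-running soak model; a constant tau cannot, however, capture the buoyancy-dominated first stage the abstract highlights, which is one motivation for the coupled 3D simulation.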

Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak

Procedia PDF Downloads 136
679 Quality of Life of Elderly and Factors Associated in Bharatpur Metropolitan City, Chitwan: A Mixed Method Study

Authors: Rubisha Adhikari, Rajani Shah

Abstract:

Introduction: Aging is a natural, global and inevitable phenomenon that every single person has to go through; nobody can escape the process. One of the emerging challenges for public health is to improve the quality of the later years of life as life expectancy continues to increase. Quality of life (QoL) has grown to be a key goal of many public health initiatives. Population aging has become a global phenomenon, with older populations growing more quickly in emerging nations than in industrialized nations, leaving minimal opportunity to regulate the consequences of the demographic shift. Methods: A community-based descriptive analytical approach was used to examine the quality of life and associated factors among elderly people. A mixed method was chosen for the study. For quantitative data collection, a household survey was conducted using the WHOQOL-OLD tool. In-depth interviews were conducted among twenty participants for qualitative data collection. Data generated through in-depth interviews were transcribed verbatim. In-depth interviews lasted about an hour and were audio recorded. The in-depth interview guide had been developed by the research team and pilot-tested before the actual interviews. Results: This study showed associations between quality of life and socio-demographic variables. Among the socio-demographic variables of this study, age (χ²=14.445, p=0.001), gender (χ²=14.323, p<0.001), marital status (χ²=10.816, p=0.001), education status (χ²=23.948, p<0.001), household income (χ²=13.493, p=0.001), personal income (χ²=14.129, p=0.001), source of personal income (χ²=28.332, p<0.001), social security allowance (χ²=18.005, p<0.001) and alcohol consumption (χ²=9.397, p=0.002) are significantly associated with the quality of life of the elderly.
In addition, affordability (χ²=12.088, p=0.001), physical activity (χ²=9.314, p=0.002), emotional support (χ²=9.122, p=0.003), and economic support (χ²=8.104, p=0.004) are associated with the quality of life of elderly people. Conclusion: In conclusion, this mixed-method study provides insight into the attributes of the quality of life of elderly people in Nepal and similar settings. As the geriatric population grows rapidly, maintaining a high quality of life has become a major challenge. This study showed that determinants such as age, gender, marital status, education status, household income, personal income, source of personal income, social security allowance, alcohol consumption, economic support, emotional support, affordability and physical activity are associated with the quality of life of the elderly.

Keywords: ageing, Chitwan, elderly, health status, quality of life

Procedia PDF Downloads 47
678 On-Farm Mechanized Conservation Agriculture: Preliminary Agro-Economic Performance Difference between Disc Harrowing, Ripping and No-Till

Authors: Godfrey Omulo, Regina Birner, Karlheinz Koller, Thomas Daum

Abstract:

Conservation agriculture (CA), a climate-resilient and sustainable practice, has been carried out for over three decades in Zambia. However, its promotion and adoption have been predominantly on a small-scale basis. Despite the plethora of scholarship pointing to the positive benefits of CA with regard to enhanced yield, profitability, carbon sequestration and minimal environmental degradation, these have not stimulated the commensurate agricultural extensification desired for Zambia. The objective of this study was to investigate the potential differences between mechanized conventional and conservation tillage practices in terms of operation time, fuel consumption, labor costs, soil moisture retention, soil temperature and crop yield. An on-farm mechanized conservation agriculture (MCA) experiment arranged in a randomized complete block design with four replications was used. The research was conducted on 15 ha of sandy loam rainfed land: soybeans on 7 ha with plot dimensions of 24 m by 210 m, and maize on 8 ha with plot dimensions of 24 m by 250 m. The three tillage treatments were: residue burning followed by disc harrowing, ripping tillage, and no-till. The crops were rotated in two subsequent seasons. All operations were done using a 60 hp 2-wheel tractor, a disc harrow, a two-tine ripper and a two-row planter. Soil measurements and the agro-economic factors were recorded for two farming seasons. The seasonal results showed that the yields of maize and soybeans under no-till and ripping tillage were not significantly different from those under conventional burning and discing. However, there was a significant difference in soil moisture content between no-till (25.31 ± 2.77 SFU) and disced (11.91 ± 0.59 SFU) plots at depths from 10-60 cm. Soil temperature in no-till plots (24.59 ± 0.91 °C) was significantly lower than in disced plots (26.20 ± 1.75 °C) at depths of 15 cm and 45 cm.
For maize, there was a significant difference in operation time between disc-harrowed (3.68 ± 1.27 hr/ha) and no-till (1.85 ± 0.04 hr/ha) plots, and a significant difference in labor cost between disc-harrowed ($45.45 ± 19.56/ha) and no-till ($21.76/ha) plots. There was no significant difference in fuel consumption between ripping, disc harrowing and direct seeding. For soybeans, there was a significant difference in operation time between no-till (1.96 ± 0.31 hr/ha) and both ripping (3.34 ± 0.53 hr/ha) and disc harrowing (3.30 ± 0.16 hr/ha). Further, fuel consumption and labor on no-till plots were significantly different from both the ripped and disc-harrowed plots. The high seed emergence percentage on the maize disc-harrowed plot (93.75 ± 5.87%) was not significantly different from the ripping and no-till plots. Likewise, the high seed emergence percentage on the soybean ripped plot (93.75 ± 13.03%) did not differ significantly from the other treatments. The results show that it is economically sound and time-saving to practice MCA and obtain viable yields compared to conventional farming. This research fills a gap on the potential of MCA in the context of Zambia and its profitability in incentivizing policymakers to invest in appropriate and sustainable machinery and implements for extensive agricultural production.
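The treatment comparisons are reported as mean ± SD with four replications per treatment. A significance test can be sketched from those summary statistics alone; below is Welch's t-statistic (n=4 per group assumed from the block design; the abstract does not state which test was actually used):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic from group means, standard deviations and sizes."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Maize operation time: disc harrowing 3.68 ± 1.27 vs no-till 1.85 ± 0.04 hr/ha
t_stat = welch_t(3.68, 1.27, 4, 1.85, 0.04, 4)
```

With Welch-Satterthwaite degrees of freedom near 3 (the no-till variance is tiny), a t-statistic of about 2.9 sits near the conventional significance boundary, consistent with the abstract calling this difference significant.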

Keywords: climate-smart agriculture, labor cost, mechanized conservation agriculture, soil moisture, Zambia

Procedia PDF Downloads 134
677 Aerobic Biodegradation of a Chlorinated Hydrocarbon by Bacillus Cereus 2479

Authors: Srijata Mitra, Mobina Parveen, Pranab Roy, Narayan Chandra Chattopadhyay

Abstract:

Chlorinated hydrocarbons can be a major pollution problem in groundwater as well as soil. Many people come into contact with these chemicals daily, either accidentally or professionally in the laboratory. One of the most common sources of chlorinated hydrocarbon contamination of soil and groundwater is industrial effluent. The wide use and discharge of trichloroethylene (TCE), a volatile chlorohydrocarbon from the chemical industry, has led to major water pollution in rural areas. TCE is mainly used as an industrial metal degreaser. Biotransformation of TCE to the potent carcinogen vinyl chloride (VC) by consortia of anaerobic bacteria may play a role in this pollution problem. For these reasons, the aims of the current study were to isolate and characterize the genes involved in TCE metabolism and to carry out an in silico study of those genes. To our knowledge, only one aromatic dioxygenase system, the toluene dioxygenase of Pseudomonas putida F1, has been shown to be involved in TCE degradation. This is the first instance of the Bacillus cereus group being used in the biodegradation of trichloroethylene. A novel bacterial strain, 2479, was isolated from an oil depot site at Rajbandh, Durgapur (West Bengal, India) by the enrichment culture technique. It was identified based on a polyphasic approach and ribotyping. The bacterium was gram-positive, rod-shaped, endospore-forming and capable of degrading trichloroethylene as the sole carbon source. On the basis of phylogenetic data and fatty acid methyl ester analysis, strain 2479 should be placed within the genus Bacillus and species cereus. However, the present isolate (strain 2479) is unique and sharply different from the usual Bacillus strains in its biodegrading nature. The Fujiwara test showed that strain 2479 could degrade TCE efficiently. The gene for TCE biodegradation was PCR-amplified from the genomic DNA of Bacillus cereus 2479 using todC1 gene-specific primers.
The 600 bp amplicon was cloned into the expression vector pUC18 in the E. coli host XL1-Blue, expressed under the control of the lac promoter, and its nucleotide sequence was determined. The gene sequence was deposited at NCBI under Accession no. GU183105. The in silico approach involved predicting the physico-chemical properties of the deduced Tce1 protein using the ProtParam tool. The tce1 gene contained a 342 bp ORF encoding 114 amino acids with a predicted molecular weight of 12.6 kDa; the theoretical pI of the polypeptide was 5.17, molecular formula: C559H886N152O165S8, total number of atoms: 1770, aliphatic index: 101.93, instability index: 28.60, Grand Average of Hydropathicity (GRAVY): 0.152. Three differentially expressed proteins (97.1, 40 and 30 kDa) directly involved in TCE biodegradation were found to react immunologically with antibodies raised against TCE-inducible proteins in Western blot analysis. The present study suggests that the cloned gene product (Tce1) is capable of degrading TCE, as verified chemically.
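Among the ProtParam outputs quoted above, GRAVY is the simplest to reproduce: it is the mean Kyte-Doolittle hydropathy over all residues. A stdlib sketch follows; the short peptide in the example is hypothetical, since the abstract does not give the Tce1 sequence (the full 114-aa sequence under accession GU183105 yields the reported 0.152):

```python
# Kyte-Doolittle hydropathy scale (Kyte & Doolittle, 1982)
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def gravy(seq):
    """Grand Average of Hydropathicity: mean hydropathy over all residues."""
    return sum(KD[aa] for aa in seq.upper()) / len(seq)

# Hypothetical peptide for illustration only
score = gravy("MIVLGA")
```

A positive GRAVY such as Tce1's 0.152 indicates a mildly hydrophobic protein overall, which is consistent with an enzyme acting on a hydrophobic substrate like TCE.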

Keywords: cloning, Bacillus cereus, in silico analysis, TCE

Procedia PDF Downloads 381