Search results for: reducing waste
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5833

463 Computational Fluid Dynamics (CFD) Calculations of the Wind Turbine with an Adjustable Working Surface

Authors: Zdzislaw Kaminski, Zbigniew Czyz, Krzysztof Skiba

Abstract:

This paper discusses the CFD simulation of the flow around the rotor of a Vertical Axis Wind Turbine. Unlike experiments, numerical simulation enables design assumptions to be validated at the design stage, avoiding the costly preparation of a model or a prototype for a bench test. CFD simulation makes it possible to compare the characteristics of the aerodynamic forces acting on the rotor working surfaces and to define operational parameters such as the torque or power generated by a turbine assembly. This research focused on a rotor with blades capable of modifying their working surfaces, i.e. the surfaces absorbing wind kinetic energy. The operation of this rotor is based on adjusting the angular aperture α of the top and bottom parts of the blades mounted on an axis. As the angular aperture α increases, the working surface which absorbs wind kinetic energy also increases. The operation of such turbines is characterized by parameters like the angular aperture of the blades, power, torque, and speed for a given wind speed. These parameters have an impact on the efficiency of the assembly. The distribution of forces acting on the working surfaces of our turbine changes with the angular velocity of the rotor. Moreover, the resultant of the forces acting on the advancing and retreating blades should be as high as possible. This paper is part of research aimed at improving the efficiency of the rotor assembly. Using simulation, the courses of the above parameters were therefore studied over three full rotations, individually for each blade, for three angular apertures of the blade working surfaces (30°, 60°, 90°), at three wind speeds (4 m/s, 6 m/s, 8 m/s) and rotor speeds ranging from 100 to 500 rpm. Finally, characteristics of the torque coefficient and power as a function of time were created for each blade separately and for the entire rotor, and from these the correlation between turbine rotor power and wind speed was determined for varied values of rotor rotational speed.
By processing these data, the correlation between the power of the turbine rotor and its rotational speed was specified for each angular aperture of the working surfaces. Finally, the optimal values, i.e. those giving the highest output power for given wind speeds, were read off. The research results in the basic characteristics of turbine rotor power as a function of wind speed for the three angular apertures of the blades. Given the nature of rotor operation, the growth in turbine output can be estimated as the angular aperture of the blades increases. The controlled adjustment of angle α enables a smooth adjustment of the power generated by the turbine rotor. If wind speed is significant, this type of adjustment enables the output power to remain at the same level (by reducing angle α) with no risk of damaging the structure. This work has been financed by the Polish Ministry of Science and Higher Education.
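The non-dimensional quantities discussed above can be sketched in a few lines. The rotor geometry, air density and operating point below are illustrative assumptions, not values from the paper:

```python
def power_coefficient(power_w, wind_speed_ms, swept_area_m2, air_density=1.225):
    """Fraction of the wind's kinetic power captured by the rotor."""
    available_power = 0.5 * air_density * swept_area_m2 * wind_speed_ms ** 3
    return power_w / available_power

def torque_coefficient(torque_nm, wind_speed_ms, swept_area_m2, radius_m,
                       air_density=1.225):
    """Torque made non-dimensional by dynamic pressure, swept area and radius."""
    return torque_nm / (0.5 * air_density * swept_area_m2
                        * wind_speed_ms ** 2 * radius_m)

# Illustrative values only (not results from the paper): a small VAWT
# producing 120 W at 6 m/s with a 2 m^2 swept area.
cp = power_coefficient(120.0, 6.0, 2.0)
```

Plotting such coefficients against rotor speed for each blade aperture yields characteristics of the kind the paper reports.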

Keywords: computational fluid dynamics, numerical analysis, renewable energy, wind turbine

Procedia PDF Downloads 193
462 Capital Accumulation and Unemployment in Namibia, Nigeria and South Africa

Authors: Abubakar Dikko

Abstract:

The research investigates the causes of unemployment in Namibia, Nigeria and South Africa, and the role of capital accumulation in reducing the unemployment profile of these economies, as proposed by post-Keynesian economics. This is conducted through an extensive review of the literature on NAIRU models, focused on the post-Keynesian view of unemployment within the NAIRU framework. The NAIRU (non-accelerating inflation rate of unemployment) model has become a dominant framework used in macroeconomic analyses of unemployment. The study takes up the post-Keynesian argument that capital accumulation is a major determinant of unemployment. Unemployment remains the fundamental socio-economic challenge facing African economies and a burden to their citizens. Namibia, Nigeria and South Africa are major African nations battling high unemployment rates; in 2013, the countries recorded rates of 16.9%, 23.9% and 24.9% respectively. Most of the unemployed in these economies are youth. Roughly 40% of working-age South Africans have jobs, and the share is lower in Nigeria and Namibia. Unemployment in Africa has wide implications for households, leading to extensive poverty, inequality and rampant criminality. Recently, South Africa experienced xenophobic attacks driven by unemployment: the high unemployment rate led citizens to chase foreigners out of the country, claiming that they had taken their jobs. The study proposes that there is a strong relationship between capital accumulation and unemployment in Namibia, Nigeria and South Africa, and that insufficient capital accumulation is responsible for the high unemployment rates in these countries. For these economies to achieve a steady-state level of employment and a satisfactory level of economic growth and development, capital accumulation needs to take place.
The countries in the study were selected after critical research and investigation, based on the following criteria: African economies with unemployment rates above 15% and roughly 40% of the workforce without jobs (a level of unemployment the International Labour Organization (ILO) describes as critical in Africa), and African countries with a low level of capital accumulation. Adequate statistical measures were employed using time-series analysis, and the results revealed that capital accumulation is the main driver of unemployment performance in the chosen African countries: an increase in the accumulation of capital causes unemployment to fall significantly. The results of the research will be useful and relevant to the federal governments and the ministries, departments and agencies (MDAs) of Namibia, Nigeria and South Africa in resolving the issue of high and persistent unemployment, a great burden that slows the growth and development of developing economies. The results can also be useful to the World Bank, the African Development Bank and the ILO in their further research and studies on how to tackle unemployment in developing and emerging economies.
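The time-series relationship described above can be illustrated, in highly simplified form, as an ordinary least-squares regression of the unemployment rate on capital accumulation growth. The series below are made-up placeholders, not the study's data, and a real analysis would use proper time-series methods rather than plain OLS:

```python
# Illustrative annual series (made-up numbers, not the study's data):
# capital accumulation growth (%) and the unemployment rate (%).
capital_growth = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
unemployment   = [26.0, 25.2, 24.1, 23.4, 22.2, 21.5, 20.3]

def ols(x, y):
    """Ordinary least squares: returns (intercept, slope) of y = b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx
    return my - b1 * mx, b1

b0, b1 = ols(capital_growth, unemployment)
# A negative slope b1 is what the post-Keynesian hypothesis predicts:
# faster capital accumulation is associated with lower unemployment.
```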

Keywords: capital accumulation, unemployment, NAIRU, Post-Keynesian economics

Procedia PDF Downloads 242
461 The Use of Stroke Journey Map in Improving Patients' Perceived Knowledge in Acute Stroke Unit

Authors: C. S. Chen, F. Y. Hui, B. S. Farhana, J. De Leon

Abstract:

Introduction: Stroke can lead to long-term disability, affecting one's quality of life. Providing stroke education to patients and family members is essential to optimize stroke recovery and prevent recurrent stroke. Currently, nurses conduct stroke education by handing out pamphlets and explaining their contents to patients. However, this is not always effective, as nurses have varying levels of knowledge and the depth of content discussed with the patient may not be consistent. With the advancement of information technology, health education is increasingly being disseminated via electronic software, and studies have shown this to benefit patients. Hence, a multi-disciplinary team consisting of doctors, nurses and allied health professionals was formed to create the stroke journey map software to deliver consistent and concise stroke education. Research Objectives: To evaluate the effectiveness of a stroke journey map software in improving patients' perceived knowledge in the acute stroke unit during hospitalization. Methods: Patients admitted to the acute stroke unit were given the stroke journey map software during patient education. The software consists of 31 brightly coloured interactive slides and 4 videos, based on input provided by the multi-disciplinary team. Participants were then assessed with pre- and post-survey questionnaires before and after viewing the software. The questionnaire consists of 10 questions on a 5-point Likert scale, summing to a total score of 50. The inclusion criteria were patients diagnosed with ischemic stroke who were cognitively alert and oriented. The study was conducted between May 2017 and October 2017. Participation was voluntary. Results: A total of 33 participants took part in the study. The results demonstrated that the use of a stroke journey map as a stroke education medium was effective in improving patients' perceived knowledge.
A comparison of pre- and post-implementation data for the stroke journey map revealed an overall mean increase in patients' perceived knowledge from 24.06 to 40.06. The data were further broken down to evaluate patients' perceived knowledge in 3 domains: (1) understanding of the disease process; (2) management and treatment plans; (3) post-discharge care. The domains saw increases in mean score from 10.7 to 16.2, 6.9 to 11.9 and 6.6 to 11.7, respectively. Project Impact: The implementation of the stroke journey map has a positive impact by (1) increasing patients' perceived knowledge, which could contribute to greater empowerment over their health; (2) reducing the need for printed stroke education material, making it environmentally friendly; (3) decreasing the time nurses spend on education, leaving more time to attend to patients' needs. Conclusion: This study has demonstrated the benefit of using a stroke journey map as a platform for stroke education. Overall, it has increased patients' perceived knowledge of their disease process, management and treatment plans, and the discharge process.
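The reported gains can be recomputed directly from the means quoted in this abstract, which is a quick consistency check on the figures (the per-domain means sum approximately to the overall means, allowing for rounding):

```python
# Means reported in the abstract: (pre, post) overall and per domain.
overall = (24.06, 40.06)
domains = {
    "disease process":          (10.7, 16.2),
    "management and treatment": (6.9, 11.9),
    "post-discharge care":      (6.6, 11.7),
}

overall_gain = round(overall[1] - overall[0], 2)   # 16.0 points on a 50-point scale
domain_gains = {name: round(post - pre, 1)
                for name, (pre, post) in domains.items()}
```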

Keywords: acute stroke, education, ischemic stroke, knowledge, stroke

Procedia PDF Downloads 141
460 Secure Optimized Ingress Filtering in Future Internet Communication

Authors: Bander Alzahrani, Mohammed Alreshoodi

Abstract:

Information-centric networking (ICN) using architectures such as the Publish-Subscribe Internet Technology (PURSUIT) has been proposed as a new networking model aimed at replacing the current end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, which allows control-plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple, lightweight mechanism based on Bloom filter technologies to forward packets. Although this forwarding scheme solves many problems of today's Internet, such as routing-table growth and scalability issues, it is vulnerable to brute-force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks such as denial-of-service attacks at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are attached directly to end-user nodes and perform a security test for maliciously injected random packets at the ingress of the path, preventing brute-force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifiers (FIds), uses a cryptographically secure hash function to create a 64-bit hash h over the formed FId, included in the packet for authentication purposes. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network.
We optimize the FId to the minimum possible filling factor, where ρ ≤ ρm, while still supporting longer delivery trees, so network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
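A minimal sketch of the ingress test described above. The FId width, the value of ρm, and the keyed-hash construction are illustrative assumptions; the abstract specifies only that a cryptographically secure 64-bit hash h over the FId is carried in the packet:

```python
import hashlib

FID_BITS = 256   # assumed in-packet Bloom filter width (illustrative)
RHO_MAX = 0.5    # assumed maximum allowed filling factor rho_m (illustrative)

def filling_factor(fid: int) -> float:
    """Fraction of bits set in the forwarding identifier."""
    return bin(fid).count("1") / FID_BITS

def authenticate(fid: int, key: bytes) -> bytes:
    """64-bit hash h over the FId, as the topology manager would compute it.
    The paper's exact construction is not given here; this sketch uses a
    keyed SHA-256 truncated to 8 bytes."""
    return hashlib.sha256(key + fid.to_bytes(FID_BITS // 8, "big")).digest()[:8]

def ingress_check(fid: int, h: bytes, key: bytes) -> bool:
    """Edge-FW test: reject over-full or unauthenticated FIds."""
    return filling_factor(fid) <= RHO_MAX and authenticate(fid, key) == h
```

An attacker injecting a random FId with many bits set fails the ρ check, and a random FId with few bits set fails the hash check, which is the combination the scheme relies on.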

Keywords: forwarding identifier, filling factor, information centric network, topology manager

Procedia PDF Downloads 132
459 The Potential of On-Demand Shuttle Services to Reduce Private Car Use

Authors: B. Mack, K. Tampe-Mai, E. Diesch

Abstract:

Findings of an ongoing discrete choice study of future transport mode choice will be presented. Many urban centers face the triple challenge of ever-increasing traffic congestion, environmental pollution, and greenhouse gas emissions brought about by private car use. In principle, private car use may be diminished by extending public transport systems such as bus lines, trams, tubes, and trains. However, there are limits to increasing the (perceived) spatial and temporal flexibility and reducing peak-time crowding of classical public transport systems. An emerging new type of system, publicly or privately operated on-demand shuttle bus services, seems suitable to ameliorate the situation. A fleet of on-demand shuttle buses operates without fixed stops and schedules. It may be deployed efficiently in that each bus picks up passengers whose itineraries may be combined into an optimized route. Crowding may be minimized by limiting the number of seats and the inter-seat distance in each bus. The study is conducted as a discrete choice experiment. The choice between private car, public transport, and shuttle service is registered as a function of several push and pull factors (financial costs, travel time, walking distances, mobility tax/congestion charge, and waiting time/parking space search time). After completing the discrete choice items, study participants rate the three modes of transport with regard to the pull factors of comfort, safety, privacy, and opportunity to engage in activities like reading or surfing the internet. These ratings are entered as additional predictors into the discrete choice experiment regression model. The study is conducted in the region of Stuttgart in southern Germany. N=1000 participants are being recruited. Participants are between 18 and 69 years of age, hold a driver's license, and live in the city or the surrounding region of Stuttgart.
In the discrete choice experiment, participants are asked to assume they lived within the Stuttgart region, but outside of the city, and were planning the journey from their apartment to their place of work, training, or education during the peak traffic time in the morning. Then, for each item of the discrete choice experiment, they are asked to choose between the transport modes of private car, public transport, and on-demand shuttle in the light of particular values of the push and pull factors studied. The study will provide valuable information on the potential of switching from private car use to the use of on-demand shuttles, but also on the less desirable potential of switching from public transport to on-demand shuttle services. Furthermore, information will be provided on the modulation of these switching potentials by pull and push factors.
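Discrete choice experiments of this kind are typically analysed with a multinomial logit model, in which each mode's choice probability is a softmax over linear utilities of the push and pull factors. The coefficients and attribute values below are hypothetical, not estimates from this study:

```python
import math

# Hypothetical utility weights (not estimated from the study's data):
# negative weights on cost (EUR), in-vehicle time (min) and access time (min).
BETA = {"cost": -0.10, "time": -0.03, "access": -0.05}

def utility(attrs):
    return sum(BETA[k] * v for k, v in attrs.items())

def choice_probabilities(alternatives):
    """Multinomial-logit probabilities over the available modes."""
    expu = {name: math.exp(utility(a)) for name, a in alternatives.items()}
    total = sum(expu.values())
    return {name: e / total for name, e in expu.items()}

probs = choice_probabilities({
    "car":     {"cost": 8.0, "time": 30, "access": 5},
    "public":  {"cost": 4.0, "time": 45, "access": 10},
    "shuttle": {"cost": 5.0, "time": 35, "access": 3},
})
```

Varying the attribute levels across choice items, as the experiment does, is what identifies the weights and hence the switching potentials between modes.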

Keywords: determinants of travel mode choice, on-demand shuttle services, private car use, public transport

Procedia PDF Downloads 156
458 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitations on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline that predicts cellular component morphology for virtual-cell generation from fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted-light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground-truth fluorescence nuclei images. Results: After one week of training using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images.
Similar training sessions with improved membrane image quality, including a clear lining and shape of the membrane showing the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
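The normalization step in the Methods (each image scaled to a mean pixel intensity of 0.5) might look like the following sketch. The clipping and per-image handling are assumptions, since the abstract does not describe the pipeline's internals:

```python
def normalize_to_mean(image, target_mean=0.5):
    """Scale pixel intensities so the image mean equals target_mean.
    `image` is a list of rows of floats in [0, 1]; values are clipped at 1.0
    after scaling. These details are assumptions, not the pipeline's code."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    if mean == 0:
        return image
    scale = target_mean / mean
    return [[min(p * scale, 1.0) for p in row] for row in image]
```

Applying this per slice across a 20-image z-stack gives every slice the same mean intensity before training, which is the consistency the pipeline relies on.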

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 180
457 Experimental Investigation on Tensile Durability of Glass Fiber Reinforced Polymer (GFRP) Rebar Embedded in High Performance Concrete

Authors: Yuan Yue, Wen-Wei Wang

Abstract:

The objective of this research is to comprehensively evaluate the impact of alkaline environments on the durability of Glass Fiber Reinforced Polymer (GFRP) reinforcements in concrete structures and to further explore their potential value within the construction industry. Specifically, we investigate the effects of two widely used high-performance concrete (HPC) materials on the durability of GFRP bars embedded within them under varying temperature conditions. A total of 279 GFRP bar specimens were manufactured for microscopic and mechanical performance tests. Among them, 270 specimens were used to test the residual tensile strength after 120 days of immersion, while 9 specimens were utilized for microscopic testing to analyze degradation damage. SEM techniques were employed to examine the microstructure of the GFRP and the cover concrete. Unidirectional tensile tests were conducted to determine the remaining tensile strength after corrosion. The experimental variables consisted of four types of concrete (engineered cementitious composite (ECC), ultra-high-performance concrete (UHPC), and two types of ordinary concrete with different compressive strengths) as well as three acceleration temperatures (20, 40, and 60°C). The experimental results demonstrate that high-performance concrete offers superior protection for GFRP bars compared to ordinary concrete. The two types of HPC enhance durability through different mechanisms: one by reducing the pH of the concrete pore fluid and the other by decreasing permeability. For instance, ECC improves the durability of embedded GFRP by lowering the pH of the pore fluid: after 120 days of accelerated immersion at 60°C, GFRP in ECC (pH = 11.5) retained 68.99% of its strength, while in PC1 (pH = 13.5) it retained 54.88%. UHPC, on the other hand, enhances the durability of FRP reinforcement by decreasing the porosity and increasing the compactness of its protective layer.
Due to its fillers, UHPC typically exhibits lower porosity, higher density, and greater resistance to permeation than PC2 with a similar pore fluid pH, resulting in different degrees of durability for GFRP bars embedded in UHPC and PC2 after 120 days of immersion at 60°C, with residual strengths of 66.32% and 60.89%, respectively. Furthermore, SEM analysis revealed no noticeable evidence of fiber deterioration in any examined specimen, suggesting that uneven stress distribution resulting from interface segregation and matrix damage, rather than fiber corrosion, is the primary cause of the tensile strength reduction in GFRP. Moreover, long-term prediction models were used to calculate residual strength over time for reinforcement embedded in HPC under high-temperature, high-humidity conditions, indicating that reinforcement embedded in HPC retains approximately 75% of its initial strength after 100 years of service.
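Long-term prediction models of the kind referred to above are commonly based on exponential strength decay combined with Arrhenius time-shifting of accelerated-ageing data. The sketch below follows that common approach; the paper's actual model form, activation energy and service temperature are not given in the abstract, so those are assumptions:

```python
import math

def retention(t_days, tau_days):
    """Strength retention (%) under an exponential degradation model
    Y(t) = 100 * exp(-t / tau), with tau fitted to ageing data."""
    return 100.0 * math.exp(-t_days / tau_days)

def fit_tau(t_days, retention_pct):
    """Back out tau from a single measured retention point."""
    return -t_days / math.log(retention_pct / 100.0)

def arrhenius_shift(ea_j_per_mol, t_service_k, t_accel_k, r_gas=8.314):
    """Time-shift factor between temperatures: degradation that takes 1 day
    at t_accel_k takes `shift` days at the cooler t_service_k."""
    return math.exp((ea_j_per_mol / r_gas) * (1.0 / t_service_k - 1.0 / t_accel_k))

# Example: tau fitted from the 60 degC UHPC point quoted above (66.32% at
# 120 days), shifted to a hypothetical 20 degC service temperature with an
# assumed activation energy of 50 kJ/mol.
tau_60 = fit_tau(120.0, 66.32)
tau_service = tau_60 * arrhenius_shift(50_000.0, 293.15, 333.15)
```

Evaluating `retention` with the shifted tau over a 100-year horizon is how a long-term figure like the 75% quoted above would typically be produced.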

Keywords: GFRP bars, HPC, degeneration, durability, residual tensile strength

Procedia PDF Downloads 31
456 The Effectiveness of Congressional Redistricting Commissions: A Comparative Approach Investigating the Ability of Commissions to Reduce Gerrymandering with the Wilcoxon Signed-Rank Test

Authors: Arvind Salem

Abstract:

Voters across the country are transferring the power of redistricting from state legislatures to commissions in order to secure "fairer" districts by curbing the influence of gerrymandering on redistricting. Gerrymandering, the intentional drawing of distorted districts to achieve political advantage, has become extremely prevalent, generating widespread voter dissatisfaction and leading states to adopt commissions for redistricting. However, the efficacy of these commissions is dubious: some argue that they constitute a panacea for gerrymandering, while others contend that commissions have relatively little effect. A result showing that commissions are effective would allay these fears, supplying ammunition for activists across the country to advocate for commissions in their states and reducing the influence of gerrymandering across the nation. A result against commissions, however, may reaffirm doubts about commissions and pressure lawmakers to improve them or even abandon the commission system entirely. Additionally, these commissions are publicly funded, so voters have a financial interest in, and a responsibility to know, whether they are effective. Currently, nine states place commissions in charge of redistricting: Arizona, California, Colorado, Michigan, Idaho, Montana, Washington, and New Jersey (Hawaii also has a commission but is excluded for reasons given below). This study compares the degree of gerrymandering in the 2022 election ("after") to that in the election in which voters decided to adopt commissions ("before"). The before-election provides a valuable benchmark for assessing the efficacy of commissions, since voters in those elections clearly found the districts to be unfair; comparing the current election to that one is therefore a good way to determine whether commissions have improved the situation.
At the time Hawaii adopted commissions, it had only a single at-large district, so its "before" metrics could not be calculated and it was excluded. This study uses three methods to quantify the degree of gerrymandering: the efficiency gap, the difference between the percentage of seats and the percentage of votes, and the mean-median difference. Each of these metrics has unique advantages and disadvantages, but together they form a balanced approach to quantifying gerrymandering. The study uses a Wilcoxon signed-rank test with a null hypothesis that the metric values after the election are greater than or equal to those before, and an alternative hypothesis that the values before the election are greater than those after, using a 0.05 significance level and an expected difference of 0. Accepting the alternative hypothesis would constitute evidence that commissions reduce gerrymandering to a statistically significant degree. However, this study could not conclude that commissions are effective. The p-values obtained for all three metrics (p = 0.42 for the efficiency gap, p = 0.94 for the seats-votes percentage difference, and p = 0.47 for the mean-median difference) were extremely high and far from the value needed to conclude that commissions are effective. These results halt optimism about commissions and should spur serious discussion about their effectiveness and about ways to change them moving forward so that they can accomplish their goal of generating fairer districts.
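Of the three metrics, the efficiency gap is the most involved to compute. A minimal two-party sketch, with illustrative vote counts rather than election data from the study:

```python
def efficiency_gap(district_results):
    """Efficiency gap from (votes_A, votes_B) per district: the difference in
    wasted votes divided by total votes cast. Wasted votes are all votes for
    the losing side plus winning-side votes beyond the 50% needed to win."""
    wasted_a = wasted_b = total = 0.0
    for votes_a, votes_b in district_results:
        district_total = votes_a + votes_b
        total += district_total
        threshold = district_total / 2.0
        if votes_a > votes_b:
            wasted_a += votes_a - threshold
            wasted_b += votes_b
        else:
            wasted_b += votes_b - threshold
            wasted_a += votes_a
    return (wasted_a - wasted_b) / total

# A perfectly symmetric two-district plan wastes equal votes on both sides.
gap_symmetric = efficiency_gap([(60, 40), (40, 60)])

# A plan that packs party B into one district: under the sign convention
# above, a negative gap means party A wastes fewer votes, i.e. the plan
# favours A.
gap_packed = efficiency_gap([(55, 45), (55, 45), (30, 70)])
```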

Keywords: commissions, elections, gerrymandering, redistricting

Procedia PDF Downloads 54
455 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period

Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer

Abstract:

Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes of pregnancy increase the load on a normal lymphatic system, which can result in a transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy until they have recovered postnatally. This study explored the experiences of Australian women with pre-existing lymphoedema during recent pregnancy and the early postnatal period, to determine how their usual lymphoedema management strategies were adapted and what their additional or unmet needs were. Interactions with their obstetric care providers, hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling is appropriate for exploring this topic in a small target population, as lymphoedema in women of childbearing age is uncommon and prevalence data are unavailable. Inclusion criteria were: aged over 18 years, diagnosed with primary or secondary lymphoedema of the arm or leg, pregnant within the preceding ten years (since 2012), and having had pregnancy and postnatal care in Australia. Exclusion criteria were a diagnosis of lipedema and inability to read or understand a reasonable level of English. A mixed-method qualitative design was used in two phases.
This involved an online survey (REDCap platform) of the participants, followed by online semi-structured interviews or focus groups that provided the transcript data for inductive thematic analysis, to gain an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were significantly impacted by it, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying their pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not happen, despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are considering pregnancy or are pregnant, and their supporters, including health care providers.

Keywords: lymphoedema, management strategies, pregnancy, qualitative

Procedia PDF Downloads 57
454 Removal of Problematic Organic Compounds from Water and Wastewater Using the Arvia™ Process

Authors: Akmez Nabeerasool, Michaelis Massaros, Nigel Brown, David Sanderson, David Parocki, Charlotte Thompson, Mike Lodge, Mikael Khan

Abstract:

The provision of clean and safe drinking water is of paramount importance and a basic human need. Water scarcity, coupled with the tightening of regulations and the inability of current treatment technologies to deal with emerging contaminants and pharmaceuticals and personal care products, means that viable and cost-effective alternative treatment technologies are required to meet demand and regulations for clean water supplies. Logistically, water treatment in rural areas presents unique challenges due to the decentralisation of abstraction points arising from low population density, the resultant lack of infrastructure, and the need to treat water at the site of use. This makes it costly to centralise treatment facilities and hence to provide potable water directly to the consumer. Furthermore, across the UK there are segments of the population that rely on a private water supply, meaning that the owner or user(s) of these supplies, which can serve from one household to hundreds, are responsible for their maintenance. The treatment of these private water supplies falls on the private owners, and it is imperative that a chemical-free technological solution that can operate unattended and produces no waste is employed. Arvia's patented advanced oxidation technology combines the advantages of adsorption and electrochemical regeneration within a single unit: the Organics Destruction Cell (ODC). The ODC uniquely uses a combination of adsorption and electrochemical regeneration to destroy organics. Key to this innovative process is an alternative approach to adsorption. The conventional approach is to use high-capacity adsorbents (e.g. activated carbons with high porosities and surface areas) that are excellent adsorbents but require complex and costly regeneration.
Arvia's technology uses a patent-protected adsorbent, Nyex™, a non-porous, highly conductive, graphite-based adsorbent material that enables it to act both as the adsorbent and as a 3D electrode. Adsorbed organics are oxidised and the surface of the Nyex™ is regenerated in situ for further adsorption, without interruption or replacement. Treated water flows from the bottom of the cell, where it can either be re-used or safely discharged. Arvia™ Technology Ltd. has trialled the application of its tertiary water treatment technology in treating reservoir water abstracted near Glasgow, Scotland, with promising results. Several other pilot plants have also been successfully deployed at various locations in the UK, showing the suitability and effectiveness of the technology in removing recalcitrant organics (including pharmaceuticals, steroids and hormones), COD and colour.

Keywords: Arvia™ process, adsorption, water treatment, electrochemical oxidation

Procedia PDF Downloads 241
453 Influence of Footing Offset over Stability of Geosynthetic Reinforced Soil Abutments with Variable Facing under Lateral Excitation

Authors: Ashutosh Verma, Satyendra Mittal

Abstract:

The loss of strength at the facing-reinforcement interface brought on by the seasonal thermal expansion/contraction of the bridge deck has been responsible for several geosynthetic reinforced soil (GRS) abutment failures over the years. This results in excessive settlement below the bridge seat, which produces bridge bumps along the approach road and shortens the abutment's design life. Designers have a wide variety of facing configurations to choose from; these can generally be categorised into three groups: continuous, full height rigid (FHR), and modular (panels/blocks). The current work aims to experimentally explore the behavior of these three facing categories using 1g physical model testing under serviceable cyclic lateral displacements. With configurable facing arrangements to represent these three categories, a field-instrumented GRS abutment prototype was modelled as an N-times scaled-down 1g physical model (N = 5) to reproduce field behavior. The peak earth pressure coefficient (K) on the facing and the vertical settlement of the footing (s/B) for footing offsets (x/H) of 0.1, 0.2, 0.3, 0.4 and 0.5 at 100 cycles were measured for cyclic lateral displacement of the top of the facing at a loading rate of 1 mm/min. Three types of cyclic displacement were applied to replicate the active condition (CA), passive condition (CP), and active-passive condition (CAP) for each footing offset. The results demonstrated that a significant decrease in the earth pressure over the facing occurs as the footing offset increases. It is worth noting that the highest rates of increase in earth pressure and footing settlement were observed for each facing configuration at the nearest footing offset. 
Interestingly, for the farthest footing offset, similar responses were observed for each facing type, which indicates that upon reaching a critical offset point, presumably beyond the active region in the backfill, the lateral responses become independent of the stresses from the external footing load. Evidently, the footing load complements the stresses developed due to lateral excitation, resulting in significant footing settlements for nearer footing offsets. The modular facing proved inefficient in resisting footing settlement due to significant buckling along the depth of the facing. Instead of relative displacement along its depth, the continuous facing rotates about its base when it fails, especially for nearer footing offsets, causing significant depressions in the backfill area surrounding the footing. FHR facing, on the other hand, was successful in confining the stresses within the soil domain itself, reducing the footing settlement. It may be suitably concluded that increasing the footing offset may lend stability to a GRS abutment with any facing configuration, even for higher cycles of excitation.
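The peak earth pressure coefficient K reported above is, in general terms, the ratio of the measured lateral pressure on the facing to the vertical overburden stress at the same depth. A minimal sketch of that calculation follows; the unit weight, depth and pressure reading are illustrative placeholders, not values from the study's instrumentation.

```python
# Sketch: lateral earth pressure coefficient K from a pressure-cell reading.
# All numbers below are hypothetical, chosen only to illustrate the ratio.

def earth_pressure_coefficient(sigma_h_kpa: float, gamma_kn_m3: float, depth_m: float) -> float:
    """K = horizontal earth pressure / vertical overburden stress (gamma * z)."""
    return sigma_h_kpa / (gamma_kn_m3 * depth_m)

sigma_h = 3.6   # measured lateral pressure on the facing, kPa (hypothetical)
gamma = 18.0    # backfill unit weight, kN/m^3 (hypothetical)
z = 0.5         # depth of the pressure cell below the fill surface, m

K = earth_pressure_coefficient(sigma_h, gamma, z)
print(f"K = {K:.2f}")  # -> K = 0.40
```

Tracking K over load cycles at each footing offset, as in the study, is then a matter of repeating this reduction for every pressure reading.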

Keywords: GRS abutments, 1g physical model, footing offset, cyclic lateral displacement

Procedia PDF Downloads 62
452 Resistance Training and Ginger Consumption on Cytokines Levels

Authors: Alireza Barari, Ahmad Abdi

Abstract:

Regular physical training causes adaptations in various body systems. One of the important effects of training is its effect on the immune system. Cytokines are usually released after prolonged exercise or exercise that causes skeletal muscle damage. By manipulating the training program, it may be possible to avoid or limit exercises that induce the release of cytokines responsible for responses such as cell inflammation in skeletal muscle. Ginger is a medicinal plant known for its anti-inflammatory properties and is among the most prominent medicinal plants in medical science, especially in the treatment of inflammation. The aim of the present study was to examine the effect of selected resistance training and consumption of ginger extract on IL-1α and TNFα in untrained young women. The population comprised young women interested in participating in the study, with an average age of 30±2 years, from Abbas Abad city, among whom 32 participants were chosen randomly and divided into four groups: resistance training (R), resistance training and ginger consumption (RG), ginger consumption (G), and control (C). The training groups performed circuit resistance training at an intensity of 65-75% of one-repetition maximum, 3 days a week for 6 weeks. Besides resistance training, subjects were given either ginger extract (5 mg/kg per day) or placebo. Prior to and 48 hours after the interventions, body composition was measured and blood samples were taken in order to assess serum levels of IL-1α and TNFα. Plasma levels of cytokines were measured with commercially available ELISA kits for IL-1α and TNFα. To demonstrate the effectiveness of the independent variable and to compare groups, the t-test and ANOVA were used; the Scheffé post hoc test was used to locate significant differences between groups. 
We observed that circuit resistance training in the R and RG groups significantly decreased weight and body mass index in untrained females (p<0.05). The results showed a significant decrease in mean IL-1α levels between before and after the training period in the G group (p=0.046) and the RG group (p=0.022). Comparison between groups also showed significant differences between R-RG and RG-C. Intragroup comparison showed that mean TNFα levels significantly decreased between before and after training in the G group (p=0.044) and the RG group (p=0.037). Comparison between groups also showed significant differences between R-RG, R-G, RG-C and G-C. The research shows that circuit resistance training with a reducing-overload method, which results in systemic inflammation, had a significant effect on IL-1α and TNFα levels. Ginger, moreover, can counteract the negative effects of resistance exercise on immune function and the stability of the mast cell membrane. Considerable evidence supports the anti-inflammatory properties of several ginger constituents, especially gingerols, shogaols, paradols, and zingerones, through decreased TNFα and IL-1α gene expression and inhibition of cyclooxygenase 1 and 2. These established biological actions suggest that ingested ginger could block the increase in IL-1α.
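The within-group pre/post comparison described above can be sketched as a paired t-test. The cytokine values below are invented purely for illustration (they are not the study's data), and the real analysis also involved ANOVA with Scheffé post hoc comparisons.

```python
# Stdlib-only sketch of a paired (pre/post) t-test on hypothetical
# IL-1alpha levels. The data are made up for demonstration only.
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic: mean of the differences over its standard error."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d / (sd_d / math.sqrt(n)), n - 1  # (t, degrees of freedom)

# Hypothetical IL-1alpha levels (pg/mL) before/after 6 weeks in one group
pre  = [3.1, 2.9, 3.4, 3.0, 3.2, 2.8, 3.3, 3.1]
post = [2.6, 2.5, 2.9, 2.7, 2.8, 2.4, 2.9, 2.6]

t, df = paired_t(pre, post)
T_CRIT = 2.365  # two-tailed critical value for df = 7, alpha = 0.05
print(f"t({df}) = {t:.2f}, significant: {abs(t) > T_CRIT}")
```

Comparing |t| against the critical value for the appropriate degrees of freedom corresponds to the p<0.05 decision rule reported in the abstract.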

Keywords: resistance training, ginger, IL-1α, TNFα

Procedia PDF Downloads 405
451 The M Health Paradigm for the Chronic Care Management of Obesity: New Opportunities in Clinical Psychology and Medicine

Authors: Gianluca Castelnuovo, Gian Mauro Manzoni, Giada Pietrabissa, Stefania Corti, Emanuele Giusti, Roberto Cattivelli, Enrico Molinari, Susan Simpson

Abstract:

Obesity is currently an important public health problem of epidemic proportions ('globesity'). Moreover, Binge Eating Disorder (BED) is typically connected with obesity, even if it does not occur exclusively in conjunction with overweight conditions. Typically, obesity with BED requires longer-term treatment than simple obesity. Rehabilitation interventions that aim at improving weight loss, reducing obesity-related complications and changing dysfunctional behaviors should ideally be carried out in a multidisciplinary context with a clinical team composed of psychologists, dieticians, psychiatrists, endocrinologists, nutritionists, physiotherapists, etc. Long-term outpatient multidisciplinary treatments are likely to constitute an essential aspect of rehabilitation, due to the growing costs of a limited inpatient approach. Internet-based technologies can improve long-term obesity rehabilitation within a collaborative approach. The new mHealth (m-health, mobile health) paradigm, defined as clinical practice supported by up-to-date mobile communication devices, could increase compliance and engagement and contribute to a significant cost reduction in BED and obesity rehabilitation. Five psychological components need to be considered for successful mHealth-based obesity rehabilitation in order to facilitate weight loss. 1) Self-monitoring. Portable body monitors, pedometers and smartphones are mobile and, therefore, can be easily used, resulting in continuous self-monitoring. 2) Counselor feedback and communication. A functional approach is to provide online weight-loss interventions with brief weekly or monthly counselor or psychologist visits. 3) Social support. A group treatment format is typically preferred for behavioral weight-loss interventions. 4) Structured program. 
Technology-based weight-loss programs incorporate principles of behavior therapy and behavior change, with structured weekly protocols covering nutrition, exercise, stimulus control, self-regulation strategies, and goal-setting. 5) Individually tailored program. Interventions specifically designed around an individual's goals typically record higher rates of adherence and weight loss. Opportunities and limitations of the mHealth approach in clinical psychology for obesity and BED are discussed, taking into account future research directions in this promising area.

Keywords: obesity, rehabilitation, outpatient, new technologies, telemedicine, telecare, mHealth, clinical psychology, psychotherapy, chronic care management

Procedia PDF Downloads 449
450 Reaching Students Who “Don’t Like Writing” through Scenario Based Learning

Authors: Shahira Mahmoud Yacout

Abstract:

Writing is an essential skill in many vocational and academic environments, and notably in workplaces, yet many students perceive writing as tiring, boring, or even a 'waste of time'. Studies in the field of foreign languages suggest this may be due to the lack of connection between what is learned at university and what students encounter in real-life situations. Arabic learners felt they needed more language exposure to the context of their future professions. With this idea in mind, scenario-based learning (SBL) is reported to be an educational approach that motivates, engages and stimulates students' interest and achieves the desired writing learning outcomes. In addition, researchers have suggested SBL as an instructional approach that develops and enhances students' skills by fostering higher-order thinking and active learning. It is a subset of problem-based learning and case-based learning. The approach focuses on authentic rhetorical framing, reflecting writing tasks in real-life situations. It works successfully when used to simulate real-world practices, providing context that reflects the types of situations professionals respond to in writing. It has been claimed that using realistic scenarios customized to the course's learning objectives bridges the gap for students between theory and application. Within this context, scenario-based learning is thought to be an important approach to enhancing learners' writing skills and reflecting meaningful learning within authentic contexts. As an Arabic foreign language instructor, it was noticed that students find it difficult to adapt writing styles to authentic writing contexts and to address different audiences and purposes. This idea is supported by studies which claimed that AFL students faced difficulties with transferring writing skills to situations outside of the classroom context. 
In addition, it was observed that some textbooks for teaching Arabic as a foreign language lacked topics that initiated higher-order thinking skills and stimulated learners to understand the setting and create messages appropriate to different audiences, contexts, and purposes. The goals of this study are to 1) provide a rationale for using the scenario-based learning approach to improve AFL learners' writing skills, 2) demonstrate how to design and implement a scenario-based learning technique aligned with the writing course objectives, 3) demonstrate samples of the scenario-based approach implemented in an AFL writing class, and 4) emphasize the role of peer review, along with the instructor's feedback, in the process of developing the writing skill. Finally, this presentation highlights the importance of using the scenario-based learning approach in writing as a means to mirror students' real-life situations and engage them in planning, monitoring, and problem solving. This approach helped make writing an enjoyable experience and clearly useful to students' future professional careers.

Keywords: meaningful learning, real life contexts, scenario based learning, writing skill

Procedia PDF Downloads 78
449 Green Extraction Technologies of Flavonoids Containing Pharmaceuticals

Authors: Lamzira Ebralidze, Aleksandre Tsertsvadze, Dali Berashvili, Aliosha Bakuridze

Abstract:

Nowadays, there is an increasing demand for biologically active substances from vegetable, animal, and mineral resources. The pharmaceutical, cosmetic, and nutrition industries have a strong interest in the use of natural compounds. The biggest drawback of conventional extraction methods is the need to use a large volume of organic extragents. The removal of the organic solvent is a multi-stage process; absolute removal cannot be achieved, and solvents still appear in the final product as impurities. A large amount of waste containing organic solvent harms not only human health but also the environment. Accordingly, researchers are focused on improving extraction methods with the aim of minimizing the use of organic solvents and energy sources and of using alternative solvents and renewable raw materials. In this context, the principles of green extraction were formed. Green extraction is a need of today's environment, and the concept corresponds fully to the challenges of the 21st century. The extraction of biologically active compounds based on green extraction principles is vital from the viewpoint of preserving and maintaining biodiversity. Novel green extraction technologies are known as 'cold methods' because the temperature during the extraction process is relatively low and does not negatively affect the stability of plant compounds. Novel technologies provide great opportunities to reduce or replace the use of toxic organic solvents, improve the efficiency of the process, enhance extraction yield, and improve the quality of the final product. The objective of the research is the development of green technologies for flavonoid-containing preparations. Methodology: At the first stage of the research, flavonoid-containing preparations (Tincture Herba Leonuri, flamine, rutine) were prepared based on conventional extraction methods: maceration, bismaceration, percolation, and repercolation. 
At the same time, the same preparations were prepared based on green technologies: microwave-assisted and UV extraction methods. Product quality characteristics were evaluated by pharmacopoeial methods. At the next stage of the research, the technological and economic characteristics and cost efficiency of products prepared by conventional and novel technologies were determined. For the extraction of flavonoids, water is used as the extragent. Surface-active substances are used as co-solvents in order to reduce surface tension, which significantly increases the solubility of polyphenols in water. Water-glycerol mixtures of different concentrations, cyclodextrin, and ionic solvents were used for the extraction process. In vitro antioxidant activity will be studied by the spectrophotometric method, using DPPH (2,2-diphenyl-1-picrylhydrazyl) as the antioxidant assay. A further advantage of green extraction methods is the possibility of obtaining higher yields at low temperature while limiting the extraction of undesirable compounds. This is especially important for the extraction of thermosensitive compounds and maintaining their stability.
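The DPPH assay mentioned above is usually reported as percent inhibition of the radical's absorbance (commonly read at 517 nm). A small sketch of that calculation, with hypothetical absorbance values rather than measurements from this study:

```python
# Sketch: DPPH radical-scavenging activity as percent inhibition.
# Absorbance values are hypothetical, for illustration only.

def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent inhibition = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

a_control = 0.820  # DPPH solution + solvent only (no extract)
a_sample  = 0.295  # DPPH solution + flavonoid extract

print(f"{dpph_inhibition(a_control, a_sample):.1f}% inhibition")  # -> 64.0% inhibition
```

Running the same calculation across a dilution series of the extract then yields the IC50, the usual summary statistic for antioxidant capacity.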

Keywords: extraction, green technologies, natural resources, flavonoids

Procedia PDF Downloads 109
448 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways

Authors: Anirudh Lahiri

Abstract:

Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. 
Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.
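As a concrete illustration of the synaptic plasticity such systems aim to mimic, the exponential spike-timing-dependent plasticity (STDP) rule, a common target behavior for hardware synapses including MTJ-based ones, can be sketched as follows. The amplitude and time-constant values are generic placeholders, not parameters from this project.

```python
# Sketch: the classic exponential STDP learning window.
# Parameters are generic textbook-style placeholders.
import math

def stdp_dw(dt_ms: float, a_plus: float = 0.1, a_minus: float = 0.12,
            tau_ms: float = 20.0) -> float:
    """Weight change for spike-timing difference dt = t_post - t_pre.

    Pre-before-post (dt > 0) potentiates the synapse; post-before-pre
    (dt <= 0) depresses it, each with an exponential timing window.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

# Causal pairing strengthens the synapse, anti-causal pairing weakens it:
print(stdp_dw(10.0) > 0, stdp_dw(-10.0) < 0)  # -> True True
```

In a spintronic realization, the sign and magnitude of this weight change would map onto STT/SOT-driven switching of the MTJ conductance rather than a floating-point update.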

Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity

Procedia PDF Downloads 8
447 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make for a pharmacovigilance monitoring process that is often tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor's safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy's law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of these criteria were verified by comparing a manual review of all monthly cases with system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours consumed by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department. 
Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). This algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include improving it into a fully automated application, thereby completely eliminating a manual screening process.
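The screening criteria above rest on two published constructs: the R-value, which classifies the pattern of liver injury, and Hy's law. A hedged sketch of that logic follows; the thresholds track the standard definitions, but the exact conditions and field names in the sponsor's Argus/SQL implementation are not given in the abstract and are assumptions here.

```python
# Sketch of R-value and Hy's law screening logic (standard definitions;
# the sponsor's actual implementation details are assumptions).

def r_value(alt: float, alt_uln: float, alp: float, alp_uln: float) -> float:
    """R = (ALT / ULN_ALT) / (ALP / ULN_ALP); classifies the injury pattern."""
    return (alt / alt_uln) / (alp / alp_uln)

def injury_pattern(r: float) -> str:
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

def meets_hys_law(alt, alt_uln, bili, bili_uln, alp, alp_uln) -> bool:
    """Hy's law: ALT >= 3x ULN with bilirubin >= 2x ULN and no marked ALP elevation."""
    return alt >= 3 * alt_uln and bili >= 2 * bili_uln and alp < 2 * alp_uln

# Hypothetical case: ALT 210 U/L (ULN 40), ALP 110 U/L (ULN 120),
# total bilirubin 2.6 mg/dL (ULN 1.2)
r = r_value(210, 40, 110, 120)
print(injury_pattern(r), meets_hys_law(210, 40, 2.6, 1.2, 110, 120))
```

Applied to each case in a monthly line listing, checks of this kind are what let the tool flag potential Hy's law cases in minutes rather than hours.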

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 127
446 Research Insights into Making the Premises Spiritually Pure

Authors: Jayant Athavale, Rendy Ekarantio, Sean Clarke

Abstract:

The Maharshi University of Spirituality was founded on the basis of 30 years of spiritual research. It specializes in conducting research on how the subtle world and spiritual vibrations affect the lives of people. One such area of research is how to create spiritually positive vibrations in premises. By using aura and energy scanners along with the sixth sense, the spiritual research team has identified three aspects that are instrumental in enhancing or reducing the spiritual positivity of any premises. Firstly, the characteristics of the land should be considered holistically, that is, from a physical, psychological and spiritual point of view. While procedures for the physical assessment of land are well documented, due to ignorance and disbelief, the spiritual aspects are not considered. For example, if the land was previously a graveyard site, it can have highly detrimental effects on the residents of the premises at the spiritual level. This can further manifest as physical and psychological problems faced by the residents. Secondly, the manner of construction and the purpose/use of the building affect the subtle vibrations in the premises. The manner of construction includes gross aspects such as the materials used, the kind of architecture, etc. It also includes the subtle aspects described in detail in the ancient sciences of Vastu Shastra and Feng Shui. For example, having the front door of the premises facing south can negatively affect the premises because the southern direction is prone to distressing vibrations. The purpose and use of the premises also play an important role in determining the type of subtle vibrations that will predominate within its area. Thirdly, the actions, thoughts, value systems and attitudes of the residents play an important part in determining whether the subtle vibrations will be positive or negative. Residents with many personality defects emit negative vibrations. 
If some of the residents are affected with negative energies and are not doing any spiritual practice to overcome it, then it can have a harmful spiritual effect on the rest of the residents and the premises. If these three aspects are appropriately considered and attended to, then the premises will generate higher levels of spiritually positive vibrations. Both living and non-living objects within the premises imbibe this positivity and therefore, it holistically enhances the overall well-being of its residents. The positivity experienced in the premises of the Spiritual Research Centre of the Maharshi University of Spirituality, is a testimony to the success of this research. Due to regular and intense spiritual practice carried out by 10 Saints and over 500 seekers residing in its premises, the positivity in the environment can be felt by people when they enter its premises and even from a distance, and can easily be picked up by aura and energy scanners. Extraordinary and fascinating phenomena are observed and experienced in its premises as both living and non-living objects emit spiritually positive vibrations. This also protects the residents from negative vibrations. Examples of such phenomena and their positive impact are discussed in the paper.

Keywords: negative energies, positive vibrations on the premises, resident’s spiritual practice, science of the premises

Procedia PDF Downloads 129
445 Pro-Environmental Behavioral Intention of Mountain Hikers to the Theory of Planned Behavior

Authors: Mohammad Ehsani, Iman Zarei, Soudabeh Moazemigoudarzi

Abstract:

The aim of this study is to examine the pro-environmental behavioral intention of mountain hikers by applying the theory of planned behavior. According to many researchers, nature-based recreation activities play a significant role in the tourism industry and have provided myriad opportunities for the protection of natural areas. It is essential to investigate individuals' behavior during such activities to avoid further damage to precious and dwindling natural resources. This study develops a robust model that provides a comprehensive understanding of the formation of pro-environmental behavioral intentions among climbers of Mount Damavand National Park in Iran. To this end, we combined the theory of planned behavior (TPB), value-belief-norm theory (VBN), and a hierarchical model of leisure constraints to predict individuals' pro-environmental hiking behavior during outdoor recreation. Structural equation modeling was used to test the theoretical framework. A sample of 787 climbers was analyzed. Among the theory of planned behavior variables, perceived behavioral control showed the strongest association with behavioral intention (β = .57). This relationship indicates that if people feel they can have fewer negative impacts on natural resources while hiking, it will result in more environmentally acceptable behavior. Subjective norms had a moderate positive impact on behavioral intention, indicating the importance of other people to an individual's behavior. Attitude had a small positive effect on intention. Ecological worldview positively influenced attitude and personal belief. Personal belief (awareness of consequences and ascribed responsibility) showed a positive association with the TPB variables. Although the data showed a high average score for awareness of consequences (mean = 4.219 out of 5), evidence from Mount Damavand shows that there are many environmental issues that need addressing (e.g., vast amounts of garbage). 
National park managers need to make sure that their solutions result in awareness of pro-environmental behavior (PEB). Findings showed a negative relationship between constraints and all TPB predictors. Providing proper restrooms and parking spaces in campgrounds, strategies for limiting visitor capacity, and solutions for removing waste from high altitudes are helpful in decreasing the negative impact of structural constraints. In order to address intrapersonal constraints, managers should provide opportunities to interest individuals in environmental activities, such as environmental celebrations or documentaries about environmental issues. Moreover, promoting a culture of environmental protection in the Mount Damavand area would reduce interpersonal constraints. Overall, the proposed model improved the explanatory power of the TPB by predicting 64.7% of the variance in intention, compared to the original TPB, which accounted for 63.8%.

Keywords: theory of planned behavior, pro-environmental behavior, national park, constraints

Procedia PDF Downloads 74
444 The Effect of Rice Husk Ash on the Mechanical and Durability Properties of Concrete

Authors: Binyamien Rasoul

Abstract:

Portland cement is one of the most widely used construction materials in the world today; however, the manufacture of ordinary Portland cement (OPC) emits a significant amount of CO2, resulting in environmental impact. On the other hand, rice husk ash (RHA), which is produced as a by-product material, is generally considered an environmental issue as a waste material. This material consists of non-crystalline silicon dioxide with a high specific surface area and high pozzolanic reactivity. These properties of RHA can have a significant influence in improving the mechanical and durability properties of mortar and concrete. Furthermore, rice husk ash is cost-effective and can make concrete more sustainable. In this paper, the effects of chemical composition, reactive silica, and fineness were assessed by examining five different types of RHA. Mortar and concrete specimens were moulded with 5% to 50% ash replacing the Portland cement, and their compressive and tensile strength behavior was measured. Beyond this, two further parameters were considered: the durability of concrete blended with RHA, and the effect of temperature on the transformation of the amorphous structure into crystalline form. To obtain the rice husk ash properties, the different types were subjected to X-ray fluorescence to determine chemical composition, while pozzolanic activity was assessed using the X-ray diffraction test. Fineness and specific surface area were obtained using the Malvern Mastersizer 2000 test. The properties of fresh mortar and concrete were measured using the flow table and slump tests, while for hardened mortar and concrete the compressive and tensile strengths were determined, plus chloride ion penetration for concrete using NT Build 492 (Nord Test), the non-steady-state migration test (RMT test). 
The test results obtained indicated that RHA can be used as a cement replacement material in concrete at proportions of up to 50% without compromising concrete strength. The use of RHA in concrete as a blending material improved various characteristics of the concrete product. The paper concludes that for OPC mortar or concrete to exhibit good compressive strength with an increasing RHA replacement ratio, the rice husk ash should have a high silica content with high pozzolanic activity. Furthermore, RHA with a high carbon content (12%) can still improve the strength of concrete when the silica structure is totally amorphous, while RHA with a high proportion of crystalline silica (25%) can be used as a cement replacement when the silica content is over 90%. The workability and strength of the concrete are increased by the use of superplasticizer and depend on the silica structure and carbon content. This study is therefore an investigation of the effect of partially replacing ordinary Portland cement (OPC) with rice husk ash (RHA) on the mechanical properties and durability of concrete. This paper gives satisfactory results for using RHA in sustainable construction in order to reduce the carbon footprint associated with the cement industry.
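The replacement levels studied (5% to 50% of the binder) translate into mix quantities in a straightforward way. A small sketch of that proportioning, using an illustrative total binder content rather than the paper's actual mix designs:

```python
# Sketch: splitting a total binder content into OPC and RHA for a given
# replacement percentage. The 400 kg/m^3 binder content is illustrative.

def blended_binder(total_binder_kg: float, rha_percent: float):
    """Return (opc_kg, rha_kg) for a given RHA replacement percentage."""
    rha = total_binder_kg * rha_percent / 100.0
    return total_binder_kg - rha, rha

# Example: 400 kg/m^3 binder at the replacement levels tested (5% to 50%)
for pct in (5, 10, 20, 30, 40, 50):
    opc, rha = blended_binder(400.0, pct)
    print(f"{pct:>2}% RHA -> OPC {opc:.0f} kg/m3, RHA {rha:.0f} kg/m3")
```

Because RHA is much finer and lighter than cement, real mix designs would also adjust the water-to-binder ratio and superplasticizer dose at higher replacement levels, as the abstract notes.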

Keywords: OPC (ordinary Portland cement), RHA (rice husk ash), W/B (water-to-binder ratio), CO2 (carbon dioxide)

Procedia PDF Downloads 169
443 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Obtaining unbiased data on the status of pipeline systems for oil and oil product transportation becomes especially important when laying and operating pipelines under severe natural and climatic conditions. Particular attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of the pipeline systems are regularly monitored and changes in permafrost soil and hydrological conditions are accounted for. One of the main causes of emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aerial visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. A unified geographic information system (GIS) environment is a convenient way to implement a monitoring system for main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient modelling of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences.
The specifics of such systems include multi-dimensional modelling of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting uses of the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of aerial laser scanning, aerial photography, in-line inspection data, and instrument measurements. The resulting 3D model forms the basis of an information system that stores and processes geotechnical observation data with references to the facilities of the main pipeline, supports the planning of compensating measures, and controls their implementation. The use of GIS for geotechnical monitoring of pipeline systems aims to improve the reliability of their operation, to reduce the probability of negative events (accidents and disasters), and to mitigate their consequences should they still occur.

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 167
442 Modeling and Analysis of Drilling Operation in Shale Reservoirs with Introduction of an Optimization Approach

Authors: Sina Kazemi, Farshid Torabi, Todd Peterson

Abstract:

Drilling in shale formations is frequently time-consuming, challenging, and fraught with mechanical failures such as stuck pipes or the hole packing off when the cuttings removal rate is not sufficient to clean the bottom hole. Crossing heavy oil shale and sand reservoirs with active shale and microfractures is generally associated with severe fluid losses, causing a reduction in the rate of cuttings removal. These circumstances compromise a well’s integrity and result in a lower rate of penetration (ROP). This study presents the collective results of field studies and theoretical analysis conducted on data from South Pars and North Dome in an Iran-Qatar offshore field. Solutions to complications related to drilling in shale formations are proposed by systematically analyzing selected field mud-logging data and applying modeling techniques. Field measurements during actual drilling operations indicate that in a shale formation where the return flow of polymer mud was almost lost in the upper dolomite layer, hole cleaning performance and ROP progressively change when higher string rotation rates are initiated. Likewise, it was observed that this effect minimized the rotational torque and improved well integrity in the subsequent casing running. Given similar geologic conditions and drilling operations in reservoirs targeting shale as the producing zone, such as the Bakken formation within the Williston Basin and Lloydminster, Saskatchewan, a drill bench dynamic modeling simulation was used to simulate borehole cleaning efficiency and mud optimization. The results obtained by altering the RPM (string revolutions per minute) at the same pump rate with optimized mud properties exhibit a positive correlation with field measurements.
The field investigation and the developed model show that increasing the string rotation speed, as far as geomechanics and drill bit conditions permit, can minimize the risk of mechanically stuck pipe while reaching a higher than expected ROP in shale formations. Based on the modeling and field data analysis, optimized drilling parameters and hole cleaning procedures are suggested for minimizing the risk of the hole packing off and enhancing well integrity in shale reservoirs. Optimizing ROP at a lower pump rate maintains wellbore stability while saving time for the operator and reducing carbon emissions and the fatigue of mud motors and power supply engines.
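Hole-cleaning assessments of the kind described above usually start from the annular velocity implied by a given pump rate and borehole geometry; a minimal sketch of that standard calculation (the function name and the example dimensions are illustrative assumptions, not values from the study):

```python
import math

def annular_velocity(flow_rate_lpm: float, hole_d_mm: float, pipe_d_mm: float) -> float:
    """Mean upward fluid velocity (m/min) in the annulus between the drill
    string and the borehole wall, for a given pump rate in litres per minute."""
    # Annular cross-sectional area in m^2
    area_m2 = math.pi / 4 * ((hole_d_mm / 1000) ** 2 - (pipe_d_mm / 1000) ** 2)
    # L/min -> m^3/min, divided by area gives m/min
    return (flow_rate_lpm / 1000) / area_m2

# Example: 2000 L/min through a 311 mm hole around 127 mm drill pipe
v = annular_velocity(2000, 311, 127)
print(round(v, 1))  # → 31.6 (m/min)
```

Higher annular velocity generally improves cuttings transport, which is why hole cleaning degrades when return flow is partially lost, as reported in the abstract.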

Keywords: ROP, circulating density, drilling parameters, return flow, shale reservoir, well integrity

Procedia PDF Downloads 66
441 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys, and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and thereby producing lower error and reducing the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹ respectively (3.54 being provided by the best FFNN). The most significant improvement was in the estimation of the extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher photosynthetic activity of plants, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Ensemble machine learning models are therefore potentially capable of improving data estimation and regression outcomes when a single algorithm appears to leave no further room for improvement.
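The two-layer design described above — first-layer models whose outputs feed a meta-learner — can be sketched structurally in a few lines. The stand-in predictors below are toy functions, not trained FFNNs, and the averaging meta-learner merely stands in for the fitted XGBoost model; this shows the data flow of stacking, not the authors' trained pipeline:

```python
import statistics

def stack_predict(first_layer, meta, X):
    """Two-layer stacking: each first-layer model predicts for every input,
    and the meta-learner maps the vector of first-layer outputs to a final value."""
    return [meta([m(x) for m in first_layer]) for x in X]

# Toy stand-ins: three "FFNNs" as simple functions of a driver variable,
# and a meta-learner that averages them (XGBoost would learn this mapping).
ffnns = [lambda x: 2 * x, lambda x: 2 * x + 1, lambda x: 2 * x - 1]
meta = statistics.mean

print(stack_predict(ffnns, meta, [1.0, 2.0]))  # → [2.0, 4.0]
```

In the real model, the first layer would be trained on meteorological drivers and the XGB meta-learner fitted on held-out first-layer predictions to avoid leakage.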

Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 114
440 Teaching Timber: The Role of the Architectural Student and Studio Course within an Interdisciplinary Research Project

Authors: Catherine Sunter, Marius Nygaard, Lars Hamran, Børre Skodvin, Ute Groba

Abstract:

Globally, the construction and operation of buildings contribute up to 30% of annual greenhouse gas emissions. In addition, the building sector is responsible for approximately a third of global waste. In this context, the utilization of renewable resources in buildings, especially materials that store carbon, will play a significant role in the growing city. These are two reasons for introducing wood as a building material of growing relevance. A third is the potential economic value in countries with a forest industry that is not currently used to capacity. In 2013, a four-year interdisciplinary research project titled “Wood Be Better” was created, with the principal goal of producing and publicising knowledge that would facilitate increased use of wood in buildings in urban areas. The research team consisted of architects, engineers, wood technologists and mycologists, from both research institutions and industrial organisations. Five structured work packages were included in the initial research proposal. Work package 2, titled “Design-based research”, proposed using architecture master courses as laboratories for systematic architectural exploration. The aim was twofold: to provide students with an interdisciplinary team of experts from consultancies and producers, as well as teachers and researchers, who could offer the latest information on wood technologies; whilst at the same time having the studio course test the effects of the use of wood on the functional, technical and tectonic quality of different architectural projects on an urban scale, providing results that could be fed back into the research material. The aim of this article is to examine the successes and failures of this pedagogical approach in an architecture school, as well as the opportunities for greater integration between academic research projects, industry experts and studio courses in the future.
This will be done through a set of qualitative interviews with researchers, teaching staff and students of the studio courses held each semester since spring 2013. These will investigate the value of the various experts of the course; the different themes of each course; the response to the urban scale, architectural form and construction detail; the effect of working with the goals of a research project; and the value of the studio projects to the research. In addition, six sample projects will be presented as case studies. These will show how the projects related to the research and could be collected and further analysed, innovative solutions that were developed during the course, different architectural expressions that were enabled by timber, and how projects were used as an interdisciplinary testing ground for integrated architectural and engineering solutions between the participating institutions. The conclusion will reflect on the original intentions of the studio courses, the opportunities and challenges faced by students, researchers and teachers, the educational implications, and on the transparent and inclusive discourse between the architectural researcher, the architecture student and the interdisciplinary experts.

Keywords: architecture, interdisciplinary, research, studio, students, wood

Procedia PDF Downloads 290
439 Factors Affecting the Success of Premarital Screening Services in Middle Eastern Countries

Authors: Wafa Al Jabri

Abstract:

Background: In Middle Eastern countries (MECs), there is a high prevalence of genetic blood disorders (GBDs), particularly sickle cell disease and thalassemia. The GBDs are considered a major public health concern that places a huge burden on individuals, families, communities, and health care systems. The high rates of consanguineous marriage, along with the unacceptability of terminating at-risk pregnancies in MECs, limit the possible options for controlling the high prevalence of GBDs. Since the early 1970s, most MECs have introduced premarital screening services (PSS) as a preventive measure to identify asymptomatic carriers of GBDs and to provide genetic counseling to help couples plan for healthy families; yet the success rate of PSS is very low. Purpose: This paper aims to highlight the factors that affect the success of PSS in MECs. Methods: An integrative review of articles located in CINAHL, PubMed, SCOPUS, and MedLine was carried out using the following terms: “premarital screening,” “success,” “effectiveness,” and “genetic blood disorders”. Second, a hand search of the reference lists and Google searches were conducted to find studies that did not appear in the primary database searches. Only studies conducted in MECs and published after 2010 were included; studies not published in English were excluded. Results: Eighteen articles were included in the review. The results showed that PSS in most MECs was successful in achieving its objective of identifying high-risk marriages; however, the service failed to meet its ultimate goal of reducing the prevalence of GBDs. Various factors appear to hinder the success of PSS, including poor public awareness, late timing of the screening, culture and social stigma, lack of prenatal diagnosis services and therapeutic abortion, emotional factors, religious beliefs, and lack of genetic counseling services.
However, poor public awareness, late timing of the screening, religious misbeliefs, and the lack of adequate counseling services were the most commonly identified barriers. Conclusion and Implications: The review helps provide a framework for an effective preventive measure to reduce the prevalence of GBDs in MECs. This framework focuses primarily on overcoming the identified barriers by providing effective health education programs in collaboration with religious leaders, offering the screening test to young adults at an earlier stage, and tailoring genetic counseling to people’s values, beliefs, and preferences.

Keywords: premarital screening, Middle East, genetic blood disorders, factors

Procedia PDF Downloads 62
438 The Use of Platelet-rich Plasma in the Treatment of Diabetic Foot Ulcers: A Scoping Review

Authors: Kiran Sharma, Viktor Kunder, Zerha Rizvi, Ricardo Soubelet

Abstract:

Platelet-rich plasma (PRP) has been recognized as a method of treatment in medicine since the 1980s. It primarily functions by releasing cytokines and growth factors that promote wound healing; these growth-promoting factors enact processes such as angiogenesis, collagen deposition, and tissue formation that can change wound healing outcomes. Many studies recognize that PRP aids chronic wound healing, which is advantageous for patients who suffer from chronic diabetic foot ulcers (DFUs). This scoping review examines the literature to identify the efficacy of PRP use in the healing of DFUs. Following PRISMA guidelines, we searched for randomized controlled trials involving PRP use in diabetic patients with foot ulcers using PubMed, Medline, CINAHL Complete, and the Cochrane Database of Systematic Reviews. We restricted the search to articles published during 2005-2022, with full texts in the English language, involving patients aged 19 years or older, using PRP specifically on DFUs, including a control group, and conducted on human subjects. The initial search yielded 119 articles after removing duplicates; final analysis for relevance yielded 8 articles. In all cases except one, the PRP group showed either faster healing, more complete healing, or a larger percentage of healed participants. In none of the included studies did the control group have a higher rate of healing or a greater decrease in wound size than a group with isolated PRP-only use. Only one study did not show conclusive evidence that PRP accelerated healing in DFUs, and this study did not have an isolated PRP variable group. The application style of PRP was shown to influence the level of healing, with injected PRP appearing to achieve better results than topical PRP application; however, this was not conclusive due to the involvement of several other variables.
Two studies additionally found PRP to be useful in healing refractory DFUs, and one study found that PRP use in patients with additional comorbidities was still more effective in healing DFUs than the standard control. The findings of this review suggest that PRP is a useful tool for reducing healing times and improving rates of complete wound healing in DFUs. Based on these findings, further research into the application styles of PRP is needed before conclusive statements can be made on the efficacy of injected versus topical PRP. The results of this review provide a baseline for further research on PRP use in diabetic patients and can be used by both physicians and public health experts to guide future treatment options for DFUs.

Keywords: diabetic foot ulcer, DFU, platelet rich plasma, PRP

Procedia PDF Downloads 54
437 Evaluation of Batch Splitting in the Context of Load Scattering

Authors: S. Wesebaum, S. Willeke

Abstract:

Production companies face an increasingly turbulent business environment, which demands very high flexibility in production volumes and delivery dates. If decoupling by storage stages is not possible (e.g. at a contract manufacturing company) or is undesirable from a logistical point of view, load scattering affects the production processes. ‘Load’ characterizes the timing and quantity of production orders (e.g. in work content hours) arriving at workstations in production, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the available capacity supply. For example, a uniformly high level of equipment capacity utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance losses or disproportionately fluctuating WIP, affecting the logistics objectives negatively. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting, and outsourcing of operations or shifting them to other workstations. This leads to an adjustment of load to capacity supply, and thus to a reduction of load scattering. If the adaptation of load to capacity cannot be satisfied completely, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacities normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly offers qualitative statements describing load scattering; quantitative evaluation methods that describe load mathematically are rare.
In this article, the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of any means of normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current approaches. After presenting our quantification approach, the method of batch splitting is described. Batch splitting allows the adaptation of load to capacity in order to reduce load scattering. The method is then explicitly analyzed in the context of the logistic curve theory by Nyhuis, using the stretch factor α1, in order to evaluate the impact of batch splitting on load scattering and on the logistic curves. The conclusion of this article shows how the methods and approaches presented can help companies in a turbulent environment to quantify the occurring load scattering accurately and to apply an efficient method for adjusting load to capacity supply. In this way, the achievement of the logistical objectives is improved without causing additional costs.
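As one illustration of a normalized load-scattering measure and of batch splitting, the sketch below uses the coefficient of variation of period loads; this is an assumed stand-in for the authors' quantification approach (which is not reproduced in the abstract), and the threshold-based splitting rule is deliberately simplified:

```python
import statistics

def load_scatter(loads):
    """Normalized scatter of a series of period loads: the coefficient of
    variation (population standard deviation divided by the mean)."""
    return statistics.pstdev(loads) / statistics.mean(loads)

def split_batches(loads, threshold):
    """Simplified batch splitting: any period load above the threshold is
    halved, with the second half pushed into the following period."""
    out = list(loads) + [0.0]
    for i, w in enumerate(loads):
        if w > threshold:
            out[i] = w / 2
            out[i + 1] += w / 2
    return out[:-1] if out[-1] == 0 else out

before = [10, 40, 10, 20]   # work content hours per period
after = split_batches(before, 25)
print(load_scatter(before) > load_scatter(after))  # → True: splitting smooths the load
```

The point of the illustration is the mechanism: splitting large batches redistributes work content over time, which lowers the normalized scatter of the load on the workstation.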

Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering

Procedia PDF Downloads 378
436 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants comprised coal tar residues which were discharged from the coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as the primary source control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on distinguishing “mobile” from “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used to understand the origins of PAH contaminants in soils or sediments. Flux- and forensic-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included reviews of previous flux studies, current mass flux estimates, and a forensic assessment using PAH fingerprint techniques, during remediation of one of Canada’s most polluted sites at the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year.
The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAHs discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated an ongoing reduction in PAH concentrations in harbour sediments. The flux results were also corroborated by forensic source evaluations based on PAH fingerprint techniques, which found a common source of PAHs in urban soils and in marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, rather than migration of PAH-laden sediments from the STPs during a large-scale remediation project.

Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation

Procedia PDF Downloads 219
435 Vortex Flows under Effects of Buoyant-Thermocapillary Convection

Authors: Malika Imoula, Rachid Saci, Renee Gatignol

Abstract:

A numerical investigation is carried out to analyze vortex flows in a free-surface cylinder driven by independently rotating and differentially heated boundaries. As a basic uncontrolled isothermal flow, we consider configurations which exhibit steady axisymmetric toroidal-type vortices occurring at the free surface, under given rates of uniform rotation of the bottom disk and for selected aspect ratios of the enclosure. In the isothermal case, we show that sidewall differential rotation constitutes an effective kinematic means of flow control: the reverse flow regions may be suppressed under very weak co-rotation rates, while an enhancement of the vortex patterns is observed under weak counter-rotation. However, in this latter case, high rates of counter-rotation considerably reduce the strength of the meridional flow and cause its confinement to a narrow layer on the bottom disk, while the remaining bulk flow is diffusion dominated and controlled by the sidewall rotation. The main control parameters in this case are the rotational Reynolds number, the cavity aspect ratio, and the rotation rate ratio. The study then proceeds to consider the sensitivity of the vortex pattern, within the Boussinesq approximation, to a small temperature gradient set between the ambient fluid and a thin axial rod mounted on the cavity axis. Two additional parameters are introduced, namely the Richardson number Ri and the Marangoni number Ma (or the thermocapillary Reynolds number). Results revealed that reducing the rod length induces the formation of on-axis bubbles instead of toroidal structures. Besides, the stagnation characteristics are significantly altered under the combined effects of buoyant-thermocapillary convection. Buoyancy, induced under sufficiently high Ri, was shown to predominate over the thermocapillary motion, causing the enhancement (suppression) of breakdown when the rod is warmer (cooler) than the ambient fluid.
However, over small ranges of Ri, the sensitivity of the flow to surface tension gradients was clearly evidenced, and the results showed their full control over the occurrence and location of breakdown. In particular, the detailed timewise evolution of the flow indicated that weak thermocapillary motion was sufficient to prevent the formation of toroidal patterns. The latter detach from the surface and undergo considerable size reduction while moving towards the bulk flow, before vanishing. Further calculations revealed that the pattern reappears with increasing time as a steady bubble type on the rod. However, in the absence of the central rod, and also in the case of a small rod length l, the flow evolved into a steady state without any breakdown.
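The governing dimensionless groups named above can be computed directly from fluid properties. The definitions below are common textbook forms stated as an assumption; the paper's exact definitions (in particular its rotation rate ratio and characteristic scales) may differ:

```python
def rotational_reynolds(omega: float, radius: float, nu: float) -> float:
    """Rotational Reynolds number, Re = omega * R^2 / nu,
    with omega in rad/s, R in m, kinematic viscosity nu in m^2/s."""
    return omega * radius**2 / nu

def richardson(g: float, beta: float, dT: float, omega: float, radius: float) -> float:
    """Richardson number comparing buoyancy to rotation-driven inertia,
    Ri = g * beta * dT / (omega^2 * R) — one common definition."""
    return g * beta * dT / (omega**2 * radius)

def marangoni(dsigma_dT: float, dT: float, L: float, mu: float, alpha: float) -> float:
    """Marangoni number, Ma = |dsigma/dT| * dT * L / (mu * alpha),
    with dynamic viscosity mu and thermal diffusivity alpha."""
    return abs(dsigma_dT) * dT * L / (mu * alpha)

# Illustrative values only (water-like fluid, 5 K rod-to-ambient difference)
print(round(rotational_reynolds(10.0, 0.05, 1e-6)))  # → 25000
```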

Keywords: buoyancy, cylinder, surface tension, toroidal vortex

Procedia PDF Downloads 334
434 Composition and Catalytic Behaviour of Biogenic Iron Containing Materials Obtained by Leptothrix Bacteria Cultivation in Different Growth Media

Authors: M. Shopska, D. Paneva, G. Kadinov, Z. Cherkezova-Zheleva, I. Mitov

Abstract:

Iron-containing materials are used as catalysts in different processes. Chemical methods of their synthesis use toxic and expensive chemicals, sophisticated devices, and energy-consuming processes that raise their cost; moreover, dangerous waste products are formed. Such syntheses are now outdated, and waste-free technologies are indispensable. Bioinspired technologies are consistent with ecological requirements. Different microorganisms participate in the biomineralization of iron, and some phytochemicals are involved, too. The methods for biogenic production of iron-containing materials are clean, simple, non-toxic, realized at ambient temperature and pressure, and cheaper. Biogenic iron materials embrace different iron compounds. Due to their origin, these substances are nanosized, amorphous or poorly crystalline, and porous, and they have a number of useful properties, such as superparamagnetism (SPM), high magnetization, low toxicity, biocompatibility, absorption of microwaves, a high surface-area-to-volume ratio, and active surface sites with unusual coordination, that distinguish them from bulk materials. Biogenic iron materials are applied in heterogeneous catalysis in different roles: precursor, active component, support, immobilizer. The application of biogenic iron oxide materials gives rise to increased catalytic activity in comparison with materials of abiotic origin. In our study, we investigated the catalytic behavior of biomasses obtained by cultivating Leptothrix bacteria in three nutrient media: Adler, Fedorov, and Lieske. The biomass composition was studied by Moessbauer spectroscopy and transmission IRS. Catalytic experiments on CO oxidation were carried out using in situ DRIFTS. Our results showed that: i) the biomasses contain α-FeOOH, γ-FeOOH, and γ-Fe2O3 in different ratios; ii) the biomass formed in the Adler medium contains γ-FeOOH as its main phase.
The CO conversion was about 50%, as evaluated from the decreased integrated band intensity in the gas mixture spectra during the reaction. The main phase in the spent sample is γ-Fe2O3; iii) the biomass formed in the Lieske medium contains α-FeOOH. The CO conversion was about 20%. The main phase in the spent sample is α-Fe2O3; iv) the biomass formed in the Fedorov medium contains γ-Fe2O3 as its main phase. The CO conversion in the test reaction was about 19%. The results showed that the catalytic activity up to 200°C resulted predominantly from α-FeOOH and γ-FeOOH, while the activity at temperatures above 200°C was due to the formation of γ-Fe2O3. The oxyhydroxides, which are the principal compounds in the biomass, have low catalytic activity in the studied reaction; maghemite has relatively good catalytic activity; hematite has activity commensurate with that of the oxyhydroxides. Moreover, it can be affirmed that catalytic activity is inherent in maghemite obtained by transformation of biogenic lepidocrocite, i.e. maghemite with a biogenic precursor.
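The conversion figures quoted above follow from the ratio of integrated CO band areas before and during reaction, under the assumption (stated here, not in the abstract) that the band area is proportional to the gas-phase CO concentration; a minimal sketch with an illustrative function name:

```python
def co_conversion(band_area_initial: float, band_area_reaction: float) -> float:
    """Estimate CO conversion (%) from the integrated CO band area in
    DRIFTS gas-phase spectra, assuming band area ∝ CO concentration."""
    if band_area_initial <= 0:
        raise ValueError("initial band area must be positive")
    return 100.0 * (1.0 - band_area_reaction / band_area_initial)

# A band area halved during reaction corresponds to ~50% conversion
print(co_conversion(1.0, 0.5))  # → 50.0
```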

Keywords: nanosized biogenic iron compounds, catalytic behavior in CO oxidation, in situ DRIFTS, Moessbauer spectroscopy

Procedia PDF Downloads 353