Search results for: construction mechanism
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6876


546 Analyzing Water Waves in Underground Pumped Storage Reservoirs: A Combined 3D Numerical and Experimental Approach

Authors: Elena Pummer, Holger Schuettrumpf

Abstract:

To date, underground pumped storage plants, an outstanding alternative to classical pumped storage plants, do not exist. They are needed to ensure the required balance between production and demand of energy. As short- to medium-term storage, pumped storage plants have been used economically over a long period of time, but their expansion is locally limited, in particular by the required topography and the extensive human land use. Using underground reservoirs instead of surface lakes could increase the expansion options. While fulfilling the same functions, underground reservoirs involve several hydrodynamic processes that determine their specific design and must be accounted for in the planning of such systems. A combined 3D numerical and experimental approach leads to previously unknown results about the occurring wave types and their behavior in dependence on different design and operating criteria. For the 3D numerical simulations, OpenFOAM was used and combined with an experimental approach in the laboratory of the Institute of Hydraulic Engineering and Water Resources Management at RWTH Aachen University, Germany. Using the finite-volume method and an explicit time discretization, RANS simulations (k-ε) were run. Convergence analyses for different time discretizations, meshes, etc., and clear comparisons between both approaches lead to the result that the numerical and experimental models can be combined and used as a hybrid model. Undular bores, partly with secondary waves, and breaking bores occurred in the underground reservoir. Different water levels and discharges change both the global effects, defined as the time-dependent average of the water level, and the local processes, defined as the single, local hydrodynamic processes (water waves). 
Design criteria such as branches, directional changes, changes in cross-section or bottom slope, as well as changes in roughness have a great effect on the local processes, whereas the global effects remain unaffected. Design calculations for underground pumped storage plants were developed on the basis of existing formulae and the results of the hybrid approach. Using these design calculations, reservoir heights as well as oscillation periods can be determined, yielding knowledge of the construction and operation possibilities of the plants. Consequently, future plants can be hydraulically optimized by applying the design calculations to the local boundary conditions.
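The undular/breaking distinction reported above can be sketched with the classical bore Froude number from the moving-bore momentum balance; the transition threshold of roughly Fr ≈ 1.7 is a common literature value, not a figure from this abstract, so the snippet is only an illustrative sketch, not the authors' design calculation:

```python
import math

def bore_froude(h1, h2, g=9.81):
    """Froude number of a bore advancing into water of depth h1 and raising
    it to depth h2, from the conjugate-depth (momentum) relation:
    Fr^2 = (h2/h1) * (h2/h1 + 1) / 2."""
    r = h2 / h1
    return math.sqrt(0.5 * r * (r + 1.0))

def classify_bore(h1, h2, fr_limit=1.7):
    """Undular bores (often with secondary waves) occur at low Froude numbers;
    above roughly Fr ~ 1.7 (assumed threshold) the bore front breaks."""
    return "undular" if bore_froude(h1, h2) < fr_limit else "breaking"
```

For design use, the actual thresholds and formulae would have to come from the hybrid-model results described in the abstract.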

Keywords: energy storage, experimental approach, hybrid approach, undular and breaking bores, 3D numerical approach

Procedia PDF Downloads 213
545 Media Representations of Gender: Intersectional Analysis of Impact/Influence on Collective Consciousness and Perceptions of Feminism, Gender, and Gender Equality: Evidence from Cultural/Media Sources in Nigeria

Authors: Olatawura O. Ladipo-Ajayi

Abstract:

The concept of gender equality is not new, nor are the efforts and movements toward achieving it. The idea of gender equality originates in the early feminist movements of the 1880s and their subsequent waves, all fighting to promote gender rights and equality with a focus on varying aspects and groups. Nonetheless, progress toward gender equality is not advancing at similar rates across the world and across groups. This uneven progress is often due to varying social, cultural, political, and economic factors, some of which underpin intersectional identities and influence the perceptions of gender and associated gender roles that create gender inequality. In assessing perceptions of gender and the assigned roles or expectations that cause inequalities, intersectionality provides a framework to interrogate how these perceptions are molded and reinforced to create marginalization. Intersectionality is increasingly becoming a lens and approach for better understanding inequalities and oppression, gender rights and equality, the challenges to their achievement, and how best to move forward in the fight for gender rights, inclusion, and equality. In light of this, this paper looks at intersectional representations of gender in the media within cultural/social contexts, particularly entertainment media, and how these influence perceptions of gender and impact progress toward achieving gender equality and advocacy. Furthermore, the paper explores how various identities and, to an extent, personal experiences play a role in the perceptions and representations of gender, as well as influence the development of policies that promote gender equality in general. Finally, the paper applies qualitative and auto-ethnographic research methods, building on intersectional and social construction frameworks, to analyze gender representation in media using a literature review of scholarly works, news items, and cultural/social sources such as Nigerian movies. 
It concludes that media influence ideas and perceptions of gender, gender equality, and rights, and that not enough is being done in the media in the global south to challenge hegemonic patriarchal and binary concepts of gender. As such, the growth of feminism and the attainment of gender equality are slow, and the concepts are often misunderstood. There is a need to leverage media outlets to influence perceptions and start informed conversations on gender equality and feminism, and to build collective consciousness locally to improve advocacy for equal gender rights. Changing the gender narrative in everyday media, including entertainment media, is one way to influence public perceptions of gender, promote the concept of gender equality, and advocate for policies that support equality.

Keywords: gender equality, gender roles/socialization, intersectionality, representation of gender in media

Procedia PDF Downloads 106
544 Numerical Modeling and Experimental Analysis of a Pallet Isolation Device to Protect Selective Type Industrial Storage Racks

Authors: Marcelo Sanhueza Cartes, Nelson Maureira Carsalade

Abstract:

This research evaluates the effectiveness of a pallet isolation device for the protection of selective-type industrial storage racks. The device works only in the longitudinal direction of the aisle and consists of a platform installed on the rack beams. At both ends, the platform is connected to the rack structure by means of a spring-damper system working in parallel. A system of wheels is arranged between the isolation platform and the rack beams in order to reduce friction, decouple the movement, and improve the effectiveness of the device. The latter is evaluated by the reduction of the maximum dynamic responses of basal shear load and story drift relative to those of the same rack with the traditional construction system. In the first stage, numerical simulations of industrial storage racks were carried out with and without the pallet isolation device. The numerical results allowed us to identify the archetypes in which it would be most appropriate to carry out experimental tests, thus limiting the number of trials. In the second stage, experimental tests were carried out on a shaking table on a select group of full-scale racks with and without the proposed device. The movement simulated by the shaking table was based on the Mw 8.8 earthquake of February 27, 2010, in Chile, registered at the San Pedro de la Paz station. The peak ground acceleration (PGA) was scaled in the frequency domain to fit its response spectrum to the design spectrum of NCh433. The experimental setup included sensors to measure relative displacement and absolute acceleration. The movement of the shaking table with respect to the ground, the inter-story drift of the rack, and the movement of the pallets with respect to the rack structure were recorded. 
Accelerometers redundantly measured all of the above in order to corroborate the measurements and to adequately capture low- and high-frequency vibrations, for which displacement and acceleration sensors, respectively, are more reliable. The numerical and experimental results allowed us to identify the pallet isolation period as the variable with the greatest influence on the dynamic responses considered. It was also possible to establish that the proposed device significantly reduces both the basal shear load and the maximum inter-story drift, by up to one order of magnitude.

Keywords: pallet isolation system, industrial storage racks, basal shear load, interstory drift

Procedia PDF Downloads 73
543 Rapid Plasmonic Colorimetric Glucose Biosensor via Biocatalytic Enlargement of Gold Nanostars

Authors: Masauso Moses Phiri

Abstract:

Frequent glucose monitoring is essential to the management of diabetes. Plasmonic enzyme-based glucose biosensors have the advantages of greater specificity, simplicity and rapidity. The aim of this study was to develop a rapid plasmonic colorimetric glucose biosensor based on biocatalytic enlargement of gold nanostars (AuNS) guided by glucose oxidase (GOx). Gold nanoparticles of 18 nm in diameter were synthesized using the citrate method. Using these as seeds, a modified seeded method for the synthesis of monodispersed gold nanostars was followed. Both the spherical and star-shaped nanoparticles were characterized using ultraviolet-visible spectroscopy, agarose gel electrophoresis, dynamic light scattering, high-resolution transmission electron microscopy and energy-dispersive X-ray spectroscopy. The feasibility of a plasmonic colorimetric assay through growth of AuNS by silver coating in the presence of hydrogen peroxide was investigated through several control and optimization experiments. Conditions for excellent sensing, such as the concentration of the detection solution in the presence of 20 µL AuNS, 10 mM 2-(N-morpholino)ethanesulfonic acid (MES), ammonia and hydrogen peroxide, were optimized. Using the optimized conditions, the glucose assay was developed by adding 5 mM GOx to the solution together with varying concentrations of glucose. Kinetic readings as well as color changes were observed. The results showed that the AuNS absorbance values blue-shifted and increased as the glucose concentration was elevated. Control experiments indicated no growth of AuNS in the absence of GOx, glucose or molecular O₂. Increased glucose concentration led to enhanced growth of AuNS. The detection of glucose was also possible by naked eye. The color development was near complete in about 10 minutes. The kinetic readings, monitored at 450 and 560 nm, showed that the assay could discriminate between different glucose concentrations by about 50 seconds and was near complete at about 120 seconds. 
A calibration curve for the quantitative measurement of glucose was derived. The magnitude of the wavelength shifts and the absorbance values increased concomitantly with glucose concentration up to 90 µg/mL, beyond which they leveled off. The range of glucose concentrations producing a blue shift in the localized surface plasmon resonance (LSPR) absorption maxima was found to be 10-90 µg/mL. The limit of detection was 0.12 µg/mL. This enabled the construction of a direct plasmonic colorimetric glucose sensor using AuNS that is rapid, sensitive and cost-effective, with naked-eye detection. It has great potential for technology transfer to point-of-care devices.
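The calibration and detection-limit figures above follow a standard workflow that can be sketched as follows; the data values below are hypothetical, and the 3σ/slope convention is the usual IUPAC definition rather than anything stated in the abstract:

```python
import statistics

def linear_fit(x, y):
    """Least-squares slope and intercept for a linear calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def limit_of_detection(blank_signals, slope):
    """LOD = 3 * sd(blank) / slope (common IUPAC convention)."""
    return 3 * statistics.stdev(blank_signals) / slope
```

In practice the calibration points would be the absorbance (or LSPR shift) readings over the 10-90 µg/mL linear range reported above.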

Keywords: colorimetric, gold nanostars, glucose, glucose oxidase, plasmonic

Procedia PDF Downloads 153
542 Investigating English Dominance in a Chinese-English Dual Language Program: Teachers' Language Use and Investment

Authors: Peizhu Liu

Abstract:

Dual language education, also known as immersion education, differs from traditional language programs that teach a second or foreign language as a subject. Instead, dual language programs adopt a content-based approach, using both a majority language (e.g., English, in the case of the United States) and a minority language (e.g., Spanish or Chinese) as a medium of instruction to teach math, science, and social studies. By granting each language of instruction equal status, dual language education seeks to educate not only meaningfully but equitably and to foster tolerance and appreciation of diversity, making it essential for immigrants, refugees, indigenous peoples, and other marginalized students. Despite the cognitive and academic benefits of dual language education, recent literature has revealed that English is disproportionately privileged across dual language programs. Scholars have expressed concerns about the unbalanced status of majority and minority languages in dual language education, as favoring English in this context may inadvertently reaffirm its dominance and moreover fail to serve the needs of children whose primary language is not English. Through a year-long study of a Chinese-English dual language program, the extensively disproportionate use of English has also been observed by the researcher. However, despite the fact that Chinese-English dual language programs are the second-most popular program type after Spanish in the United States, this issue remains underexplored in the existing literature on Chinese-English dual language education. In fact, the number of Chinese-English dual language programs being offered in the U.S. has grown rapidly, from 8 in 1988 to 331 as of 2023. Using Norton and Darvin's investment model theory, the current study investigates teachers' language use and investment in teaching Chinese and English in a Chinese-English dual language program at an urban public school in New York City. 
The program caters to a significant number of minority children from working-class families. Adopting an ethnographic and discourse analytic approach, this study seeks to understand language use dynamics in the program and how micro- and macro-factors, such as students' identity construction, parents' and teachers' language ideologies, and the capital associated with each language, influence teachers' investment in teaching Chinese and English. The research will help educators and policymakers understand the obstacles that stand in the way of the goal of dual language education—that is, the creation of a more inclusive classroom, which is achieved by regarding both languages of instruction as equally valuable resources. The implications for how to balance the use of the majority and minority languages will also be discussed.

Keywords: dual language education, bilingual education, language immersion education, content-based language teaching

Procedia PDF Downloads 85
541 Time-Interval between Rectal Cancer Surgery and Reintervention for Anastomotic Leakage and the Effects of a Defunctioning Stoma: A Dutch Population-Based Study

Authors: Anne-Loes K. Warps, Rob A. E. M. Tollenaar, Pieter J. Tanis, Jan Willem T. Dekker

Abstract:

Anastomotic leakage after colorectal cancer surgery remains a severe complication. Early diagnosis and treatment are essential to prevent further adverse outcomes. In the literature, it has been suggested that earlier reintervention is associated with better survival, but anastomotic leakage can occur with a highly variable time interval to index surgery. This study aims to evaluate the time-interval between rectal cancer resection with primary anastomosis creation and reoperation, in relation to short-term outcomes, stratified for the use of a defunctioning stoma. Methods: Data of all primary rectal cancer patients that underwent elective resection with primary anastomosis during 2013-2019 were extracted from the Dutch ColoRectal Audit. Analyses were stratified for defunctioning stoma. Anastomotic leakage was defined as a defect of the intestinal wall or abscess at the site of the colorectal anastomosis for which a reintervention was required within 30 days. Primary outcomes were new stoma construction, mortality, ICU admission, prolonged hospital stay and readmission. The association between time to reoperation and outcome was evaluated in three ways: Per 2 days, before versus on or after postoperative day 5 and during primary versus readmission. Results: In total 10,772 rectal cancer patients underwent resection with primary anastomosis. A defunctioning stoma was made in 46.6% of patients. These patients had a lower anastomotic leakage rate (8.2% vs. 11.6%, p < 0.001) and less often underwent a reoperation (45.3% vs. 88.7%, p < 0.001). Early reoperations (< 5 days) had the highest complication and mortality rate. Thereafter the distribution of adverse outcomes was more spread over the 30-day postoperative period for patients with a defunctioning stoma. Median time-interval from primary resection to reoperation for defunctioning stoma patients was 7 days (IQR 4-14) versus 5 days (IQR 3-13 days) for no-defunctioning stoma patients. 
The mortality rates after primary resection and after reoperation were comparable (defunctioning vs. no-defunctioning stoma: 1.0% vs. 0.7%, P=0.106, and 5.0% vs. 2.3%, P=0.107, respectively). Conclusion: This study demonstrated that early reinterventions after anastomotic leakage are associated with worse outcomes (i.e., mortality). Possibly, the combination of a physiological dip in the cellular immune response and the release of cytokines following surgery, together with a release of endotoxins caused by the bacteremia originating from the leakage, leads to more profound sepsis. Another explanation might be that early leaks are not contained within the pelvis, leading to more profound sepsis requiring early reoperation. Leakage with or without a defunctioning stoma resulted in different types of reintervention and different time-intervals between surgery and reoperation.
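The reported P-values compare event proportions between the two stoma groups; a sketch of the standard two-proportion z-test follows (the counts are hypothetical, and the abstract does not state which test the authors used):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing event proportions x1/n1 vs x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```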

Keywords: rectal cancer surgery, defunctioning stoma, anastomotic leakage, time-interval to reoperation

Procedia PDF Downloads 138
540 Regularized Euler Equations for Incompressible Two-Phase Flow Simulations

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique for incompressible two-phase flow simulations. This technique is known as the observable method, after the notion of observability: any feature smaller than the actual resolution (physical or numerical), i.e., the size of the wire in hotwire anemometry or the grid size in numerical simulations, cannot be captured or observed. Unlike most regularization techniques, which are applied at the numerical discretization, the observable method is employed at the PDE level during the derivation of the equations. Difficulties in the simulation and analysis of realistic fluid flow often result from discontinuities (or near-discontinuities) in the calculated fluid properties or state. Accurately capturing these discontinuities is especially crucial when simulating flows involving shocks, turbulence or sharp interfaces. Over the past several years, investigations of this new regularization technique have shown its capability of simultaneously regularizing shocks and turbulence. The observable method has been applied to direct numerical simulations of shocks and turbulence, where the discontinuities are successfully regularized and flow features are well captured. In the current paper, the observable method is extended to two-phase interfacial flows. Multiphase flows share a feature with shocks and turbulence, namely the nonlinear irregularity caused by the nonlinear terms in the governing equations, here the Euler equations. In direct numerical simulations of two-phase flows, the interfaces are usually treated as a smooth transition of properties from one fluid phase to the other. However, in high Reynolds number or low viscosity flows, the nonlinear terms generate smaller scales which sharpen the interface, causing discontinuities. 
Many numerical methods for two-phase flows fail in the high Reynolds number case, while others depend on the numerical diffusion of the spatial discretization. The observable method regularizes this nonlinear mechanism by filtering the convective terms, and this process is inviscid. The filtering effect is controlled by an observable scale, usually of about a grid length. A single rising bubble and the Rayleigh-Taylor instability are studied, in particular, to examine the performance of the observable method. A pseudo-spectral method, which introduces no numerical diffusion, is used for spatial discretization, and a Total Variation Diminishing (TVD) Runge-Kutta method is applied for time integration. The observable incompressible Euler equations are solved for these two problems. In the rising bubble problem, the terminal velocity and shape of the bubble are examined and compared with experiments and other numerical results. In the Rayleigh-Taylor instability, the shape of the interface is studied for different observable scales, and the spike and bubble velocities, as well as positions (under a proper observable scale), are compared with other simulation results. The results indicate that this regularization technique can potentially regularize the sharp interface in two-phase flow simulations.
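The filtering of the convective terms by an observable scale can be illustrated in 1D with an inverse-Helmholtz low-pass filter, a common choice in this family of regularizations; the implementation below (naive DFT on a periodic domain, with filter scale `alpha` playing the role of the observable scale) is a minimal sketch, not the authors' solver:

```python
import cmath
import math

def helmholtz_filter(u, alpha, L=2 * math.pi):
    """Apply the inverse-Helmholtz low-pass filter
    u_f = (1 - alpha^2 d^2/dx^2)^(-1) u on a periodic grid,
    i.e. divide each Fourier mode by (1 + alpha^2 k^2)."""
    n = len(u)
    # naive DFT: O(n^2), fine for a sketch; use an FFT in practice
    U = [sum(u[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
         for k in range(n)]
    for k in range(n):
        kk = k if k <= n // 2 else k - n        # signed mode index
        wav = 2 * math.pi * kk / L              # physical wavenumber
        U[k] /= (1 + alpha ** 2 * wav ** 2)
    # inverse DFT, keeping the real part (input is real)
    return [sum(U[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in range(n)).real / n for j in range(n)]
```

For example, with `alpha = 1` on a 2π-periodic domain, the `sin(x)` mode (wavenumber 1) is damped by exactly 1/(1 + 1) = 0.5, while constants pass through unchanged.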

Keywords: Euler equations, incompressible flow simulation, inviscid regularization technique, two-phase flow

Procedia PDF Downloads 502
539 Sugar-Induced Stabilization Effect of Protein Structure

Authors: Mitsuhiro Hirai, Satoshi Ajito, Nobutaka Shimizu, Noriyuki Igarashi, Hiroki Iwase, Shinichi Takata

Abstract:

Sugars and polyols are known to be bioprotectants, preventing, for example, protein denaturation and enzyme deactivation, and are widely used as nontoxic additives in various industrial and medical products. The mechanism of their protective action has been explained by specific bindings between biological components and additives, changes in solvent viscosities, and surface tension and free energy changes upon transfer of those components into additive solutions. On the other hand, some organisms with tolerance to extreme environments produce stress proteins and/or accumulate sugars in their cells, a phenomenon called cryptobiosis. In particular, trehalose has been drawing attention in relation to cryptobiosis under external stress such as high or low temperature, drying, osmotic pressure, and so on. The function of trehalose in cryptobiosis has been explained by the restriction of intra- and/or inter-molecular movement through vitrification, or by the replacement of water molecules by trehalose. Previous results suggest that the structure of, and interaction between, sugar and water are key determinants for understanding cryptobiosis. Recently, we have shown direct evidence that protein hydration (solvation) and the structural stability against chemical and thermal denaturation significantly depend on sugar species and glycerol. Sugar and glycerol molecules tend to be preferentially or weakly excluded from the protein surface and preserve the native protein hydration shell. Owing to this protective action on the protein hydration shell, the protein structure is stabilized against chemical (guanidinium chloride) and thermal denaturation. The protective action depends on the sugar species. To understand this trend and difference in detail, it is essential to clarify the characteristics of solutions containing these additives. 
In this study, using a wide-angle X-ray scattering technique covering a wide spatial region (~3-120 Å), we have clarified the structures of sugar solutions with concentrations from 5% w/w to 65% w/w. The sugars measured in the present study were monosaccharides (glucose, fructose, mannose) and disaccharides (sucrose, trehalose, maltose). Owing to scattering data observed with a wide spatial resolution, we succeeded in obtaining information on the internal structure of the individual sugar molecules and on the correlation between them. Every sugar gradually shortened the average inter-molecular distance as the concentration increased. The inter-molecular interaction between sugar molecules was essentially exclusive for every sugar, appearing as a repulsive correlation hole. This trend was weaker for trehalose than for the other sugars. The intermolecular distance and the spread of the individual molecules clearly depended on the sugar species. We will discuss the relation between the characteristics of sugar solutions and their protective action on biological materials.
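The link between a scattering peak position and the average inter-molecular distance discussed above is the usual reciprocal-space relation d ≈ 2π/q; a minimal sketch (the angle-to-q conversion assumes elastic scattering, and any numbers used are purely illustrative):

```python
import math

def q_from_angle(two_theta_deg, wavelength):
    """Momentum transfer q = 4*pi*sin(theta)/lambda for elastic scattering,
    with the full scattering angle 2*theta given in degrees."""
    theta = math.radians(two_theta_deg) / 2.0
    return 4.0 * math.pi * math.sin(theta) / wavelength

def correlation_distance(q_peak):
    """Approximate real-space correlation distance d = 2*pi/q
    for a structure-factor peak at q_peak."""
    return 2.0 * math.pi / q_peak
```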

Keywords: hydration, protein, sugar, X-ray scattering

Procedia PDF Downloads 156
538 Application of NBR 14861:2011 for the Design of Prestressed Hollow Core Slabs Subjected to Shear

Authors: Alessandra Aparecida Vieira França, Adriana de Paula Lacerda Santos, Mauro Lacerda Santos Filho

Abstract:

The purpose of this research is to study the behavior of precast prestressed hollow core slabs subjected to shear. To this end, shear tests were performed on hollow core slabs 26.5 cm thick, with and without a 5 cm concrete topping, with no cores filled, with two cores filled, and with three cores filled with concrete. The tests were performed according to the procedures recommended by FIP (1992) and EN 1168:2005, following the method presented in Costa (2009). The ultimate shear strength obtained in the tests was compared with the theoretical shear resistance calculated in accordance with the codes used in Brazil, namely NBR 6118:2003 and NBR 14861:2011. When calculating the shear resistance through the equations presented in NBR 14861:2011, it was found that its provision is much more accurate for the shear strength of hollow core slabs than that of NBR 6118. Due to the large difference between the calculated results, even for slabs without filled cores, the authors consulted the committee that drafted NBR 14861:2011 and found an error in the text of the standard: the suggested coefficient is actually double the required value. ABNT soon issued an amendment to NBR 14861:2011 with the necessary corrections. The tests in the present study confirmed that the concrete filling the cores contributes to the shear strength of hollow core slabs. For slabs 26.5 cm thick, however, the filling should be limited to a maximum of two cores, because most of the results for slabs with three filled cores were smaller. This confirmed the recommendation of NBR 14861:2011, which is consistent with standard practice. 
After analyzing the cracking configuration and failure mechanisms of the hollow core slabs during the shear tests, strut-and-tie models were developed representing the forces acting on the slab at the moment of rupture. Through these models the authors calculated the tensile stress acting on the concrete ties (ribs) and scaled the geometry of these ties. The research leads to the following conclusions: the experimental results show that the failure mechanism of hollow core slabs can be predicted using the strut-and-tie procedure with good accuracy; the Brazilian standard required correction of the duplicated factor on σcp in NBR 14861:2011; and the number of cores (holes) filled with concrete to increase the shear resistance should be limited. It is also suggested to extend the test series for 26.5 cm thick slabs, and for a larger range of slab thicknesses, in order to obtain shear test results with cores concreted after the release of the prestressing force. A further set of shear tests should be performed on slabs with filled cores and a concrete topping reinforced with welded steel mesh, for comparison with theoretical values calculated by the new revision of NBR 14861:2011.
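The shear-resistance provisions discussed above follow the general Eurocode-style form for members without shear reinforcement, in which a prestress term proportional to σcp is added to the concrete contribution; the sketch below uses illustrative coefficients (γc = 1.4, k1 = 0.15), not the normative values of NBR 14861:2011, and merely shows where a misprinted σcp factor would enter the calculation:

```python
def shear_resistance(bw, d, f_ck, rho_l, sigma_cp, k1=0.15, gamma_c=1.4):
    """Concrete shear resistance V_Rd,c (kN) in the Eurocode-2 style form
    for members without shear reinforcement; bw and d in mm, f_ck and
    sigma_cp in MPa, rho_l dimensionless. Coefficients are illustrative,
    not the normative values of any specific code."""
    k = min(1.0 + (200.0 / d) ** 0.5, 2.0)      # size-effect factor
    c_rdc = 0.18 / gamma_c
    # k1 multiplies sigma_cp: this is the coefficient the abstract reports
    # was misprinted (doubled) in the original NBR 14861:2011 text
    v = (c_rdc * k * (100.0 * rho_l * f_ck) ** (1.0 / 3.0)
         + k1 * sigma_cp) * bw * d
    return v / 1000.0                            # N -> kN
```

Doubling `k1` by mistake inflates only the prestress contribution, which is why the discrepancy grew with the level of prestress in the tested slabs.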

Keywords: prestressed hollow core slabs, shear, strut-and-tie models

Procedia PDF Downloads 333
537 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach

Authors: Aboulkacem El Mehdi

Abstract:

We develop a theoretical model at the crossroads of labour and urban economics, used to explain the mechanism through which the duration of home-workplace trips and their monetary costs impact labour demand and supply in a spatially scattered labour market, and how these are impacted by a change in passenger transport infrastructures and services. The spatial disconnection between home and job opportunities is referred to as the spatial mismatch hypothesis (SMH). Its harmful impact on employment has been the subject of numerous theoretical propositions. However, all the theoretical models proposed so far are patterned on the American context, which is particular in that it is marked by racial discrimination against Blacks in the housing and labour markets. It is therefore only natural that most of these models are developed to reproduce a steady state characterized by agents carrying out their economic activities in a mono-centric city, with most unskilled jobs being created in the suburbs, far from the Blacks who dwell in the city centre, generating high unemployment rates for Blacks, while the White population resides in the suburbs and has a low unemployment rate. Our model does not rely on any racial discrimination and does not aim at reproducing a steady state in which these stylized facts are replicated; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as a starting point. One of the innovative aspects of the model consists in dealing with an SMH-related issue at an aggregate level. We link the parameters of the passenger transport system to employment in the whole area of a city. We consider here a city that consists of four areas: two of them are residential areas with unemployed workers, and the other two host firms looking for labour force. 
The workers compare the indirect utility of working in each area with the utility of unemployment and choose between applying for the job that generates the highest indirect utility or not applying. This arbitration accounts for the monetary and time expenditures generated by the trips between the residential areas and the working areas. Each of these expenditures is formulated clearly and explicitly, so that the impact of each can be studied separately from that of the other. The first findings show that unemployed workers living in an area with good transport infrastructure and services have a better chance of preferring activity to unemployment and are more likely to supply a higher 'quantity' of labour than those living in an area where transport infrastructure and services are poorer. We also show that firms located in the most accessible area receive many more applications and are more likely to hire the workers who provide the highest quantity of labour than firms located in the less accessible areas. We are currently working on the matching process between firms and job seekers and on how equilibrium between labour demand and supply occurs.
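The arbitration described above, comparing the indirect utility of each working area against the utility of unemployment, can be sketched as follows; the linear utility form and all parameter names are simplifying assumptions for illustration, not the paper's actual specification:

```python
def indirect_utility(wage, fare, commute_minutes, value_of_time):
    """Net utility of working in an area, after subtracting the monetary
    cost (fare) and the time cost of the home-workplace trip."""
    return wage - fare - value_of_time * commute_minutes

def application_choice(unemployment_utility, offers, value_of_time):
    """Apply to the area with the highest indirect utility, or stay
    unemployed if even the best offer is dominated.
    offers: dict mapping area -> (wage, fare, commute_minutes)."""
    best = max(offers, key=lambda a: indirect_utility(*offers[a], value_of_time))
    if indirect_utility(*offers[best], value_of_time) > unemployment_utility:
        return best
    return None   # unemployment preferred
```

In this sketch, improving transport to an area (lower fare or shorter commute) raises its indirect utility and thus the share of workers who prefer activity to unemployment, mirroring the model's first findings.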

Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics

Procedia PDF Downloads 294
536 Comparative Cost-Benefit Analysis of the Costs Caused by Earthquakes and the Costs of Retrofitting Buildings in Iran

Authors: Iman Shabanzadeh

Abstract:

Earthquakes are known as one of the most frequent natural hazards in Iran. Therefore, policy-making to improve the strengthening of structures is one of the requirements of an approach to prevent and reduce the risk of the destructive effects of earthquakes. In order to choose the optimal policy in the face of earthquakes, this article examines the cost of the financial damages caused by earthquakes in the building sector and compares it with the costs of retrofitting. In this study, the results of adopting the scenario of "action after the earthquake" and the policy scenario of "strengthening structures before the earthquake" have been collected, calculated and finally analyzed together. Methodologically, data received from governorates and building retrofitting engineering companies have been used. The scope of the study is earthquakes that occurred in the geographical area of Iran, and among them, eight earthquakes have been studied specifically: Miane, Ahar and Haris, Qator, Momor, Khorasan, Damghan and Shahroud, Gohran, Hormozgan and Ezgole. The main basis of the calculations is the data obtained from retrofitting companies regarding the cost per square meter of building retrofitting, and the governorate data regarding the destructive power of each earthquake and the realized costs for the reconstruction and construction of residential units. The estimated costs have been converted to 2021 values using the time-value-of-money method to enable comparison and aggregation. The cost-benefit comparison of the two policies, action after the earthquake and retrofitting before the earthquake, in the eight earthquakes investigated shows that the country has suffered five thousand billion Tomans of losses due to the lack of retrofitting of buildings against earthquakes. 
Based on the Budget Law of Iran, this figure was approximately twice the 2021 budget of the Ministry of Roads and Urban Development and five times that of the Islamic Revolution Housing Foundation. The results show that the policy of retrofitting structures before an earthquake is significantly more optimal than the competing scenario. Retrofitting before an earthquake prevents huge losses and, by increasing the number of earthquake-resistant houses, reduces the extent of earthquake destruction. Other positive effects of retrofitting, such as reduced mortality due to earthquake-resistant buildings and reduced economic and social impacts of earthquakes, further support the cost-effectiveness of the policy scenario of "strengthening structures before earthquakes" in Iran.
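The conversion of historical costs to a common 2021 base described above can be sketched as a simple compounding calculation. This is an illustrative sketch only, not the study's actual computation; the cost figure, year, and 20% annual rate below are hypothetical.

```python
def to_2021_value(cost, year, annual_rate):
    """Convert a cost incurred in `year` to its 2021 equivalent by
    compounding at a constant annual rate (time value of money)."""
    return cost * (1 + annual_rate) ** (2021 - year)

# Hypothetical example: a reconstruction cost of 100 billion Tomans
# incurred in 2012, compounded at an assumed 20% annual rate.
adjusted = to_2021_value(100.0, 2012, 0.20)
```

Once all losses and retrofitting costs are expressed in the same 2021 units, they can be summed and compared directly across the eight earthquakes.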

Keywords: disaster economy, earthquake economy, cost-benefit analysis, resilience

Procedia PDF Downloads 63
535 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved. The most challenging parameter in resilience assessment is downtime: the time a system needs to recover its services following a disaster event. Estimating the exact downtime of a system requires inputs and resources that are not always obtainable. Uncertainties in downtime estimation are usually handled with probabilistic methods, which require large amounts of historical data; the estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system, which allows the use of walk-down surveys that collect data in linguistic or numerical form and permits a fast and economical estimation of uncertain parameters. The first step of the method is to determine the building's vulnerability: a rapid visual screening acquires information about the analyzed building (e.g., year of construction, structural system, site seismicity). Fuzzy logic is then applied in a hierarchical scheme to determine the building's damageability, which is the main ingredient for estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3).
In this work, DT1 is computed by relating the building damageability obtained from the visual screening to component repair times already defined in the literature. DT2 and DT3 are estimated using the REDi™ Guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also identifies the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work aims to extend the current methodology from the downtime to the resilience of buildings, providing a simple tool that authorities can use for decision making.
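The hierarchical fuzzy idea can be sketched minimally: damageability (on a 0-1 scale) is fuzzified into linguistic classes, DT1 is obtained by a membership-weighted (centroid-style) defuzzification over class repair times, and the three components are combined. The class boundaries, repair times, and additive combination below are assumptions for illustration, not the paper's calibrated values.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

# Assumed linguistic damageability classes and their repair times (days).
CLASSES = {"slight": (0.0, 0.0, 0.4),
           "moderate": (0.2, 0.5, 0.8),
           "severe": (0.6, 1.0, 1.0)}
REPAIR_DAYS = {"slight": 30.0, "moderate": 120.0, "severe": 360.0}

def dt1(damageability):
    """Damage-induced downtime: membership-weighted average of repair times."""
    mu = {k: tri(damageability, *abc) for k, abc in CLASSES.items()}
    total = sum(mu.values())
    return sum(mu[k] * REPAIR_DAYS[k] for k in mu) / total

def downtime(damageability, dt2, dt3):
    """Total downtime: damage component plus delay and utility components."""
    return dt1(damageability) + dt2 + dt3
```

A real implementation would chain several such fuzzy inference steps hierarchically (structural system, site seismicity, etc.) before reaching the damageability score used here.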

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 160
534 A Quantitative Study on the “Unbalanced Phenomenon” of Mixed-Use Development in the Central Area of Nanjing Inner City Based on the Meta-Dimensional Model

Authors: Yang Chen, Lili Fu

Abstract:

Promoting urban regeneration in existing areas has been elevated to a national strategy in China. In this context, because of the multidimensional sustainability gains from intensive land use, mixed-use development has become an important objective of high-quality urban regeneration in the inner city. However, over the long period since China's reform and opening up, the "unbalanced phenomenon" of mixed-use development in China's inner cities has been severe. On the one hand, excessive focus on certain individual spaces has raised the level of mixed-use development in some areas substantially ahead of others, widening the gap between different parts of the inner city. On the other hand, excessive focus on a single dimension of the spatial organization of mixed-use development, such as functional mix or spatial capacity, has left other dimensions, such as pedestrian permeability, green environmental quality, and social inclusion, lagging or neglected. This phenomenon is particularly evident in the central area of the inner city and clearly runs counter to China's need for sustainable development in the new era. A rational qualitative and quantitative analysis of the "unbalanced phenomenon" will therefore help to identify the problem and provide a basis for formulating future optimization plans. This paper builds a dynamic evaluation method for mixed-use development based on a meta-dimensional model and then uses spatial evolution analysis and spatial consistency analysis in ArcGIS to reveal the "unbalanced phenomenon" over the past 40 years in the central city area of Nanjing, a typical Chinese city facing regeneration.
The results show that, compared with the increase in functional mix and capacity, the dimensions of residential space mix, public service facility mix, pedestrian permeability, and greenness in Nanjing's central city area lagged to different degrees, and that the unbalanced development problems differ across parts of the city center, so governance and planning for future mixed-use development need to address these problems individually. The methodology provides a tool for comprehensive, dynamic identification of changes in the level of mixed-use development, and the results deepen knowledge of how mixed-use development patterns evolve in China's inner cities and provide a reference for future regeneration practices.
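One common way to score a single dimension such as functional mix, per grid cell or block, is a normalized Shannon entropy over land-use shares; the gap between dimension scores then gives a simple imbalance measure. This is only an illustrative sketch of the kind of indicator involved, not the paper's meta-dimensional model, and the cell values are hypothetical.

```python
import math

def mix_entropy(shares):
    """Normalized Shannon entropy of land-use shares: 0 = single use, 1 = even mix."""
    n = len(shares)
    positive = [s for s in shares if s > 0]
    if n <= 1 or len(positive) <= 1:
        return 0.0
    return -sum(s * math.log(s) for s in positive) / math.log(n)

def imbalance(dimension_scores):
    """Gap between a cell's most- and least-developed dimensions."""
    return max(dimension_scores) - min(dimension_scores)

# Hypothetical cell: an even two-use mix out of four possible uses.
mix = mix_entropy([0.5, 0.5, 0.0, 0.0])
```

Mapping such per-cell scores over time is what allows lagging dimensions and widening gaps between areas to be detected spatially.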

Keywords: mixed-use development, unbalanced phenomenon, the meta-dimensional model, over the past 40 years of Nanjing, China

Procedia PDF Downloads 106
533 An Investigation of Tetraspanin Proteins’ Role in UPEC Infection

Authors: Fawzyah Albaldi

Abstract:

Urinary tract infections (UTIs) are the most prevalent infectious diseases, and >80% are caused by uropathogenic E. coli (UPEC). Infection occurs following adhesion to urothelial plaques on bladder epithelial cells, whose major protein constituents are the uroplakins (UPs). Two of the four uroplakins (UPIa and UPIb) are members of the tetraspanin superfamily, and the UPEC adhesin FimH is known to interact directly with UPIa. Tetraspanins are a diverse family of transmembrane proteins that generally act as "molecular organizers", binding different proteins and lipids to form tetraspanin-enriched microdomains (TEMs). Previous work by our group has shown that TEMs are involved in the adhesion of many pathogenic bacteria to human cells, and that adhesion can be blocked by tetraspanin-derived synthetic peptides, suggesting that tetraspanins may be valuable drug targets. In this study, we investigated the role of tetraspanins in UPEC adherence to bladder epithelial cells. Human bladder cancer cell lines (T24, 5637, RT4), commonly used as in vitro models of UPEC infection, were used in this project along with primary human bladder cells. The aim was to establish a model of UPEC adhesion/infection in order to evaluate the impact of tetraspanin-derived reagents on this process; such reagents could reduce the progression of UTI, particularly in patients with indwelling catheters. Tetraspanin expression on the bladder cells was investigated by qPCR and flow cytometry, with CD9 and CD81 generally highly expressed. Interestingly, although these cell lines have been used by other groups to investigate FimH antagonists, the uroplakin proteins (UPIa, UPIb, and UPIII) were poorly expressed at the cell surface, although some were present intracellularly. Attempts to differentiate the cell lines to induce cell-surface expression of these UPs were largely unsuccessful.
Pre-treatment of bladder epithelial cells with an anti-CD9 monoclonal antibody significantly decreased UPEC infection, whereas anti-CD81 had no effect. A short (15 aa) synthetic peptide corresponding to the large extracellular region (EC2) of CD9 also significantly reduced UPEC adherence, and we demonstrated specific binding of a fluorescently tagged version of this peptide to the cells. CD9 is known to associate with a number of heparan sulphate proteoglycans (HSPGs) that have also been implicated in bacterial adhesion. Here, we demonstrated that unfractionated heparin (UFH) and heparin analogues significantly inhibited UPEC adhesion to RT4 cells, as did pre-treatment of the cells with heparinases. Pre-treatment with chondroitin sulphate (CS) or chondroitinase also significantly decreased UPEC adherence to RT4 cells. This study may shed light on a common pathogenicity mechanism involving the organisation of HSPGs by tetraspanins. In summary, although the bladder cell lines proved unsuitable for investigating the role of uroplakins in UPEC adhesion, we demonstrated roles for CD9 and cell-surface proteoglycans in this interaction. Agents that target these may be useful in treating or preventing UTIs.

Keywords: UTIs, tspan, uroplakins, CD9

Procedia PDF Downloads 104
532 Drying Shrinkage of Concrete: Scale Effect and Influence of Reinforcement

Authors: Qier Wu, Issam Takla, Thomas Rougelot, Nicolas Burlion

Abstract:

In the framework of the French underground disposal of intermediate-level radioactive waste, concrete is widely used as a construction material for containers and tunnels. Drying shrinkage is one of the most disadvantageous phenomena in concrete structures: cracks generated by differential shrinkage can impair the mechanical behavior, increase the permeability of concrete, and act as a preferential path for aggressive species, leading to an overall decrease in durability and serviceability. Understanding the drying shrinkage phenomenon is therefore of great interest for predicting, and even controlling, the strains of concrete. One question is whether results obtained from laboratory samples are in accordance with measurements on a real structure; another concerns the influence of reinforcement on the drying shrinkage of concrete. As part of a global project with Andra (the French National Radioactive Waste Management Agency), the present study experimentally investigates the scale effect as well as the influence of reinforcement on the development of drying shrinkage of two high-performance concretes (based on CEM I and CEM V cements, according to European standards). Various sample sizes are used, from ordinary laboratory specimens up to real-scale specimens: prismatic specimens with different volume-to-surface (V/S) ratios, thin slices (2 mm thick), cylinders of different sizes (37 and 160 mm in diameter), hollow cylinders, cylindrical columns (1000 mm high), and square columns (320×320×1000 mm). The square columns were manufactured with different reinforcement ratios and can be considered mini-structures approximating the behavior of a real voussoir from the waste disposal facility. All samples are kept, in a first stage, at 20°C and 50% relative humidity (the initial conditions in the tunnel) in a specific climatic chamber developed by the Laboratory of Mechanics of Lille.
The mass evolution and the drying shrinkage are monitored regularly. The results show that specimen size has a great impact on the water loss and drying shrinkage of concrete: specimens with a smaller V/S ratio and a smaller size exhibit greater drying shrinkage. The correlation between mass variation and drying shrinkage follows the same tendency for all specimens despite the size difference. However, the influence of the reinforcement ratio on drying shrinkage is not clear from the present results; the second stage of conservation (50°C and 30% relative humidity) could provide additional insight into these influences.
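The reported correlation between mass variation and drying shrinkage can be checked with an ordinary least-squares fit of shrinkage against mass loss, done per specimen. A sketch with made-up data points follows; the real study data are not reproduced here.

```python
def ols_fit(x, y):
    """Ordinary least-squares slope and intercept of y against x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical monitoring readings for one specimen:
# mass loss (%) vs drying shrinkage (um/m).
mass_loss = [0.0, 0.5, 1.0, 1.5, 2.0]
shrinkage = [0.0, 110.0, 220.0, 330.0, 440.0]
slope, intercept = ols_fit(mass_loss, shrinkage)  # shrinkage per % mass loss
```

Comparing the fitted slopes across specimens of different sizes is one way to verify that the mass-loss/shrinkage tendency is indeed common to all of them.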

Keywords: concrete, drying shrinkage, mass evolution, reinforcement, scale effect

Procedia PDF Downloads 185
531 Unfolding Architectural Assemblages: Mapping Contemporary Spatial Objects' Affective Capacity

Authors: Panagiotis Roupas, Yota Passia

Abstract:

This paper aims to establish an index of design mechanisms, immanent in spatial objects, based on the affective capacity of their material formations. While spatial objects (design objects, buildings, urban configurations, etc.) are regarded, within the premises of assemblage theory, as systems composed of interacting parts, their ability to affect and to be affected has not yet been mapped or sufficiently explored. This ability lies in excess, a latent potentiality they contain, not transcendental but immanent in their pre-subjective aesthetic power. Since spatial structures are theorized as assemblages, composed of heterogeneous elements that enter into relations with one another, and since all assemblages are parts of larger assemblages, the ability of their components to engage is contingent. We thus seek to unfold the mechanisms inherent in spatial objects that allow the constituent parts of design assemblages to perpetually enter into new assemblages. To map an architectural assemblage's affective ability, spatial objects are analyzed along two axes. The first axis focuses on the relations that the assemblage's material and expressive components develop in order to enter assemblages. Material components are the material elements an assemblage requires in order to exist, while expressive components include non-linguistic elements (sense impressions) as well as linguistic ones (beliefs). The second axis records the processes known as a-signifying signs, or a-signs, the triggering mechanisms able to territorialize or deterritorialize, stabilize or destabilize the assemblage and thus allow it to assemble anew. As a-signs cannot be isolated from matter, we point to their resulting effects, which, without entering the linguistic level, are expressed in terms of intensity fields: modulations, movements, speeds, rhythms, spasms, etc. They belong to a molecular level where they operate in the pre-subjective world of perceptions, affects, drives, and emotions.
A-signs have been introduced as intensities that transform the object beyond meaning, beyond fixed or known cognitive procedures. To that end, from an archive of more than 100 spatial objects by contemporary architects and designers, we have created an affective mechanisms index in which each a-sign is connected with the list of effects it triggers, which thoroughly defines it; conversely, the same effect can be triggered by different a-signs, allowing the design object to lie in a perpetual state of becoming. To define spatial objects, a-signs are categorized in terms of their aesthetic power to affect and to be affected under the general categories of form, structure, and surface. Thus, each part's degree of contingency is evaluated and measured, and a-signs are finally introduced as material information that is immanent in the spatial object while conferring no meaning; they convey information without semantic content. Through this index, we are able to analyze and direct the final form of the spatial object while establishing a mechanism to measure its continuous transformation.

Keywords: affective mechanisms index, architectural assemblages, a-signifying signs, cartography, virtual

Procedia PDF Downloads 129
530 The Spatial Analysis of Wetland Ecosystem Services Valuation on Flood Protection in Tone River Basin

Authors: Tingting Song

Abstract:

Wetlands are significant ecosystems that provide a variety of ecosystem services for humans: they provide water and food resources, purify water, regulate climate, protect biodiversity, and offer cultural, recreational, and educational resources. Wetlands also reduce the damage from floods, storms, and soil erosion, yet these flood protection services are often ignored. Due to climate change, floods caused by extreme weather have occurred frequently in recent years, affecting people's production and daily life and causing growing economic losses. The study area is the Tone River basin in the Kanto region of Japan. The Tone is the second-longest river in Japan and has the country's largest basin area, and the basin still suffers heavy economic losses from floods; it is also one of the rivers supplying water to Tokyo and has an important influence on economic activity in Japan. The purpose of this study was to investigate land-use changes of wetlands in the Tone River basin and whether there are spatial differences in the value of wetland functions in mitigating flood-induced economic losses. The study analyzed wetland land-use change based on Landsat data from 1980 to 2020. Combined with flood economic losses, wetland area, GDP, population density, and other socio-economic data, a geographically weighted regression model was constructed to analyze the spatial differences in wetland ecosystem service value. At present, flood protection relies mainly on hard infrastructure such as dams and reservoirs, but excessive dependence on hard engineering places heavy financial pressure on the government and has a large impact on the ecological environment. Natural wetlands can also play a role in flood management while providing diverse ecosystem services, and their construction and maintenance costs are lower than those of hard engineering.
Although it is not easy to say which approach is more effective for flood management, when the marginal value of a wetland is greater than the economic loss caused by flooding per unit area, relying on the wetland's flood storage capacity to reduce flood impacts may be considered; this can also promote the sustainable development of the wetland ecosystem. Moreover, spatial analysis of wetland values can provide a more effective strategy for flood management in the Tone River basin.
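The core of a geographically weighted regression is that each location gets its own regression, with observations down-weighted by distance, typically via a Gaussian kernel. Below is a minimal single-predictor sketch of one local fit; the bandwidth and data are assumptions, and the study's actual model and variables are richer.

```python
import math

def gaussian_weight(distance, bandwidth):
    """Gaussian kernel: nearby observations count more than distant ones."""
    return math.exp(-0.5 * (distance / bandwidth) ** 2)

def gwr_local_fit(x, y, distances, bandwidth):
    """Weighted least-squares slope and intercept at one regression point,
    with weights decaying by distance from that point."""
    w = [gaussian_weight(d, bandwidth) for d in distances]
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return slope, my - slope * mx
```

Repeating the local fit at every location yields a surface of coefficients, which is what reveals spatial differences in wetland service value.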

Keywords: wetland, geospatial weighted regression, ecosystem services, environment valuation

Procedia PDF Downloads 101
529 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications

Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon

Abstract:

The focus in the automotive industry is to reduce interaction between human operators and machines so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects that sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time; joining two batches of tape requires a splice securing the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing: adhesive, binding, and stitching. The adhesive technique places an additional adhesive on the tape ends to be joined; binding uses the binding agent already impregnated in the tape, activated by the application of heat; and stitching serves as a baseline against which to compare the new methods. Because the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. Samples were manufactured with controlled overlap, alignment, and splicing parameters and then tested in tension using a tensile testing machine.
The initial analysis explored the use of the binding agent impregnated in the tape, as in the binding splicing technique, and examined the effect of temperature and overlap on the strength of the splice. The optimum splicing temperature was found to be at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was 25 mm; no improvement in bond strength was observed from 25 mm to 30 mm. The final analysis compared the splicing methods against the stitched baseline: the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N, compared with 26 N for the stitching splice and 94 N for the binding method.
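The two HSAFP requirements stated above (a tensile capacity of at least 50 N and a splice created in under 6 s) amount to a simple pass/fail screen. The sketch below applies it to the maximum loads reported in the abstract; the splice times are placeholders, since the abstract does not report them.

```python
def meets_spec(max_load_n, splice_time_s, min_load_n=50.0, max_time_s=6.0):
    """Screen a splice against the HSAFP load and time requirements."""
    return max_load_n >= min_load_n and splice_time_s <= max_time_s

# Maximum loads (N) from the tensile tests; times (s) are hypothetical.
splices = {"adhesive": (500.0, 4.0),
           "binding": (94.0, 5.0),
           "stitching": (26.0, 30.0)}
passing = [name for name, (load, t) in splices.items() if meets_spec(load, t)]
```

Under these assumed times, only the automated methods pass, which is consistent with the study's conclusion that the adhesive splice is the strongest candidate.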

Keywords: analysis, automated fibre placement, high speed, splicing

Procedia PDF Downloads 155
528 Biomedicine, Suffering, and Sacrifice: Myths and Prototypes in Cell and Gene Therapies

Authors: Edison Bicudo

Abstract:

Cell and gene therapies (CGTs) result from the intense manipulation of cells or the use of techniques such as gene editing. They have been increasingly used to tackle rare diseases or conditions of genetic origin, such as cancer. One might expect such a complex scientific field to be dominated by scientific findings and evidence-based explanations. However, people engaged in scientific argumentation also mobilize a range of cognitive operations of which they are not fully aware, in addition to drawing on widely available oral traditions. This paper analyses how experts discussing the potentialities and challenges of CGTs have recourse to a particular kind of prototypical myth. This sociological study, conducted at the University of Sussex (UK), involved interviews with scientists, regulators, and entrepreneurs involved in the development or governance of CGTs. It was observed that these professionals, when voicing their views, sometimes have recourse to narratives in which CGTs appear as promising tools for alleviating or curing diseases, at the cost of much personal, scientific, and financial sacrifice. In his study of traditional narratives, Hogan identified three prototypes: the romantic narrative, moved by the ideal of romantic union; the heroic narrative, moved by the desire for political power; and the sacrificial narrative, whose ideal is plenty, well-being, and health. It is argued here that discourses around CGTs often involve narratives, or myths, of a sacrificial nature: the development of innovative therapies is depicted as a huge sacrificial endeavor involving biomedical scientists, biotech and pharma companies, and decision-makers. These sacrificial accounts draw on oral traditions and benefit from an emotional intensification easily achieved in stories of serious disease and physical suffering.
Furthermore, these accounts draw on metaphorical understandings where diseases and vectors of diseases are considered enemies or invaders while therapies are framed as shields or protections. In this way, this paper aims to unravel the cognitive underpinnings of contemporary science – and, more specifically, biomedicine – revealing how myths, prototypes, and metaphors are highly operative even when complex reasoning is at stake. At the same time, this paper demonstrates how such hidden cognitive operations underpin the construction of powerful ideological discourses aimed at defending certain ways of developing, disseminating, and governing technologies and therapies.

Keywords: cell and gene therapies, myths, prototypes, metaphors

Procedia PDF Downloads 21
527 Introducing an Innovative Structural Fuse for Creation of Repairable Buildings with See-Saw Motion during Earthquake and Investigating It by Nonlinear Finite Element Modeling

Authors: M. Hosseini, N. Ghorbani Amirabad, M. Zhian

Abstract:

Seismic design codes accept structural and nonstructural damage after severe earthquakes (provided that the building is prevented from collapse), so in many cases demolition and reconstruction of the building is inevitable, which is usually difficult, costly, and time consuming. Designing and constructing buildings so that they can be easily repaired after earthquakes, even major ones, is therefore highly desirable. For this purpose, giving the building structure, partially or as a whole, the possibility of rocking or see-saw motion has been used by some researchers in the recent decade. The central support plays a key role in creating the possibility of see-saw motion in the building's structural system. In this paper, paying particular attention to that key role, an innovative energy dissipater that can act as the central fuse and support of a building with see-saw motion is introduced, and the process of reaching an optimal geometry for it through finite element analysis is presented. Several geometric shapes were considered for the proposed central fuse and support. In each case, the hysteretic moment-rotation behavior of the fuse was obtained under the simultaneous effect of vertical and horizontal loads by nonlinear finite element analyses. To find the optimal geometric shape, the maximum plastic strain in the fuse body was taken as the main parameter; the rotational stiffness of the fuse under the acting moments is another important parameter. The proposed fuse and support can be called the Yielding Curved Bars and Clipped Hemisphere Core (YCB&CHC, or more briefly YCB) energy dissipater. Extensive nonlinear finite element analyses showed that using a rectangular section for the curved bars gives more reliable results.
The YCB energy dissipater with the optimal shape was then used in a structural model of a 12-story regular building as its central fuse and support, giving it the possibility of see-saw motion, and its seismic responses were compared with those of the building in fixed-base conditions, subjected to the three-component accelerations of several selected earthquakes, including Loma Prieta, Northridge, and Parkfield. In the building with see-saw motion, simple yielding-plate energy dissipaters were also used under the circumferential columns. The results indicated that equipping the building with central and circumferential fuses remarkably reduces its seismic responses, including base shear, inter-story drift, and roof acceleration. In effect, the proposed technique concentrates the plastic deformations in the fuses at the lowest story of the building, so that the main body of the structure remains essentially elastic and the building can be easily repaired after an earthquake.
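The hysteretic moment-rotation loops used to evaluate the fuses encode dissipated energy as the area enclosed by each loop, which can be computed with the shoelace formula over one sampled cycle. The sketch below uses an idealized rectangular loop; the actual YCB loops come from the nonlinear finite element analyses.

```python
def dissipated_energy(rotation, moment):
    """Energy dissipated per cycle: area enclosed by the closed
    moment-rotation hysteresis loop (shoelace formula)."""
    n = len(rotation)
    twice_area = 0.0
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the loop
        twice_area += rotation[i] * moment[j] - rotation[j] * moment[i]
    return abs(twice_area) / 2.0

# Idealized elastic-perfectly-plastic loop: +/-0.02 rad at +/-500 kN*m.
rotations = [-0.02, 0.02, 0.02, -0.02]
moments = [500.0, 500.0, -500.0, -500.0]
energy = dissipated_energy(rotations, moments)  # kN*m*rad per cycle
```

A fatter loop (larger enclosed area) means more energy absorbed by the fuse per cycle, which is the property being optimized alongside plastic strain and rotational stiffness.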

Keywords: rocking mechanism, see-saw motion, finite element analysis, hysteretic behavior

Procedia PDF Downloads 408
526 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico

Authors: Gustavo Cruz-Bello

Abstract:

Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was used to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local risk management authorities to facilitate the process. Two workshops were conducted. In the first, a poster-sized print of a high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees, and fishermen, aged between 23 and 58. Participants were first asked to locate emblematic places on the image to familiarize themselves with it. They were then asked to locate areas that flood and the buildings they use as refuges, to list the actions they usually take to reduce vulnerability, and to collectively propose others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment, and a printed version of the map was reviewed by local risk management experts, who validated the feasibility of the proposed actions. In the second workshop, the information was returned to the community for feedback. Additionally, a survey of one household per block was applied to obtain socioeconomic, prevention, and adaptation data. The workshop information was contrasted, through t and chi-squared tests, with the survey data to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among areas with a higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it.
However, there was no consistent relationship between regularly flooded areas and people's average years of education, household services, or house modifications against heavy rains. The participatory cartography intervention nevertheless made participants aware of their vulnerability and led them to reflect collectively on actions that can reduce flood disasters. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
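The chi-squared tests mentioned above compare observed counts in a contingency table (e.g., education level vs living in a flooded area) against the counts expected under independence. A minimal sketch of the Pearson statistic follows; the counts are hypothetical, not the survey's data.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table (rows of counts)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: households by education level and flood exposure.
observed = [[20, 10],   # higher education: flooded / not flooded
            [10, 20]]   # lower education:  flooded / not flooded
stat = chi_square_stat(observed)  # compare against a chi-square critical value
```

A statistic near zero, as the study effectively found, means the observed counts match the independence assumption, i.e., no consistent association between the variables.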

Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability

Procedia PDF Downloads 114
525 Quality Care from the Perception of the Patient in Ambulatory Cancer Services: A Qualitative Study

Authors: Herlin Vallejo, Jhon Osorio

Abstract:

Quality is a concept that has gained importance in different scenarios over time, especially in the area of health. The nursing staff is one of the actors that contributes most to the care process and to user satisfaction in quality evaluations. However, until now there have been few tools to measure the quality of care in specialized settings. Patients receiving ambulatory cancer treatments can face various problems that increase their level of distress, so improving the quality of outpatient care for cancer patients should be a priority for oncology nursing; yet the patient's experience of care in these services has been little investigated. The purpose of this study was to understand patients' perceptions of quality care in outpatient chemotherapy services. A qualitative, exploratory, descriptive study was carried out with nine patients older than 18 years, diagnosed with cancer, treated in the outpatient chemotherapy rooms of the Institute of Cancerology, with a minimum of three months of treatment with curative intent, and who had given their informed consent. The number of participants was determined by theoretical saturation, and selection was by convenience. Unstructured interviews were conducted, recorded, and transcribed, and the information was analyzed using content analysis. Three categories emerged that reflect patients' perceptions of quality care: patient-centered care, care with love, and the effects of care. Patients highlighted situations showing that care is centered on them, incorporating institutional elements, infrastructure, and qualities of care, and also described what, in contrast, constitutes inappropriate care.
Care with love as a perception of quality care means for patients that the nursing staff must have certain qualities; patients perceive caring with love as a family affair, with limits on care with love and on the nurse-patient relationship. Quality care has effects on both the patient and the nursing staff. One of the most relevant effects was the confidence that the patient develops towards the nurse, in addition to transforming unrealistic images about cancer treatment with chemotherapy. On the other hand, care with quality generates a commitment to self-care and is a facilitator in the transit of oncological disease and chemotherapeutic treatment, but from the perception of a healing transit. It is concluded that quality care, from the perception of patients, is a construction that goes beyond structural issues and is related to an institutional culture of quality that is reflected in the attitude of the nursing staff and in acts of care that have positive effects on the experience of chemotherapy and disease. These results contribute to a better understanding of how quality care is built from the perception of patients and open a range of possibilities for the future development of an individualized instrument for evaluating the quality of care from the perception of patients with cancer.

Keywords: nursing care, oncology service hospital, quality management, qualitative studies

Procedia PDF Downloads 137
524 Religious Discourses and Their Impact on Regional and Global Geopolitics: A Study of Deobandi in India, Pakistan and Afghanistan

Authors: Soumya Awasthi

Abstract:

The spread of radical ideology is possible not merely through public meetings, protests, and mosques but also in schools, seminaries, and madrasas. The rhetoric created around the relationship between religion and conflict has been the primary factor for instigating global conflicts when religion is used to achieve broader objectives. There have been numerous cases of religion-driven conflict around the world, be it the Jewish revolt between 66 AD and 628 AD, the Crusades from 1119 AD, the Cold War period, or the rise of right-wing politics in India. Some of the major developments that reiterate the significance of religion in contemporary times include: (1) the emergence of theocracy in Iran in 1979; (2) the resurgence of worldwide religious beliefs in the post-Soviet space; (3) the emergence of transnational terrorism shaped by a twisted depiction of Islam by the self-proclaimed protectors of the religion. This paper is therefore premised on the argument that religion has always found itself on the periphery of the discipline of International Relations (IR) and has received less attention than it deserves. The focus of the topic is on the discourses of ‘Deobandi’ and their impact both on the geopolitics of the region, particularly in India, Pakistan, and Afghanistan, and at the global level. Discourse is a mechanism in use since time immemorial and has been a key tool to mobilise masses against the ruling authority. With the help of field surveys and qualitative and analytical methods of research in religion and international relations, it has been found that there are numerous madrassas that are running illegally and are unregistered. These seminaries operate in Khyber-Pakhtunkhwa and the Federally Administered Tribal Areas (FATA). 
During the Soviet invasion of Afghanistan in 1979, the relation between religion and geopolitics was highlighted by a sudden spread of radical ideas, finding support from countries like Saudi Arabia (which funded the campaign) and Pakistan (which organised the Saudi funds and set up training camps, both educational and military). During this period there was a strong influence of Wahabi theology on the madrasas, which started with Deoband philosophy and later became a mix of Wahabi (influenced by Ahmad Ibn Hannabal and Ibn Taimmiya) and Deobandi philosophy, tending towards fundamentalism. Later, regional geopolitics influenced global geopolitics through incidents such as the attack on the US in 2001 and the bomb blasts in the U.K., Indonesia, Turkey, and Israel in the 2000s. In the midst of all this, several scholars pointed to Deobandi philosophy as one of the drivers in the creation of armed Islamic groups in Pakistan and Afghanistan. Hence this paper attempts to understand how Deobandi religious discourses originating from India have changed over the decades, and who the agents of such changes are. It throws light on Deoband from pre-independence till date to create a narrative around the religious discourses and Deobandi philosophy and its spillover impact on the map of global and regional security.

Keywords: Deobandi School of Thought, radicalization, regional and global geopolitics, religious discourses, Wahabi movement

Procedia PDF Downloads 218
523 A Novel Nanocomposite Membrane Designed for the Treatment of Oil/Gas Produced Water

Authors: Zhaoyang Liu, Detao Qin, Darren Delai Sun

Abstract:

The onshore production of oil and gas (for example, shale gas) generates large quantities of wastewater, referred to as ‘produced water’, which contains high contents of oils and salts. The direct discharge of produced water, if not appropriately treated, can be toxic to the environment and human health. Membrane filtration has been deemed an environmentally friendly and cost-effective technology for treating oily wastewater. However, conventional polymeric membranes have the drawbacks of either a low salt rejection rate or a high membrane fouling tendency when treating oily wastewater. In recent years, forward osmosis (FO) membrane filtration has emerged as a promising technology with its unique advantages of low operating pressure and a lower membrane fouling tendency. However, until now there has been no report of FO membranes specially designed and fabricated for treating oily and salty produced water. In this study, a novel nanocomposite FO membrane was developed specifically for treating oil- and salt-polluted produced water. By leveraging recent advances in nanomaterials and nanotechnology, this nanocomposite FO membrane was designed with two layers: an underwater oleophobic selective layer on top of a nanomaterial-infused polymeric support layer. Graphene oxide (GO) nanosheets were added to the polymeric support layer because they can optimize the pore structure of the support layer, potentially leading to high water flux for FO membranes. In addition, polyvinyl alcohol (PVA) hydrogel was selected as the selective layer because hydrated, chemically-crosslinked PVA hydrogel is capable of simultaneously rejecting oil and salt. After the nanocomposite FO membranes were fabricated, the membrane structures were systematically characterized by TEM, FESEM, XRD, ATR-FTIR, surface zeta potential, and contact angle (CA) measurements. 
The membrane performance for treating produced waters was tested using TOC, COD, and ion chromatography measurements, and the working mechanism of the new membrane was analyzed. Very promising experimental results have been obtained. The incorporation of GO nanosheets reduces the internal concentration polarization (ICP) effect in the polymeric support layer. The structural parameter (S value) of the new FO membrane is reduced by 23%, from 265 ± 31 μm to 205 ± 23 μm. The membrane tortuosity (τ value) is decreased by 20%, from 2.55 ± 0.19 to 2.02 ± 0.13, which contributes to the decrease of the S value. Moreover, the highly-hydrophilic, chemically-cross-linked hydrogel selective layer presents a high antifouling property under saline oil/water emulsions. Compared with a commercial FO membrane, this new FO membrane possesses three times higher water flux, higher removal efficiencies for oil (>99.9%) and salts (>99.7% for multivalent ions), and a significantly lower membrane fouling tendency (<10%). To our knowledge, this is the first report of a nanocomposite FO membrane with the combined merits of high salt rejection, high oil repellency, and high water flux for treating onshore oil/gas produced waters. Given its outstanding performance and ease of fabrication, this novel nanocomposite FO membrane has great application potential in the wastewater treatment industry.
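The reported drop in the structural parameter can be checked against the classic FO relation S = t·τ/ε (support thickness times tortuosity over porosity), which the abstract does not state explicitly and is assumed here. A minimal sketch with the reported figures shows that the ~20% fall in tortuosity accounts for most of the ~23% fall in S:

```python
def structural_parameter(thickness_um: float, tortuosity: float, porosity: float) -> float:
    """Classic FO structural parameter S = t * tau / epsilon (micrometres)."""
    return thickness_um * tortuosity / porosity

# Values reported in the study: S fell from 265 to 205 um, tau from 2.55 to 2.02.
s_before, s_after = 265.0, 205.0
tau_before, tau_after = 2.55, 2.02

s_reduction = (s_before - s_after) / s_before          # ~0.226, the ~23% quoted
tau_reduction = (tau_before - tau_after) / tau_before  # ~0.208, the ~20% quoted

# With thickness and porosity held fixed, S scales linearly with tau,
# so the drop in tortuosity drives most of the drop in S.
print(f"S reduced by {s_reduction:.1%}, tau reduced by {tau_reduction:.1%}")
```

Any residual gap between the two reductions would come from changes in support thickness or porosity, which the abstract does not quantify.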

Keywords: nanocomposite, membrane, polymer, graphene oxide

Procedia PDF Downloads 250
522 Structural Health Assessment of a Masonry Bridge Using Wireless Sensors

Authors: Nalluri Lakshmi Ramu, C. Venkat Nihit, Narayana Kumar, Dillep

Abstract:

Masonry bridges are iconic heritage transportation infrastructure throughout the world. The continuous increase in traffic loads and speeds has kept engineers in a dilemma about their structural performance and capacity. Hence, the research community urgently needs to propose an effective assessment methodology and validate it on real bridges. The presented research aims to assess the structural health of an eighty-year-old masonry railway bridge in India using wireless accelerometer sensors. The bridge consists of 44 spans, each 24.2 m long, with 13 m tall piers laid on well foundations. To calculate the dynamic characteristic properties of the bridge, ambient vibrations were recorded from moving traffic at various speeds, and these are compared with a three-dimensional numerical model developed in finite element-based software. Conclusions about the weaker or deteriorated piers are drawn from the comparison of frequencies obtained from the experimental tests conducted on alternate spans. Masonry is a heterogeneous, anisotropic material made up of incoherent components (such as bricks, stones, and blocks). It is most likely the earliest widely used construction material. Masonry bridges, typically constructed of brick and stone, are still a key feature of the world's highway and railway networks. There are 147,523 railway bridges across India, and about 15% of these are masonry bridges, around 80 to 100 years old. The cultural significance of masonry bridges cannot be overstated. These bridges are considered complicated structures due to the presence of arches, spandrel walls, piers, foundations, and soils. 
Traffic loads and vibrations, wind, rain, frost attack, high/low temperature cycles, moisture, earthquakes, river overflows, floods, scour, and soil movement under their foundations may cause material deterioration, opening of joints and ring separation in arch barrels, cracks in piers, loss of brick-stones and mortar joints, and distortion of the arch profile. A few NDT tests, such as the flat jack test, are employed to assess the homogeneity and durability of masonry structures; however, these tests have many drawbacks. This paper explores a modern approach to the structural health assessment of masonry structures through vibration analysis, frequencies, and stiffness properties.
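The core idea of the paper, flagging weaker or deteriorated spans by comparing measured natural frequencies against finite-element predictions, can be sketched as follows. The frequencies and the 10% tolerance below are hypothetical illustrations, not the authors' actual data or criterion:

```python
def flag_deteriorated_spans(measured_hz, modelled_hz, tol=0.10):
    """Return indices of spans whose measured natural frequency falls more than
    `tol` (fractional) below the FE-model prediction - a common stiffness-loss
    indicator, since natural frequency scales with sqrt(stiffness / mass)."""
    flagged = []
    for i, (f_meas, f_model) in enumerate(zip(measured_hz, modelled_hz)):
        if (f_model - f_meas) / f_model > tol:
            flagged.append(i)
    return flagged

# Hypothetical first-mode frequencies for five alternate spans (Hz).
modelled = [4.2, 4.2, 4.2, 4.2, 4.2]
measured = [4.1, 4.0, 3.5, 4.15, 4.05]   # span 2 reads well below the model
print(flag_deteriorated_spans(measured, modelled))  # -> [2]
```

In practice the comparison would be run mode by mode after updating the FE model with measured material properties, but the pass/fail logic per span is essentially this.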

Keywords: masonry bridges, condition assessment, wireless sensors, numerical analysis, modal frequencies

Procedia PDF Downloads 171
521 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context

Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue

Abstract:

The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people’s behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives into the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today’s marketplace. As there are many IoT solutions available today, the amount of data is tremendous, and the challenge for companies is to understand which solutions to focus on, how to prioritise their data, and how to differentiate themselves from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to illustrate how a company’s competitive advantage can be preserved through smart city solutions. 
The results provide insights into the different factors and considerations involved in creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges these factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts that define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and gain a strong foothold in the smart city business.

Keywords: data analytics, smart cities, competitive advantage, internet of things

Procedia PDF Downloads 136
520 Consensus Reaching Process and False Consensus Effect in a Problem of Portfolio Selection

Authors: Viviana Ventre, Giacomo Di Tollo, Roberta Martino

Abstract:

The portfolio selection problem involves the evaluation of many criteria that are difficult to compare directly and is characterized by uncertain elements. It can be modeled as a group decision problem in which several experts are invited to present their assessments. In this context, it is important to study and analyze the process of reaching a consensus among group members. Indeed, due to the various diversities among experts, reaching consensus is not always simple or easily achievable. Moreover, the concept of consensus is accompanied by the concept of false consensus, which is particularly interesting in the dynamics of group decision-making processes. False consensus can alter the evaluation and selection phase of the alternatives and is the consequence of the decision maker's inability to recognize that their preferences are conditioned by subjective structures. The present work aims to investigate the dynamics of consensus attainment in a group decision problem in which equivalent portfolios are proposed. In particular, the study analyzes the impact of the subjective structure of the decision-maker during the evaluation and selection phase of the alternatives. The experimental framework is divided into three phases. In the first phase, experts are asked to evaluate the characteristics of all portfolios individually, without peer comparison, arriving independently at the selection of the preferred portfolio. The experts' evaluations are used to obtain individual Analytical Hierarchical Processes that define the weight each expert gives to all criteria with respect to the proposed alternatives. This step provides insight into how the decision maker's decision process develops, step by step, from goal analysis to alternative selection. The second phase describes the decision maker's state through Markov chains. 
In fact, the individual weights obtained in the first phase can be reviewed and described as transition weights from one state to another. Thus, with the construction of the individual transition matrices, the possible next state of the expert is determined from the individual weights at the end of the first phase. Finally, the experts meet, and the process of reaching consensus is analyzed by considering the single individual state obtained at the previous stage and the false consensus bias. The work contributes to the study of the impact of subjective structures, quantified through the Analytical Hierarchical Process, and how they combine with the false consensus bias in group decision-making dynamics and the consensus reaching process in problems involving the selection of equivalent portfolios.
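A minimal sketch of the two building blocks described above, AHP priority weights from a pairwise-comparison matrix and one Markov transition step, is shown below. The 3x3 comparison matrix (Saaty's 1-9 scale) and the deliberately degenerate transition matrix are hypothetical illustrations, not the study's data:

```python
def ahp_weights(pairwise, iters=100):
    """Priority weights of a reciprocal pairwise-comparison matrix via power
    iteration (principal eigenvector, normalised to sum to 1)."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

def next_state(state, transition):
    """One Markov step: row vector of state probabilities times a
    row-stochastic transition matrix."""
    n = len(state)
    return [sum(state[i] * transition[i][j] for i in range(n)) for j in range(n)]

# Hypothetical comparison of 3 equivalent portfolios by one expert.
P = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_weights(P)            # roughly [0.65, 0.23, 0.12]
# Reuse the normalised weights as transition rows: wherever the expert is,
# the next state is drawn from the same preference distribution.
T = [w, w, w]
print(next_state([1.0, 0.0, 0.0], T))
```

Here the next state from any starting portfolio equals the weight vector itself; in the study each expert would instead have an individually constructed transition matrix derived from their first-phase weights.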

Keywords: analytical hierarchical process, consensus building, false consensus effect, Markov chains, portfolio selection problem

Procedia PDF Downloads 93
519 Fucoidan: A Potent Seaweed-Derived Polysaccharide with Immunomodulatory and Anti-inflammatory Properties

Authors: Tauseef Ahmad, Muhammad Ishaq, Mathew Eapen, Ahyoung Park, Sam Karpiniec, Vanni Caruso, Rajaraman Eri

Abstract:

Fucoidans are complex, fucose-rich sulfated polymers found in brown seaweeds. Fucoidans are popular around the world, particularly in the nutraceutical and pharmaceutical industries, due to their promising medicinal properties. Fucoidans have been shown to have a variety of biological activities, including anti-inflammatory effects. They are known to inhibit inflammatory processes through a variety of mechanisms, including enzyme inhibition and selectin blockade. Inflammation is part of the complicated biological response of living systems to damaging stimuli, and it plays a role in the pathogenesis of a variety of disorders, including arthritis, inflammatory bowel disease, cancer, and allergies. In the current investigation, various fucoidan extracts from Undaria pinnatifida, Fucus vesiculosus, Macrocystis pyrifera, Ascophyllum nodosum, and Laminaria japonica were assessed for inhibition of pro-inflammatory cytokine production (TNF-α, IL-1β, and IL-6) in an LPS-induced human macrophage cell line (THP-1) and human peripheral blood mononuclear cells (PBMCs). Furthermore, we also sought to catalogue these extracts based on their anti-inflammatory effects in the current in-vitro cell model. Materials and Methods: To assess the cytotoxicity of fucoidan extracts, the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) cell viability assay was performed. Furthermore, a dose-response for fucoidan extracts was performed in LPS-induced THP-1 cells and PBMCs after pre-treatment for 24 hours, and levels of TNF-α, IL-1β, and IL-6 cytokines were measured using Enzyme-Linked Immunosorbent Assay (ELISA). Results: The MTT cell viability assay demonstrated that fucoidan extracts exhibited no evidence of cytotoxicity in THP-1 cells or PBMCs after 48 hours of incubation. The results of the sandwich ELISA revealed that all fucoidan extracts suppressed cytokine production in LPS-stimulated PBMCs and human THP-1 cells in a dose-dependent manner. 
Notably, at lower concentrations, the lower molecular weight fucoidan (5-30 kDa) extract from Macrocystis pyrifera was a highly efficient inhibitor of pro-inflammatory cytokines. Fucoidan extracts from all species, including Undaria pinnatifida, Fucus vesiculosus, Macrocystis pyrifera, Ascophyllum nodosum, and Laminaria japonica, exhibited significant anti-inflammatory effects. These findings on several fucoidan extracts provide insight into strategies for improving their efficacy against inflammation-related diseases. Conclusion: In the current research, we have successfully catalogued several fucoidan extracts based on their efficiency in downregulating the key pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) in LPS-induced macrophages and PBMCs, which are prospective targets in human inflammatory illnesses. Further research would provide more information on the mechanism of action, allowing fucoidan to be tested for therapeutic purposes as an anti-inflammatory medication.
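The dose-dependent suppression read out by ELISA can be expressed as a simple percent-inhibition calculation relative to the LPS-only control. The cytokine concentrations below are hypothetical placeholders, not the study's measured values:

```python
def percent_inhibition(lps_control_pg_ml: float, treated_pg_ml: float) -> float:
    """Cytokine suppression relative to the LPS-only control (ELISA readout)."""
    return 100.0 * (lps_control_pg_ml - treated_pg_ml) / lps_control_pg_ml

def is_dose_dependent(inhibitions) -> bool:
    """True if inhibition rises monotonically with extract concentration."""
    return all(a <= b for a, b in zip(inhibitions, inhibitions[1:]))

# Hypothetical TNF-alpha readings (pg/mL): LPS control vs three increasing
# fucoidan pre-treatment doses.
control = 1200.0
treated = [1000.0, 700.0, 400.0]
inh = [percent_inhibition(control, t) for t in treated]
print([round(x, 1) for x in inh], is_dose_dependent(inh))
```

The same calculation, repeated per cytokine and per extract, is what underlies the cataloguing of extracts by anti-inflammatory efficiency.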

Keywords: fucoidan, PBMCs, THP-1, TNF-α, IL-1β, IL-6, inflammation

Procedia PDF Downloads 59
518 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

Determining soil elemental content and distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to a laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in-situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographic coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed to acquire gamma spectra, process and sort data, calculate soil elemental content, and combine these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper briefly describes the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and operating modes when conducting field surveys. 
Soil elemental distribution maps resulting from field surveys will be presented and discussed. These maps were similar to maps created on the basis of chemical analysis and to soil moisture measurements determined by soil electrical conductivity. The maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural soil elemental field mapping.
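The gridding step behind such maps, estimating elemental content between georeferenced measurement points, can be sketched with inverse-distance weighting. This is a common interpolation choice but an assumption here: the abstract does not specify which ArcGIS interpolator was used, and the sample values below are hypothetical:

```python
def idw_estimate(x, y, samples, power=2.0):
    """Inverse-distance-weighted estimate of elemental content at (x, y) from
    georeferenced (x, y, value) samples - a simple stand-in for the gridding
    step behind an ArcGIS-style soil carbon map."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return v                        # exactly on a sample point
        w = 1.0 / d2 ** (power / 2.0)       # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Hypothetical carbon contents (% by weight) at four GPS-derived positions (m).
samples = [(0.0, 0.0, 1.2), (10.0, 0.0, 1.6), (0.0, 10.0, 1.0), (10.0, 10.0, 1.4)]
print(round(idw_estimate(5.0, 5.0, samples), 2))  # centre: equidistant, so mean
```

Evaluating this estimator over a regular grid of field coordinates yields the raster from which the elemental distribution map is rendered.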

Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy

Procedia PDF Downloads 134
517 Shock-Induced Densification in Glass Materials: A Non-Equilibrium Molecular Dynamics Study

Authors: Richard Renou, Laurent Soulard

Abstract:

Lasers are widely used in glass material processing, from waveguide fabrication to channel drilling. The gradual damage of glass optics under UV lasers is also an important issue to be addressed. Glass materials (including metallic glasses) can undergo a permanent densification under laser-induced shock loading. Despite increased interest in interactions between lasers and glass materials, little is known about the structural mechanisms involved under shock loading. For example, the densification process in silica glasses occurs between 8 GPa and 30 GPa; above 30 GPa, the glass material returns to its original density after relaxation. Investigating these unusual mechanisms in silica glass will provide an overall better understanding of glass behaviour. Non-Equilibrium Molecular Dynamics (NEMD) simulations were carried out in order to gain insight into the microscopic structure of silica glass under shock loading. The shock was generated by a piston impacting the glass material at high velocity (from 100 m/s up to 2 km/s). Periodic boundary conditions were used in the directions perpendicular to the shock propagation to model an infinite system; one-dimensional shock propagations were therefore studied. Simulations were performed with the STAMP code developed by the CEA. Silica glass has a very specific structure: oxygen atoms around silicon atoms are organized in tetrahedrons, and those tetrahedrons are linked and tend to form rings inside the structure. A significant number of empty cavities is also observed in glass materials. In order to understand how shock loading impacts the overall structure, the tetrahedrons, the rings, and the cavities were thoroughly analysed. An elastic behaviour was observed when the shock pressure is below 8 GPa. This is consistent with the Hugoniot Elastic Limit (HEL) of 8.8 GPa estimated experimentally for silica glasses. Behind the shock front, the ring structure and the cavity distribution are impacted. 
The ring volume is smaller, and most cavities disappear with increasing shock pressure. However, the tetrahedral structure is not affected. The elasticity of the glass structure is therefore related to ring shrinking and cavity closing. Above the HEL, the shock pressure is high enough to impact the tetrahedral structure: an increasing number of hexahedrons and octahedrons are formed with increasing pressure, and the large rings break to form smaller ones. The cavities are, however, not impacted, as most cavities are already closed under an elastic shock. After the material relaxation, a significant number of hexahedrons and octahedrons is still observed, and most of the cavities remain closed. The overall ring distribution after relaxation is similar to the equilibrium distribution. The densification process is therefore related to two structural mechanisms: a change in the coordination of silicon atoms and cavity closing. To sum up, non-equilibrium molecular dynamics simulations were carried out to investigate silica behaviour under shock loading. Analysing the structure led to interesting conclusions about the elastic and densification mechanisms in glass materials. This work will be completed with a detailed study of the mechanism occurring above 30 GPa, where no sign of densification is observed after the material relaxation.
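The coordination change identified above as one densification mechanism is typically quantified by counting oxygen neighbours within a cutoff distance of each silicon atom. A minimal sketch on a toy configuration follows; the cutoff and coordinates are illustrative, not taken from the actual simulations:

```python
def coordination_numbers(si_positions, o_positions, cutoff=2.0):
    """Count oxygen neighbours within `cutoff` (angstrom) of each silicon atom.
    A shift of the count from 4 (tetrahedra) toward 5-6 (hexahedra, octahedra)
    signals the densification mechanism described in the text."""
    counts = []
    for sx, sy, sz in si_positions:
        n = 0
        for ox, oy, oz in o_positions:
            d2 = (sx - ox) ** 2 + (sy - oy) ** 2 + (sz - oz) ** 2
            if d2 <= cutoff ** 2:
                n += 1
        counts.append(n)
    return counts

# Toy configuration: one Si with four O at ~1.6 angstrom (an ideal tetrahedron)
# plus one distant O that should not be counted as a neighbour.
si = [(0.0, 0.0, 0.0)]
o = [(1.6, 0.0, 0.0), (-1.6, 0.0, 0.0), (0.0, 1.6, 0.0), (0.0, -1.6, 0.0),
     (5.0, 5.0, 5.0)]
print(coordination_numbers(si, o))  # -> [4]
```

In a production analysis the same count would be histogrammed over all silicon atoms per snapshot (with periodic images included) to track the 4-to-6 coordination shift across the shock front.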

Keywords: densification, molecular dynamics simulations, shock loading, silica glass

Procedia PDF Downloads 222