Search results for: dynamic modelling
628 Cyclic Stress and Masing Behaviour of Modified 9Cr-1Mo at RT and 300 °C
Authors: Preeti Verma, P. Chellapandi, N.C. Santhi Srinivas, Vakil Singh
Abstract:
Modified 9Cr-1Mo steel is widely used for structural components such as heat exchangers, pressure vessels and steam generators in nuclear reactors. It is also a candidate material for future metallic-fuel sodium-cooled fast breeder reactors because of its high thermal conductivity, lower thermal expansion coefficient, microstructural stability, high resistance to irradiation void swelling and higher resistance to stress corrosion cracking in water-steam systems compared to austenitic stainless steels. Steam generator components operating at elevated temperatures are often subjected to repeated thermal stresses arising from the temperature gradients that occur on heating and cooling during start-ups and shutdowns or during variations in the operating conditions of a reactor. These transient thermal stresses give rise to low cycle fatigue (LCF) damage. In the present investigation, strain-controlled low cycle fatigue tests were conducted at room temperature and 300 °C in the normalized and tempered condition, using total strain amplitudes in the range from ±0.25% to ±0.5% at a strain rate of 10⁻² s⁻¹. The cyclic stress response at high strain amplitudes (±0.31% to ±0.5%) showed initial softening, followed by hardening for a few cycles and subsequent softening until failure. The extent of softening increased with strain amplitude and temperature. Depending on the strain amplitude of the test, the stress-strain hysteresis loops displayed Masing behaviour at higher strain amplitudes and non-Masing behaviour at lower strain amplitudes at both temperatures. This is the opposite of the Masing and non-Masing behaviour reported earlier for various materials. Low cycle fatigue damage was evaluated in terms of the plastic strain and plastic strain energy approaches at room temperature and 300 °C.
The plastic strain energy approach was found to match the experimental fatigue lives more closely, particularly at 300 °C, where dynamic strain aging was observed.
Keywords: modified 9Cr-1Mo steel, low cycle fatigue, Masing behavior, cyclic softening
Procedia PDF Downloads 443
627 Research on Energy Field Intervening in Lost Space Renewal Strategy
Authors: Tianyue Wan
Abstract:
Lost space, a concept proposed by Roger Trancik, is space that has long been unused and is in decline. In his book Finding Lost Space: Theories of Urban Design, Trancik defines lost spaces as anti-traditional spaces that are unpleasant, need to be redesigned, and offer no benefit to the environment or to users. They have no defined boundaries and do not connect the various landscape elements in a coherent way. With the rapid urbanization of China, the blind spots of urban renewal have become chaotic lost spaces incompatible with this rapid development. Lost space therefore urgently needs to be reconstructed against the background of infill development and reduction planning in China. The formation of lost space is also an invisible division of social hierarchy. This paper tries to break down social class divisions and the estrangement between people through the regeneration of lost space, ultimately enhancing vitality, rebuilding a sense of belonging, and creating a continuous open public space for local people. Based on the concepts of lost space and the energy field, this paper clarifies the significance of the energy field in lost space renovation. It then introduces the energy field into lost space, using the magnetic field in physics as a prototype. The construction of the energy field is supported by space theory, spatial morphology analysis theory, public communication theory, urban diversity theory and city image theory. Taking Lingjiao Park in Wuhan, China as an example, this paper chooses the lost space on the west side of the park as the research object. Based on the current condition of this site, energy intervention strategies are proposed from four aspects: natural ecology, space rights, intangible cultural heritage and infrastructure configuration.
Six specific lost space renewal methods are used in this work: “riveting”, “breakthrough”, “radiation”, “inheritance”, “connection” and “intersection”. After the renovation, active crowds will be re-introduced into the space. The integration of activities and space creates a sense of place, improves the walking experience, restores the vitality of the space, and provides a reference for the reconstruction of lost space in the city.
Keywords: dynamic vitality intervention, lost space, space vitality, sense of place
Procedia PDF Downloads 112
626 The Impact of Geopolitical Risks and the Oil Price Fluctuations on the Kuwaiti Financial Market
Authors: Layal Mansour
Abstract:
The aim of this paper is to identify whether oil price volatility or geopolitical risks can predict future financial stress periods or economic recessions in Kuwait. We construct the first Financial Stress Index for Kuwait (FSIK), which includes informative vulnerability indicators of the main financial sectors: the banking sector, the equities market, and the foreign exchange market. The study covers the period from 2000 to 2020, so it includes the two most devastating recent world economic crises accompanied by oil price fluctuations: the Covid-19 pandemic crisis and the Ukraine-Russia war. All data are taken from the Central Bank of Kuwait, the World Bank, the IMF, DataStream, and the Federal Reserve Bank of St. Louis. The variables are computed as percentage growth rates, then standardized and aggregated into one index using the variance-equal weights method, the one most frequently used in the literature. The graphical FSIK analysis provides detailed, dated information to policymakers on how internal financial stability depends on internal policy and events such as government elections or resignations. It also shows how decisions by monetary authorities or internal policymakers to relieve personal loans or to increase or decrease the public budget trigger internal financial instability. The empirical analysis under vector autoregression (VAR) models shows the dynamic causal relationship between oil price fluctuations and the Kuwaiti economy, which relies heavily on the oil price. Similarly, using VAR models to assess the impact of global geopolitical risks on Kuwaiti financial stability, the results reveal whether Kuwait is exposed to or sheltered from geopolitical risks. The Financial Stress Index serves as a guide for macroprudential regulators to understand the weaknesses of the overall Kuwaiti financial market and economy, regardless of the Kuwaiti dinar's strength and exchange rate stability.
It helps policymakers predict future stress periods and thus prepare alternative cushions to confront possible future financial threats.
Keywords: Kuwait, financial stress index, causality test, VAR, oil price, geopolitical risks
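The variance-equal weights aggregation described in the abstract can be sketched as follows: each indicator is standardized to zero mean and unit variance so that every sub-market contributes equal variance to the index. The series names, lengths and values below are invented for illustration and are not the study's data:

```python
import numpy as np

def financial_stress_index(indicators):
    """Aggregate raw stress indicators into a single index using the
    variance-equal-weights method: standardize each series to zero mean
    and unit variance, then average the standardized series."""
    z = [(x - x.mean()) / x.std() for x in indicators]
    return np.mean(z, axis=0)

# Toy monthly series for three sub-markets (banking, equity, FX).
rng = np.random.default_rng(0)
banking = rng.normal(2.0, 0.5, 240)   # e.g. bank-sector growth rate
equity = rng.normal(0.0, 3.0, 240)    # e.g. stock index volatility
fx = rng.normal(1.0, 0.2, 240)        # e.g. exchange-market pressure

fsik = financial_stress_index([banking, equity, fx])
print(fsik.mean())  # near zero by construction
```

Peaks of such an index above some threshold (e.g. one standard deviation) are then read as stress episodes and dated against known events.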
Procedia PDF Downloads 81
625 Dynamic Ambulance Deployment to Reduce Ambulance Response Times Using Geographic Information Systems
Authors: Masoud Swalehe, Semra Günay
Abstract:
Developed countries lose many lives to non-communicable diseases compared to their developing counterparts. The effects of these diseases are mostly sudden, manifesting shortly before death or a dangerous attack, which has consolidated the significance of the emergency medical system (EMS) as one of the vital areas of healthcare service delivery. The primary objective of this research is to reduce the ambulance response times (RT) of the Eskişehir province EMS, since a number of studies have established a relationship between ambulance response times and the survival chances of patients, especially out-of-hospital cardiac arrest (OHCA) victims. It has been found that patients who receive out-of-hospital medical attention within a few (4) minutes of cardiac arrest, thanks to low ambulance response times, stand a higher chance of survival than those who wait longer (more than 12 minutes) for out-of-hospital medical care because of higher ambulance response times. The study will make use of geographic information systems (GIS) technology to dynamically reallocate ambulance resources according to demand and time so as to reduce ambulance response times. The geospatial-time distribution of ambulance calls (demand) will be used as a basis for optimal ambulance deployment under a system status management (SSM) strategy, covering more demand with the same number of ambulances and thereby reducing response times. Drive-time polygons will be used to derive time-specific facility coverage areas and to suggest additional candidate facility sites to which ambulance resources can be moved to serve higher demand, making use of network analysis techniques. Emergency ambulance call data from 1 January 2014 to 31 December 2014, obtained from the Eskişehir province health directorate, will be used in this study.
This study will focus on the reduction of ambulance response times, a key Emergency Medical Services performance indicator.
Keywords: emergency medical services, system status management, ambulance response times, geographic information system, geospatial-time distribution, out of hospital cardiac arrest
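The SSM-style redeployment described above can be illustrated with a toy greedy maximal-coverage heuristic: pick, one at a time, the candidate post that covers the most yet-uncovered call demand within the response-time threshold. The posts, drive times and demand figures below are invented; a real implementation would derive travel times from GIS drive-time polygons and network analysis:

```python
import numpy as np

def greedy_cover(time_matrix, demand, k, threshold=4.0):
    """Greedy maximal-coverage heuristic: repeatedly choose the candidate
    ambulance post covering the most uncovered call demand within the
    response-time threshold (minutes). Returns chosen posts and the
    fraction of total demand covered."""
    n_sites, n_calls = time_matrix.shape
    covered = np.zeros(n_calls, dtype=bool)
    chosen = []
    for _ in range(k):
        gains = [demand[(time_matrix[s] <= threshold) & ~covered].sum()
                 for s in range(n_sites)]
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= time_matrix[best] <= threshold
    return chosen, demand[covered].sum() / demand.sum()

# Hypothetical 5 candidate posts x 8 demand clusters (drive minutes).
rng = np.random.default_rng(1)
times = rng.uniform(1, 12, size=(5, 8))
demand = rng.integers(10, 100, size=8).astype(float)
posts, coverage = greedy_cover(times, demand, k=2)
print(posts, round(coverage, 2))
```

Re-running this per time-of-day demand slice is the essence of dynamic (as opposed to static) deployment.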
Procedia PDF Downloads 300
624 Input and Interaction as Training for Cognitive Learning: Variation Sets Influence the Sudden Acquisition of Periphrastic estar 'to be' + verb + -ndo*
Authors: Mary Rosa Espinosa-Ochoa
Abstract:
Some constructions appear suddenly in children’s speech and are productive from the beginning. These constructions are supported by others, previously acquired, with which they share semantic and pragmatic features. Thus, for example, the acquisition of the passive voice in German is supported by other constructions with which it shares the lexical verb sein (“to be”). The same occurs in Spanish in the acquisition of the progressive aspectual periphrasis estar (“to be”) + verb root + -ndo (present participle), which is supported by locative constructions acquired earlier with the same verb. The periphrasis shares with the locative constructions not only the lexical verb estar but also pragmatic relations: both constructions can be used to answer the question ¿Dónde está? (“Where is he/she/it?”), whose answer could be either Está aquí (“He/she/it is here”) or Se está bañando (“He/she/it is taking a bath”). This study is a corpus-based analysis of two children (1;08-2;08) and the input directed to them; it proposes that the pragmatic and semantic support from previously acquired constructions comes from the input, during interaction with others. This hypothesis is based on an analysis of constructions with estar, whose use to express temporal change (which differentiates it from its counterpart ser [“to be”]) occurs in variation sets, similar to those described by Küntay and Slobin (2002), that allow the child to perceive the change of place experienced by the nouns that function as its grammatical subject. For example, at different points during a bath, the mother says El jabón está aquí (“The soap is here”) at the beginning of the bath; five minutes later, the soap has moved, and the mother says el jabón está ahí (“the soap is there”); the soap moves again later on and she says el jabón está abajo de ti (“the soap is under you”). “The soap” is the grammatical subject of all of these utterances.
The Spanish verb + -ndo encodes the progressive phase aspect of a dynamic state that generates a token reading, and it combines with the verb estar to encode this aspect. It is proposed here that the phases experienced in interaction with the adult, in events involving the verb estar, allow a child to generate this dynamicity and token reading of the verb + -ndo. In this way, children begin to produce the periphrasis suddenly and productively, even though neither the periphrasis nor the verb + -ndo itself is frequent in adult speech.
Keywords: child language acquisition, input, variation sets, Spanish language
Procedia PDF Downloads 150
623 Morphological Investigation of Sprawling Along Emerging Peri-Urban Transit Corridor of Mowe-Ibafo Axis of the Lagos Megacity Region
Authors: Folayele Oluyemi Akindeju, Tobi Joseph Ajoro
Abstract:
The city, a complex system exhibiting chaotic behaviour, is in a state of constant change in response to prevailing social, economic, environmental and technological factors. Without adequate investigation and control mechanisms to tame the sporadic nature of growth in most urban areas of cities in developing regions, organic sprawling visibly manifests with its attendant problems, most especially in peri-urban areas. The Lagos Megacity region in southwest Nigeria, one of the largest megacities in the world, contends with the challenges of sprawling in its peri-urban areas, especially along emerging transit corridors. Given the seemingly unpredictable nature of this growth, this paper attempts a morphological investigation into the growth of peri-urban settlements along the Mowe-Ibafo transit corridor of the megacity region over a period of three decades (1984-2014). Because conventional methods of urban analysis cannot deal with the unpredictability of complex urban forms such as peri-urban areas, this study applies fractal analysis and regression analysis, correlating population density with fractal dimension values to establish the pattern and nature of growth. It was deduced that the dynamic urban expansion of the last three decades resulted in an urban change rate of about 74.2% between 1984 and 2000 and of 63.4% between 2000 and 2014. With an R² of 1 between fractal dimension and population density, the regression model indicates a positive correlation between fractal dimension (D) and population density (pop/km²): the increase in population density from 5740 pop/km² to 8060 pop/km², and its later decrease to 7580 pop/km², accompanies an increase in the fractal dimension of urban growth from 1.451 in 1984 to 1.853 in 2014.
This justifies the ability to predict and determine the nature and direction of growth of complex entities, and it substantially supports the need for an adequate policy framework for sustainable urban planning and infrastructural provision in peri-urban areas.
Keywords: fractal analysis, Lagos Megacity, peri-urban, sprawling, urban morphology
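Box counting is one common way to estimate the fractal dimension used in this kind of urban-form analysis: count the boxes occupied by built-up cells at successively finer box sizes and fit the log-log slope. The sketch below uses an invented binary raster and is illustrative only; it does not reproduce the study's computation:

```python
import numpy as np

def box_counting_dimension(grid):
    """Estimate the box-counting (fractal) dimension of a binary
    built-up raster by counting occupied boxes at dyadic box sizes
    and fitting the slope of log N(s) against log(1/s)."""
    n = grid.shape[0]                      # assume square, power of two
    sizes, counts = [], []
    s = n
    while s >= 1:
        boxes = grid.reshape(n // s, s, n // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts)), 1)
    return slope

# Sanity check: a completely built-up square has dimension 2.
filled = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(filled), 2))  # 2.0
```

Sprawling settlement patterns typically yield values between 1 and 2, consistent with the 1.451 to 1.853 range reported above.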
Procedia PDF Downloads 174
622 Cultivating Social-Ecological Resilience, Harvesting Biocultural Resistance in Southern Andes
Authors: Constanza Monterrubio-Solis, Jose Tomas Ibarra
Abstract:
The fertile interdependence of social-ecological systems reveals itself in the interactions between native forests and seeds, home gardens, kitchens, foraging activities, local knowledge, and food practices, creating particular flavors and food meanings as part of cultural identities within territories. Resilience in local food systems, from a relational perspective, can be understood as the balance between persistence and adaptability to change. Food growing, preparation, and consumption are constantly changing and adapting as expressions of the agency of female and male indigenous peoples and peasants. This paper explores expressions of resilience in local food systems in the La Araucanía region of Chile, namely diversity, redundancy, buffer capacity, modularity, self-organization, governance, learning, equity, and decision-making. Applying ethnographic research methods (participant observation, focus groups, and semi-structured interviews), this work reflects on the experience developed through work with Mapuche women cultivating home gardens in the region since 2012; it looks at material and symbolic elements of resilience in local indigenous food systems. Local food systems do indeed show indicators of social-ecological resilience. Biocultural memory is expressed in affection for particular flavors and recipes, in the cultural importance of seeds and reciprocity networks, and in accurate knowledge of the indicators of the seasons and weather, which have allowed local food systems to thrive on a strong cultural foundation. Furthermore, these elements turn into biocultural resistance in the face of current institutional pressures for rural specialization, processes of cultural assimilation such as agroecosystem and diet homogenization, and structural threats to the diversity and freedom of native seeds.
Thus, the resilience-resistance dynamic shown by the social-ecological systems of the southern Andes is expressed daily in local food systems and flavors and is key to diverse and culturally sound social-ecological health.
Keywords: biocultural heritage, indigenous food systems, social-ecological resilience, southern Andes
Procedia PDF Downloads 136
621 Testing the Life Cycle Theory on the Capital Structure Dynamics of Trade-Off and Pecking Order Theories: A Case of Retail, Industrial and Mining Sectors
Authors: Freddy Munzhelele
Abstract:
Setting: empirical research has shown that the life cycle theory has an impact on firms’ financing decisions, particularly dividend pay-outs. The life cycle theory posits that as a firm matures, it reaches a level and capacity at which it distributes more cash as dividends, whereas young firms prioritise investment opportunity sets and their financing and thus pay little or no dividends. Research on firms’ financing decisions has also demonstrated, among other things, the adoption of the trade-off and pecking order theories in the dynamics of firms’ capital structure. The trade-off theory holds that firms balance the costs and benefits of debt in seeking a favourable debt structure, while the pecking order theory holds that firms prefer a hierarchical order when choosing financing sources. The life cycle hypothesis as an explanation of financial managers’ decisions about capital structure dynamics appears to be an interesting link, yet this link has been neglected in corporate finance research. Exploring it empirically would enhance financial decision-making alternatives immensely, since no conclusive evidence has yet been found on the dynamics of capital structure. Aim: the aim of this study is to examine the impact of the life cycle theory on the trade-off and pecking order capital structure dynamics of firms listed in the retail, industrial and mining sectors of the JSE. These sectors are among the key contributors to GDP in the South African economy. Design and methodology: following the postpositivist research paradigm, the study is quantitative in nature and utilises secondary data obtained from the financial statements of the sampled firms for the period 2010-2022. The firms’ financial statements will be extracted from the IRESS database.
Since the data will be in panel form, a combination of static and dynamic panel data estimators will be used to analyse them. The overall data analyses will be done using the STATA program. Value added: this study directly investigates the link between the life cycle theory and the dynamics of capital structure decisions, particularly the trade-off and pecking order theories.
Keywords: life cycle theory, trade-off theory, pecking order theory, capital structure, JSE listed firms
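As a rough illustration of the static panel estimators mentioned above (the study itself will use STATA), a fixed-effects within estimator can be sketched as follows: demean each variable within each firm, then run pooled OLS on the demeaned data. The toy panel, variable names and coefficient are invented for illustration:

```python
import numpy as np

def within_estimator(y, X, ids):
    """Static fixed-effects (within) panel estimator: demean y and X
    within each firm, then run pooled OLS on the demeaned data."""
    y_d = y.astype(float)
    X_d = X.astype(float)
    for firm in np.unique(ids):
        m = ids == firm
        y_d[m] = y_d[m] - y_d[m].mean()
        X_d[m] = X_d[m] - X_d[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X_d, y_d, rcond=None)
    return beta

# Toy panel: 3 firms x 6 years; leverage falls with profitability,
# on top of firm-specific fixed effects.
rng = np.random.default_rng(2)
ids = np.repeat([0, 1, 2], 6)
effects = np.array([0.2, 0.5, 0.8])[ids]        # firm fixed effects
profit = rng.normal(size=18)
leverage = effects - 0.3 * profit + rng.normal(0, 0.01, 18)
beta = within_estimator(leverage, profit.reshape(-1, 1), ids)
print(beta)  # close to -0.3
```

Dynamic specifications with a lagged dependent variable would instead require GMM-type estimators (e.g. Arellano-Bond), since the within transformation is then biased in short panels.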
Procedia PDF Downloads 61
620 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania
Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea
Abstract:
A number of studies show that extreme air temperatures affect mortality related to cardiovascular diseases, particularly among elderly people. In Romania, summer thermal discomfort, expressed by the Universal Thermal Climate Index (UTCI), is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is located. Urban characteristics such as high building density and reduced green areas amplify the increase in air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect raises air temperatures relative to the surrounding areas, and this increase is particularly important during summer heat waves. In this context, the researchers performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a bi-dimensional cross-basis function and flexible natural cubic spline functions, with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, for modelling both the exposure-response and lagged-response dimensions. This analysis was first applied to the present climate. Extrapolating the exposure-response associations beyond the observed data then allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative, under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. For RCP 8.5, the results show an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future in comparison with the present climate (2090-2100 vs.
2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified by age, the ensemble-averaged increase of the heat-attributable mortality fraction for elderly people (> 75 years) in the future is even higher (6.5%). These findings reveal the necessity of carefully planning urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). Part of the work performed by one of the authors has received funding from the European Union’s Horizon 2020 research and innovation programme, from the project EXHAUSTION, under grant agreement No 820655.
Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality
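The cross-basis idea behind a DLNM can be sketched in greatly simplified form: an exposure basis (here low-order polynomials instead of the natural cubic splines the study uses) is crossed with lag strata, and the resulting design matrix is fitted by regression (here OLS on simulated data instead of a quasi-Poisson GLM). Everything below is invented for illustration:

```python
import numpy as np

def cross_basis(temp, max_lag=3, deg=2):
    """Simplified DLNM-style cross-basis: polynomial exposure terms
    crossed with one stratum per lag. Returns an OLS design matrix."""
    cols = []
    for lag in range(max_lag + 1):
        lagged = np.roll(temp, lag)
        lagged[:lag] = temp[0]            # pad the start of the series
        for p in range(1, deg + 1):
            cols.append(lagged ** p)
    return np.column_stack(cols)

# Simulate: deaths rise above a 28 °C threshold with a one-day lag.
rng = np.random.default_rng(3)
temp = rng.normal(20, 8, 1000)
heat = np.maximum(np.roll(temp, 1) - 28, 0)
deaths = 30 + 0.5 * heat + rng.normal(0, 1, 1000)

X = cross_basis(temp)
X1 = np.column_stack([np.ones(len(temp)), X])
coef, *_ = np.linalg.lstsq(X1, deaths, rcond=None)
print(coef.shape)  # intercept + 4 lags x 2 polynomial terms
```

In the real model, the fitted cross-basis coefficients are summed over lags to obtain the overall exposure-response curve, from which heat- and cold-attributable fractions are computed.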
Procedia PDF Downloads 128
619 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building
Authors: Zehra Aybike Kılıç, Alpin Köknel Yener
Abstract:
Daylight is a powerful driver for improving human health, enhancing productivity and creating sustainable solutions by minimizing energy demand. A proper daylighting system not only provides a pleasant and attractive visual and thermal environment but also reduces lighting energy consumption and the heating/cooling energy load, through optimization of the aperture size, glazing type and solar control strategy, the major design parameters of daylighting system design. Particularly in high-rise buildings, where large openings that allow maximum daylight and views out are preferred, evaluating daylight performance in terms of the major parameters of the building envelope design becomes crucial for ensuring occupants’ comfort and improving energy efficiency. Moreover, examining the daylighting design of high-rise residential buildings is increasingly necessary, considering the share of residential buildings in the construction sector, the duration of occupation and the changing space requirements. This study aims to identify a proper daylighting design solution, considering window area, glazing type and solar control strategy, for a high-rise residential building in terms of visual comfort and lighting energy efficiency. The dynamic simulations are carried out with DIVA for Rhino version 4.1.0.12. The results are evaluated using Daylight Autonomy (DA) to demonstrate daylight availability in the space and Daylight Glare Probability (DGP) to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption in each scenario is analyzed to determine the optimum solution, the one that reduces lighting energy consumption while optimizing daylight performance.
The results revealed that reducing lighting energy consumption while providing visual comfort conditions in buildings is only possible with proper daylighting design decisions regarding glazing type, transparency ratio and solar control devices.
Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort
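Daylight Autonomy, one of the metrics used above, has a simple definition: the share of occupied hours in which daylight alone meets a target illuminance on the work plane. The illuminance series, occupancy schedule and 300 lx target below are illustrative assumptions, not the study's simulation output:

```python
import numpy as np

def daylight_autonomy(illuminance, occupied, threshold=300.0):
    """Daylight Autonomy (DA): fraction of occupied hours in which
    daylight alone meets the target work-plane illuminance (lux)."""
    return float((illuminance[occupied] >= threshold).mean())

# Toy year: 8760 hourly lux values, occupancy 08:00-18:00 daily.
rng = np.random.default_rng(4)
lux = rng.gamma(shape=2.0, scale=250.0, size=8760)
hour = np.arange(8760) % 24
occupied = (hour >= 8) & (hour < 18)
print(round(daylight_autonomy(lux, occupied), 2))
```

In practice the hourly illuminance series comes from annual climate-based simulation (e.g. the DIVA/Radiance workflow named above), evaluated per sensor point rather than for a single series.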
Procedia PDF Downloads 176
618 Physicochemical Properties of Pea Protein Isolate (PPI)-Starch and Soy Protein Isolate (SPI)-Starch Nanocomplexes Treated by Ultrasound at Different pH Values
Authors: Gulcin Yildiz, Hao Feng
Abstract:
Soybean proteins are the most widely used and researched proteins in the food industry. Due to soy allergies among consumers, however, alternative legume proteins with similar functional properties have been studied in recent years; these alternative proteins are also expected to have a price advantage over soy proteins. One such protein that has shown good potential for food applications is pea protein. Besides its favorable functional properties, pea protein also contains fewer anti-nutritional substances than soy protein. However, a comparison of the physicochemical properties of ultrasound-treated pea protein isolate (PPI)-starch nanocomplexes and soy protein isolate (SPI)-starch nanocomplexes has not been well documented. This study was undertaken to investigate the effects of ultrasound treatment on the physicochemical properties of PPI-starch and SPI-starch nanocomplexes. Pea protein isolate (85% pea protein) provided by Roquette (Geneva, IL, USA) and soy protein isolate (SPI, Pro-Fam® 955) obtained from the Archer Daniels Midland Company were adjusted to different pH levels (2-12) and treated with 5 minutes of ultrasonication (100% amplitude) to form complexes with starch. The soluble protein content was determined by the Bradford method using BSA as the standard. The turbidity of the samples was measured using a spectrophotometer (Lambda 1050 UV/VIS/NIR Spectrometer, PerkinElmer, Waltham, MA, USA). The volume-weighted mean diameters (D4,3) of the soluble proteins were determined by dynamic light scattering (DLS). The emulsifying properties of the proteins were evaluated via the emulsion stability index (ESI) and the emulsion activity index (EAI). Both the soy and pea protein isolates showed a U-shaped solubility curve as a function of pH, with high solubility above the isoelectric point and low solubility below it. Increasing the pH from 2 to 12 resulted in increased solubility for both the SPI- and PPI-starch complexes.
The pea nanocomplexes showed greater solubility than the soy ones. The SPI-starch nanocomplexes showed better emulsifying properties, as determined by the emulsion stability index (ESI) and emulsion activity index (EAI), due to SPI’s high solubility and high protein content; the PPI had similar or better emulsifying properties at certain pH values. The ultrasound treatment significantly decreased the particle sizes of both kinds of nanocomplex: at all pH levels and for both proteins, the droplet sizes were found to be below 300 nm. The present study clearly demonstrated that applying ultrasonication under different pH conditions significantly improved the solubility and emulsifying properties of the SPI and PPI. The PPI exhibited better solubility and emulsifying properties than the SPI at certain pH levels.
Keywords: emulsifying properties, pea protein isolate, soy protein isolate, ultrasonication
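The EAI and ESI used above are commonly computed from turbidity (absorbance) measurements via the Pearce and Kinsella method. The sketch below uses one common form of those formulas; the absorbance values, concentration and oil fraction are invented and are not data from this study:

```python
def emulsion_indices(a0, a10, c, phi=0.25, dilution=100, dt=10.0):
    """Turbidimetric emulsifying indices (Pearce & Kinsella method).
    a0, a10: absorbance at 500 nm, 0 and 10 min after homogenization;
    c: protein concentration (g/mL); phi: oil volume fraction;
    dilution: dilution factor; dt: time interval (min).
    All input values here are illustrative assumptions."""
    eai = (2 * 2.303 * a0 * dilution) / (c * phi * 1e4)  # m^2/g
    esi = a0 * dt / (a0 - a10)                           # min
    return eai, esi

eai, esi = emulsion_indices(a0=0.52, a10=0.41, c=0.001)
print(round(eai, 1), round(esi, 1))
```

A higher EAI indicates more interfacial area stabilized per gram of protein, while a higher ESI indicates a slower decay of the emulsion's turbidity over time.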
Procedia PDF Downloads 319
617 Study of the Design and Simulation Work for an Artificial Heart
Authors: Mohammed Eltayeb Salih Elamin
Abstract:
This study discusses the concept of an artificial heart using engineering concepts from fluid mechanics and the characteristics of non-Newtonian fluids, with the purpose of serving heart patients and improving aspects of their lives. According to the World Health Organization (WHO), diseases of the heart and blood vessels are the leading cause of death in the world: statistics show that heart disease accounts for about 30% of deaths worldwide, so heart failure can simply be considered the number one cause of death in the entire world. Since heart transplantation has become very difficult and is not always available, the idea of the artificial heart has become essential, and it is important to participate in its development by searching for and finding the weak points of earlier designs, in the hope of improving them for the good of humanity. In this study, a pump was designed to pump blood through the human body, taking into account all the factors that would allow it to replace the human heart, so that it works with the same characteristics and efficiency as the human heart. The pump was designed on the principle of the diaphragm pump. Three models of blood were derived from the real characteristics of blood, and all of these models were simulated in order to study the effect of the pumping work on the fluid. The properties of the pump were then studied using Ansys 15 software to simulate blood flow inside the pump and the stresses it will experience. The 3D geometry modeling was done using SolidWorks, and the geometries were then imported into the Ansys design modeler used during the pre-processing procedure. The solver used throughout the study is Ansys FLUENT, a tool for analysing fluid flow problems; the general well-known term for this branch of science is computational fluid dynamics (CFD).
Basically, the Design Modeler is used during the pre-processing procedure, a crucial step before starting on the fluid flow problem. Key operations include geometry creation, which specifies the domain of the fluid flow problem; mesh generation, meaning discretization of the domain so that the governing equations can be solved at each cell; and specification of the boundary zones where the boundary conditions for the problem are applied. Finally, the pre-processed work is saved in the Ansys Workbench for future continuation of the work.
Keywords: artificial heart, computational fluid dynamics, heart chamber, design, pump
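Blood's non-Newtonian character, mentioned above, is often represented in CFD solvers such as FLUENT by a shear-thinning viscosity law. The sketch below implements the Carreau model with parameter values commonly quoted in the literature for blood; these are assumptions for illustration, not necessarily the values used in the study's setup:

```python
import numpy as np

def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345,
                      lam=3.313, n=0.3568):
    """Carreau model for blood's shear-thinning viscosity (Pa*s):
    mu = mu_inf + (mu0 - mu_inf) * (1 + (lam*g)^2)^((n-1)/2),
    with zero-shear viscosity mu0, infinite-shear viscosity mu_inf,
    time constant lam (s) and power-law index n (literature values)."""
    g = np.asarray(shear_rate, dtype=float)
    return mu_inf + (mu0 - mu_inf) * (1 + (lam * g) ** 2) ** ((n - 1) / 2)

rates = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])  # shear rate, 1/s
print(carreau_viscosity(rates))  # viscosity falls as shear rate rises
```

At high shear rates the apparent viscosity approaches the Newtonian plasma-like limit, which is why a constant-viscosity blood model is only acceptable in strongly sheared regions of a pump.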
Procedia PDF Downloads 459
616 Thulium Laser Design and Experimental Verification for NIR and MIR Nonlinear Applications in Specialty Optical Fibers
Authors: Matej Komanec, Tomas Nemecek, Dmytro Suslov, Petr Chvojka, Stanislav Zvanovec
Abstract:
Nonlinear phenomena in the near- and mid-infrared region are attracting scientific attention, mainly due to the possibilities of supercontinuum generation and its subsequent use in ultra-wideband applications such as absorption spectroscopy or optical coherence tomography. Thulium-based fiber lasers provide access to high-power ultrashort pump pulses in the vicinity of 2000 nm, which can readily be exploited for various nonlinear applications. The paper presents a simulation and experimental study of a pulsed thulium laser for near-infrared (NIR) and mid-infrared (MIR) nonlinear applications in specialty optical fibers. The first part of the paper discusses the thulium laser, which is based on a gain-switched seed laser and a series of amplification stages to obtain output peak powers on the order of kilowatts for pulses shorter than 200 ps full-width at half-maximum. The pulsed thulium laser is first studied in simulation software, focusing on seed laser properties. A thulium-based pre-amplification stage is then discussed, with a focus on low-noise signal amplification, high signal gain and the elimination of pulse distortions during pulse propagation in the gain medium. Following the pre-amplification stage, a second gain stage is evaluated, incorporating a shorter thulium fiber with an increased rare-earth dopant ratio. Lastly, a power-booster stage is analyzed, in which peak powers of kilowatts should be achieved. The results of the analytical study are then validated by the experimental campaign, and the simulation model is corrected based on the parameters of real components: real insertion losses, cross-talk, polarization dependencies, etc. are included. The second part of the paper evaluates the utilization of nonlinear phenomena and their specific features in the vicinity of 2000 nm, compared to e.g. 1550 nm, and presents supercontinuum modelling based on the pulsed output of the thulium laser.
Supercontinuum generation simulation is performed and provides reasonably accurate results once the fiber dispersion profile is precisely defined and the fiber nonlinearity, input pulse shape and peak power are known; the latter are assured by experimental measurement of the studied pulsed thulium laser. The supercontinuum simulation model is put in relation to the designed and characterized specialty optical fibers, which are discussed in the third part of the paper. The focus is placed on silica and mainly on non-silica fibers (fluoride, chalcogenide, lead-silicate) in their conventional, microstructured or tapered variants. Parameters such as the dispersion profile and nonlinearity of the exploited fibers were characterized either with an accurate model developed in COMSOL software or by direct experimental measurement to achieve even higher precision. The paper then combines all three studied topics and presents a possible application of such a thulium pulsed laser system working with specialty optical fibers.
Keywords: nonlinear phenomena, specialty optical fibers, supercontinuum generation, thulium laser
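Supercontinuum modelling of this kind is typically based on a (generalized) nonlinear Schrödinger equation solved with a split-step Fourier method. The sketch below shows that numerical scheme in its simplest scalar form, with assumed order-of-magnitude parameters; it is an illustration of the method, not the fibers or laser characterized in the paper, and a full model would add higher-order dispersion, self-steepening and Raman terms.

```python
import numpy as np

# Minimal split-step Fourier sketch of scalar pulse propagation (NLSE).
# All parameter values are assumed for illustration only.
beta2 = -20e-27          # GVD, s^2/m (anomalous dispersion near 2000 nm, assumed)
gamma = 2e-3             # nonlinear coefficient, 1/(W*m) (assumed)
P0, T0 = 1e3, 100e-12    # ~kW peak power, ~100 ps pulse (orders of magnitude from the paper)

n, Twin = 1024, 2e-9
T = np.linspace(-Twin/2, Twin/2, n, endpoint=False)
w = 2*np.pi*np.fft.fftfreq(n, T[1] - T[0])   # angular-frequency grid
A0 = np.sqrt(P0)/np.cosh(T/T0)               # sech input pulse
A = A0.copy()

L, steps = 10.0, 200                         # 10 m of fiber in 200 steps
dz = L/steps
half_disp = np.exp(0.25j*beta2*w**2*dz)      # linear (dispersion) half-step operator
for _ in range(steps):
    A = np.fft.ifft(half_disp*np.fft.fft(A)) # dispersion, half step
    A = A*np.exp(1j*gamma*np.abs(A)**2*dz)   # self-phase modulation, full step
    A = np.fft.ifft(half_disp*np.fft.fft(A)) # dispersion, half step

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2  # broadened output spectrum
```

With these settings the accumulated nonlinear phase is about 20 rad, so the output spectrum is strongly broadened by self-phase modulation while pulse energy is conserved.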
Procedia PDF Downloads 321
615 Experimental and Modelling Performances of a Sustainable Integrated System of Conditioning for Bee-Pollen
Authors: Andrés Durán, Brian Castellanos, Marta Quicazán, Carlos Zuluaga-Domínguez
Abstract:
Bee-pollen is an apiculture-derived food product with a growing appreciation among consumers, given its remarkable nutritional and functional composition, in particular protein (24%), dietary fiber (15%), phenols (15 – 20 GAE/g) and carotenoids (600 – 900 µg/g). These properties depend on the geographical and climatic characteristics of the region where it is collected. Several countries are recognized for their pollen production, e.g. China, the United States, Japan and Spain, among others. Beekeepers use traps at the entrance of the hive where bee-pollen is collected. After the removal of foreign particles and drying, this product is ready to be marketed. However, in countries located along the equator, the absence of seasons and a constant tropical climate throughout the year favor a more rapid spoilage of foods with elevated water activity. The climatic conditions also trigger the proliferation of microorganisms and insects. This, added to the fact that beekeepers usually do not have adequate processing systems for bee-pollen, leads to deficiencies in the quality and safety of the product. In contrast, the Andean region of South America, lying on the equator, typically has a high production of bee-pollen of up to 36 kg/year/hive, four times higher than in countries with marked seasons. This region also lies at altitudes above 2500 meters above sea level and receives extreme solar ultraviolet radiation all year long. As a defense mechanism against radiation, plants produce more secondary metabolites acting as antioxidant agents; hence, plant products such as bee-pollen contain remarkably more phenolics and carotenoids there than pollen collected elsewhere. Considering this, the improvement of bee-pollen processing facilities by technical modifications and the implementation of an integrated cleaning and drying system for the product in an apiary in the area was proposed.
The beehives were modified through the installation of alternative bee-pollen traps to avoid sources of contamination. The processing facility was modified according to Good Manufacturing Practices, implementing the combined use of a cabin dryer with temperature control and forced airflow and a greenhouse-type solar drying system. Additionally, for the separation of impurities, a cyclone-type system was implemented, complementary to screening equipment. With these modifications, a decrease in the content of impurities and in the microbiological load of bee-pollen was seen from the first stages, principally a reduction in the presence of molds and yeasts and in the number of impurities of animal origin. The use of the greenhouse solar dryer integrated with the cabin dryer allowed the processing of larger quantities of product with shorter waiting times in storage, reaching a moisture content of about 6% and a water activity lower than 0.6, values appropriate for the conservation of bee-pollen. Additionally, the contents of functional and nutritional compounds were not adversely affected; an increase of up to 25% in phenol content was even observed, together with non-significant decreases in carotenoid content and antioxidant activity.
Keywords: beekeeping, drying, food processing, food safety
Procedia PDF Downloads 104
614 Computed Tomography Myocardial Perfusion on a Patient with Hypertrophic Cardiomyopathy
Authors: Jitendra Pratap, Daphne Prybyszcuk, Luke Elliott, Arnold Ng
Abstract:
Introduction: Coronary CT angiography is a non-invasive imaging technique for the assessment of coronary artery disease and has high sensitivity and negative predictive value. However, the correlation between the degree of CT coronary stenosis and the haemodynamic significance of the obstruction is poor. The assessment of myocardial perfusion has mostly been undertaken by Nuclear Medicine (SPECT), but it is now possible to perform stress myocardial CT perfusion (CTP) scans quickly and effectively using CT scanners with high temporal resolution. Myocardial CTP is in many ways similar to the neuro perfusion imaging technique: radiopaque iodinated contrast is injected intravenously, transits the pulmonary and cardiac structures, and then perfuses through the coronary arteries into the myocardium. On the Siemens Force CT scanner, a myocardial perfusion scan is performed as a dynamic axial acquisition, where the scanner shuttles in and out every 1-3 seconds (heart rate dependent) to cover the heart in the z-plane. This is usually performed over 38 seconds. Report: A CT myocardial perfusion scan can be utilised to complement the findings of a CT coronary angiogram. Implementing a CT myocardial perfusion study as part of a routine CT coronary angiogram procedure provides a ‘One Stop Shop’ for diagnosis of coronary artery disease. This case study demonstrates that although the CT coronary angiogram was within normal limits, the perfusion scan provided additional, clinically significant information regarding the haemodynamics within the myocardium of a patient with hypertrophic obstructive cardiomyopathy (HOCM). This negated the need for further diagnostic studies such as cardiac echo or Nuclear Medicine stress tests. Conclusion: CT coronary angiography with adenosine stress myocardial CTP was utilised in this case to specifically exclude coronary artery disease in conjunction with assessing perfusion within the hypertrophic myocardium.
Adenosine stress myocardial CTP demonstrated reduced myocardial blood flow within the hypertrophic myocardium, while the coronary arteries did not show any obstructive disease. A CT coronary angiogram protocol that incorporates myocardial perfusion can provide diagnostic information on the haemodynamic significance of any coronary artery stenosis and has the potential to be a “One Stop Shop” for cardiac imaging.
Keywords: CT, cardiac, myocardium, perfusion
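Dynamic CTP acquisitions of this kind are commonly analysed with a maximum-slope model, in which myocardial blood flow (MBF) is estimated from the steepest upslope of the tissue time-attenuation curve divided by the peak arterial enhancement. A minimal sketch of that calculation with synthetic curves (invented values, not patient data or the vendor's exact algorithm):

```python
import numpy as np

# Hedged sketch: maximum-slope MBF estimate from a ~38 s dynamic CTP acquisition.
# Both curves below are synthetic Gaussians, for illustration only.
t = np.arange(0, 38, 1.5)                   # shuttle-mode sampling times, s

aif = 300*np.exp(-((t - 12)/4.0)**2)        # arterial input function, HU (assumed)
tissue = 40*np.exp(-((t - 16)/6.0)**2)      # myocardial time-attenuation curve, HU (assumed)

# Maximum slope model: MBF ~ max(d tissue/dt) / max(AIF)
slope = np.gradient(tissue, t)              # HU/s
mbf = slope.max()/aif.max()                 # per second, per voxel
mbf_ml = mbf*6000                           # ~mL/100 mL/min, assuming unit tissue density
```

In a reduced-perfusion territory the tissue upslope flattens while the AIF is unchanged, so the estimated MBF falls, which is the pattern the case report describes in the hypertrophic myocardium.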
Procedia PDF Downloads 132
613 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire
Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan
Abstract:
Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering only standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e. the residual cross-section of uncharred timber reduced additionally by a so-called zero-strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero-strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations on the zero-strength layer are given. Designers therefore often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero-strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero-strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a larger number of parametric fire curves. The zero-strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model.
The first step comprises a hygro-thermal model which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, in accordance with Eurocode 5, assumed to be at a fixed temperature of around 300 °C. Based on the performed study and observations, improved charring rates and a new thickness of the zero-strength layer for parametric fires are determined. The reduced cross-section method is thus substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero-strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and also experimental research in the future.
Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer
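For orientation, the standard-fire version of the reduced cross-section method that the paper re-examines can be sketched as follows. The charring rate and the 7 mm zero-strength layer are the nominal EN 1995-1-2 values for solid softwood under standard fire (check the code text before any design use), and the beam geometry is an assumed example, not the beam studied:

```python
# Sketch of the Eurocode 5 (EN 1995-1-2) reduced cross-section method under
# standard fire exposure. Nominal solid-softwood values; assumed geometry.
beta_n = 0.8      # notional charring rate, mm/min (solid softwood)
d0 = 7.0          # zero-strength layer, mm (the fixed value questioned in the paper)
k0 = 1.0          # full d0 applies for fire exposure >= 20 min

def effective_section(b, h, t_min):
    """Effective width/depth (mm) of a rectangular beam charred on three sides
    (two lateral faces and the bottom)."""
    d_char_n = beta_n*t_min            # notional char depth, mm
    d_ef = d_char_n + k0*d0            # effective reduction per exposed face, mm
    return b - 2*d_ef, h - d_ef

b_ef, h_ef = effective_section(140, 270, 30)   # 140 x 270 mm beam, 30 min of fire
```

For a parametric fire, the paper's point is precisely that the charring rate and the zero-strength layer thickness can no longer be taken as these fixed standard-fire values.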
Procedia PDF Downloads 168
612 Development of Immersive Virtual Reality System for Planning of Cargo Loading Operations
Authors: Eugene Y. C. Wong, Daniel Y. W. Mo, Cosmo T. Y. Ng, Jessica K. Y. Chan, Leith K. Y. Chan, Henry Y. K. Lau
Abstract:
The real-time planning visualisation, precise allocation and loading optimisation in air cargo load planning operations are increasingly important as more considerations are needed for dangerous cargo loading, the location of lithium batteries, weight declaration and limited aircraft capacity. The planning of unit load devices (ULDs) can often be carried out only in the limited number of hours before flight departure. A dynamic air cargo load planning system is proposed, with optimisation of the cargo load plan and visualisation of the planning results in virtual reality systems. The system aims to optimise cargo load planning and visualise the simulated load planning decisions in air cargo terminal operations. Adopting simulation tools, a Cave Automatic Virtual Environment (CAVE) and virtual reality technologies, the results of planning with reference to weight and balance, ULD dimensions, gateway, cargo nature and aircraft capacity are optimised and presented. The virtual reality system facilitates planning, operations, education and training. Staff in terminals are usually trained with a traditional push-approach demonstration and enormous manual paperwork. With the support of the newly customized immersive visualization environment, users can master the complex air cargo load planning techniques in problem-based training, with the instant result being immersively visualised. The virtual reality system is developed with three-dimensional (3D) projectors, screens, workstations, a truss system, 3D glasses, a demonstration platform and software. The content is focused on the cargo planning and loading operations in an air cargo terminal. The system can assist the decision-making process during cargo load planning in the complex operations of an air cargo terminal. The processes of cargo loading, cargo build-up, security screening, and system monitoring can be further visualised.
Scenarios are designed to support and demonstrate the daily operations of the air cargo terminal, including dangerous goods, pets and animals, and special cargo.
Keywords: air cargo load planning, optimisation, virtual reality, weight and balance, unit load device
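The allocation problem behind such a planner is a constrained packing/assignment task. As a hedged illustration only (a toy first-fit-decreasing heuristic over a single weight limit, not the optimisation engine described in the paper, which also handles balance, ULD dimensions and cargo nature), cargo pieces can be assigned to weight-limited ULDs like this:

```python
# Toy greedy assignment of cargo piece weights (kg) to ULDs with a shared
# weight capacity. First-fit-decreasing: heaviest pieces placed first.
def assign_to_ulds(weights, uld_capacity):
    ulds = []                                   # each ULD is a list of piece weights
    for w in sorted(weights, reverse=True):
        for uld in ulds:
            if sum(uld) + w <= uld_capacity:    # first ULD with spare capacity
                uld.append(w)
                break
        else:
            ulds.append([w])                    # no fit: open a new ULD
    return ulds

# Invented example shipment and capacity:
ulds = assign_to_ulds([800, 1200, 400, 700, 900], uld_capacity=1500)
```

A production planner would replace this heuristic with the multi-constraint optimisation the abstract describes, but the greedy version shows the shape of the allocation step that the VR system visualises.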
Procedia PDF Downloads 345
611 Sustainable Management of Gastronomy Experiences as a Mechanism to Promote the Local Economy
Authors: Marianys Fernandez
Abstract:
Gastronomic experiences generate a positive impact on the dynamisation of the economy when they are managed in a sustainable manner, given that they value the identity of the destination, strengthen cooperation between stakeholders in the sector, contribute to the preservation of gastronomic heritage, and encourage the implementation of competitive and sustainable public policies. With the analysis of the sustainable management of gastronomic experiences as its main aim, this study examines different elements associated with the promotion of the local economy. For this purpose, a systematic literature review was carried out to identify, select, synthesise, and evaluate the studies that respond to the research objectives, in order to select the most reliable articles and reduce the potential for bias within the review. To obtain reliable, up-to-date and relevant sources, the Web of Science and Scopus databases were searched using the following keywords: (1) experiential tourism, (2) gastronomy experience, (3) sustainable destination management, (4) sustainable gastronomy, (5) sustainable economy, which yielded a final list of 76 articles. The analysis of the literature allowed us to identify the elements most pertinent to the objective of the study: (a) the need for competitive policies in the gastronomic sector to promote sustainable local economic development, (b) incentives for cooperation between stakeholders in the gastronomic sector to guarantee the competitiveness of the destination, and (c) the proposal of sustainable standards in the gastronomic tourism sector that involve the local economy. Gastronomic experiences constitute a dynamic element of the local economy and promote sustainable tourism.
We can highlight that sustainability is a mechanism for the preservation of regional identity in the gastronomic sector through the valuation of the attributes of gastronomy, the promotion of the local economy, the strengthening of strategic alliances between the stakeholders of the gastronomic sector, and its relevant contribution to the competitiveness of the destination. The theoretical implications of the study are focused on suggesting planning, management, and policy criteria to support the sustainable management of gastronomic experiences in order to promote the local economy. In the practical context, the research integrates different approaches, tools, and methods to encourage the active participation of local actors in the promotion of the local economy through the sustainable management of gastronomic tourism.
Keywords: experiential tourism, gastronomy experience, sustainable destination management, sustainable economy, sustainable gastronomy
Procedia PDF Downloads 74
610 Cultural Heritage in Rural Areas: Added Value for Agro-Tourism Development
Authors: Djurdjica Perovic, Sanja Pekovic, Tatjana Stanovcic, Jovana Vukcevic
Abstract:
Tourism development in rural areas calls for a discussion of strategies that would attract more tourists. Several scholars argue that rural areas may become more attractive to tourists by leveraging their cultural heritage. The present paper explores the development of sustainable heritage tourism practices in the transitional societies of the Western Balkans, specifically targeting Montenegrin rural areas. It addresses sustainable tourism as a shift in business paradigm, enhancing the centrality of the host community, fostering encounters with local culture, customs and heritage, and minimizing the environmental and social impact. Disseminating part of the results of the interdisciplinary KATUN project, the paper explores the diversification of economic activities related to the cultural heritage of katuns (temporary settlements in Montenegrin mountainous regions where agricultural households stay with their livestock during the summer season) through sustainable agro-tourism. It addresses the role of heritage tourism in creating a more dynamic economy in under-developed mountain areas, new employment opportunities, sources of income for the local community and more balanced regional development, all based on the principle of sustainability. Based on substantial field research (including interviews with over 50 households and tourists, as well as a number of stakeholders such as relevant ministries, business communities and media representatives), the paper analyses the strategies employed in raising the awareness and katun-sensitivity of both national and international tourists and stimulating their interest in sustainable agriculture, rural tourism and the cultural heritage of Montenegrin mountain regions.
Studying the phenomena of responsible tourism and tourists’ consumerist consciousness in Montenegro through the development of katuns should allow the evaluation of stages of sustainability and cultural heritage awareness, closely intertwined with the EU integration processes in the country. Offering deeper insight into the relationship between rural tourism, sustainable agriculture and cultural heritage, the paper aims to understand whether the cultural heritage of the area adds value for agro-tourism development and in which context.
Keywords: heritage tourism, sustainable tourism, added value, Montenegro
Procedia PDF Downloads 329
609 Understanding the Factors Influencing Urban Ethiopian Consumers’ Consumption Intention of Spirulina-Supplemented Bread
Authors: Adino Andaregie, Isao Takagi, Hirohisa Shimura, Mitsuko Chikasada, Shinjiro Sato, Solomon Addisu
Abstract:
Context: The prevalence of undernutrition in developing countries like Ethiopia has become a significant issue. In this regard, finding alternative nutritional supplements seems to be a practical solution. Spirulina, a highly nutritious microalga, offers a valuable option, as it is a rich source of various essential nutrients. The study aimed to establish the factors affecting urban Ethiopian consumers' consumption intention of Spirulina-fortified bread. Research Aim: The primary purpose of this research is to identify the behavioral and socioeconomic factors impacting the intention of urban Ethiopian consumers to eat Spirulina-fortified bread. Methodology: The research utilized a quantitative approach wherein a structured questionnaire was created and distributed among 361 urban consumers via an online platform. The theory of planned behavior (TPB) was used as a conceptual framework, and confirmatory factor analysis (CFA) and structural equation modelling (SEM) were employed for data analysis. Findings: The study results revealed that attitude towards the supplement, subjective norms, and perceived behavioral control were the critical factors influencing the consumption intention of Spirulina-fortified bread. Moreover, age, physical exercise, and prior knowledge of Spirulina as a food ingredient were also found to have a significant influence. Theoretical Importance: The study contributes to the understanding of consumer behavior and the factors affecting purchase intentions for Spirulina-fortified bread in urban Ethiopia. The use of the TPB as a theoretical framework adds a vital aspect to the study, as it provides helpful insights into the factors affecting intentions towards this functional food. Data Collection and Analysis Procedures: The data collection process involved the creation of a structured questionnaire, which was distributed online to urban Ethiopian consumers.
Once the data were collected, CFA and SEM were utilized to analyze them and identify the factors impacting consumer behavior. Questions Addressed: The study aimed to address the following questions: (1) What are the behavioral and socioeconomic factors impacting urban Ethiopian consumers' consumption intention of Spirulina-fortified bread? (2) To what extent do attitude towards the supplement, subjective norms, and perceived behavioral control affect the purchase intention of Spirulina-fortified bread? (3) What role do age, education, income, physical exercise, and prior knowledge of Spirulina as a food ingredient play in the purchase intention of Spirulina-fortified bread among urban Ethiopian consumers? Conclusion: The study concludes that attitude towards the supplement, subjective norms, and perceived behavioral control are significant factors influencing urban Ethiopian consumers’ consumption intention of Spirulina-fortified bread. Moreover, age, education, income, physical exercise, and prior knowledge of Spirulina as a food ingredient also play a significant role in determining purchase intentions. The findings provide valuable insights for developing effective marketing strategies for Spirulina-fortified functional foods targeted at different consumer segments.
Keywords: spirulina, consumption, factors, intention, consumers, behavior
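For readers unfamiliar with this analysis pipeline, the structural part of a TPB model can be caricatured as a regression of intention on the three TPB predictors. The sketch below is a drastically simplified ordinary least-squares stand-in on synthetic data; the paper itself fits a full CFA measurement model plus SEM paths, and every data value and path coefficient here is invented:

```python
import numpy as np

# Simplified stand-in for the structural TPB paths: intention regressed on
# attitude, subjective norm and perceived behavioral control (PBC).
rng = np.random.default_rng(0)
n = 361                                  # sample size reported in the abstract
attitude = rng.normal(size=n)
subj_norm = rng.normal(size=n)
pbc = rng.normal(size=n)

# Synthetic "true" path weights (assumed, for illustration only)
intention = 0.5*attitude + 0.3*subj_norm + 0.2*pbc + 0.3*rng.normal(size=n)

X = np.column_stack([attitude, subj_norm, pbc])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)   # recovered path estimates
```

In the real study the predictors are latent variables measured by questionnaire items, which is why CFA precedes the structural model; the regression above only illustrates how path weights express the relative influence of the three TPB constructs.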
Procedia PDF Downloads 82
608 Rheometer Enabled Study of Tissue/biomaterial Frequency-Dependent Properties
Authors: Polina Prokopovich
Abstract:
Despite the well-established dependence of cartilage mechanical properties on the frequency of the applied load, most research in the field is carried out in either load-free or constant-load conditions because of the complexity of the equipment required for the determination of time-dependent properties. These simpler analyses provide a limited representation of cartilage properties, thus greatly reducing the impact of the information gathered and hindering the understanding of the mechanisms involved in this tissue's replacement, development and pathology. More complex techniques could represent better investigative methods, but their uptake in cartilage research is limited by the highly specialised training required and the cost of the equipment. There is, therefore, a clear need for alternative experimental approaches to cartilage testing that can be deployed in research and clinical settings using more user-friendly and financially accessible devices. Frequency-dependent material properties can be determined through rheometry, which is easy to use and requires only a relatively inexpensive device; we present how a commercial rheometer can be adapted to determine the viscoelastic properties of articular cartilage. Frequency-sweep tests were run at various applied normal loads on immature, mature and trypsinised (as a model of osteoarthritis) cartilage samples to determine the dynamic shear moduli (G*, G′, G″) of the tissues. Moduli increased with increasing frequency and applied load; mature cartilage generally had the highest moduli and GAG-depleted samples the lowest. Hydraulic permeability (KH) was estimated from the rheological data and decreased with applied load; GAG-depleted cartilage exhibited higher hydraulic permeability than either immature or mature tissue.
The rheometer-based methodology developed was validated by the close agreement of the rheometer-obtained cartilage characteristics (G*, G′, G″, KH) with results obtained with the more complex testing techniques available in the literature. Rheometry is comparatively simple, does not require highly capital-intensive machinery, and staff training is more accessible; thus, the use of a rheometer would represent a cost-effective approach for the determination of the frequency-dependent properties of cartilage, yielding more comprehensive and impactful results for both healthcare professionals and R&D.
Keywords: tissue, rheometer, biomaterial, cartilage
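In an oscillatory frequency sweep, the rheometer derives the dynamic moduli at each frequency from the measured stress amplitude, strain amplitude and phase lag. A minimal sketch of those standard relations, with assumed synthetic values rather than the cartilage data of the study:

```python
import numpy as np

# One frequency-sweep point (values assumed for illustration, not measured data):
gamma0 = 0.01          # shear strain amplitude (dimensionless)
sigma0 = 2.0e3         # shear stress amplitude, Pa
delta = 0.3            # phase lag between stress and strain, rad

G_star = sigma0/gamma0            # |G*|, magnitude of the complex shear modulus, Pa
G_prime = G_star*np.cos(delta)    # G', storage (elastic) modulus
G_dprime = G_star*np.sin(delta)   # G'', loss (viscous) modulus
tan_delta = G_dprime/G_prime      # loss tangent
```

A purely elastic sample gives delta = 0 (G″ = 0), a purely viscous one delta = π/2 (G′ = 0); viscoelastic tissues such as cartilage sit in between, and the frequency dependence of these quantities is what the sweep captures.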
Procedia PDF Downloads 81
607 Modeling Discrimination against Gay People: Predictors of Homophobic Behavior against Gay Men among High School Students in Switzerland
Authors: Patrick Weber, Daniel Gredig
Abstract:
Background and Purpose: Research has well documented the impact of discrimination and micro-aggressions on the wellbeing of gay men and, especially, adolescents. For the prevention of homophobic behavior against gay adolescents, however, the focus has to shift to those who discriminate: for the design and tailoring of prevention and intervention, it is important to understand the factors responsible for homophobic behavior such as, for example, verbal abuse. Against this background, the present study aimed to assess homophobic – in terms of verbally abusive – behavior against gay people among high school students. Furthermore, it aimed to establish the predictors of the reported behavior by testing an explanatory model. This model posits that homophobic behavior is determined by negative attitudes and knowledge. These variables are supposed to be predicted by the acceptance of traditional gender roles, religiosity, orientation toward social dominance, contact with gay men, and by the perceived expectations of parents, friends and teachers. These social-cognitive variables in turn are assumed to be determined by students’ gender, age, immigration background, formal school level, and the discussion of gay issues in class. Method: From August to October 2016, we visited 58 high school classes in 22 public schools in a county in Switzerland and asked the 8th and 9th year students on three formal school levels to participate in a survey about gender and gay issues. For data collection, we used an anonymous self-administered questionnaire filled in during class. Data were analyzed using descriptive statistics and structural equation modelling (generalized least squares estimation). The sample included 897 students, 334 in the 8th and 563 in the 9th year, aged 12–17, 51.2% female, 48.8% male, and 50.3% with an immigration background.
Results: A proportion of 85.4% of participants reported having made homophobic statements in the 12 months before the survey, 4.7% often or very often. Analysis showed that respondents’ homophobic behavior was predicted directly by negative attitudes (β=0.20), as well as by the acceptance of traditional gender roles (β=0.06), religiosity (β=–0.07), contact with gay people (β=0.10), the expectations of parents (β=–0.14) and friends (β=–0.19), gender (β=–0.22) and having a South-East-European or Western- and Middle-Asian immigration background (β=0.09). These variables were predicted, in turn, by gender, age, immigration background, formal school level, and the discussion of gay issues in class (GFI=0.995, AGFI=0.979, SRMR=0.0169, CMIN/df=1.199, p>0.213, adj. R2=0.384). Conclusion: The findings evidence a high prevalence of homophobic behavior among the responding high school students. The tested explanatory model explained 38.4% of the assessed homophobic behavior. However, the data did not fully support the model. Knowledge did not turn out to be a predictor of behavior. Except for the perceived expectations of teachers and orientation toward social dominance, the social-cognitive variables were not fully mediated by attitudes. Equally, gender and immigration background predicted homophobic behavior directly. These findings demonstrate the importance of prevention and also provide leverage points for interventions against anti-gay bias in adolescents – including in social work settings such as school social work, open youth work or foster care.
Keywords: discrimination, high school students, gay men, predictors, Switzerland
Procedia PDF Downloads 329
606 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems
Authors: Bronwen Wade
Abstract:
Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by performing statistical analyses of how attributes captured in administrative data are related to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more “objective” and “scientific” guides for decision-making instead of subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsycINFO, the ProQuest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed. Academic and grey literature were included. The review includes studies that use quasi-experimental methods as well as development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk Of Bias ASsessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for the development, validation, or re-validation studies. ROBINS-I (Risk Of Bias In Non-randomized Studies of Interventions) was used to assess bias and guide data extraction for the quasi-experimental studies identified. Due to heterogeneity among the papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. Eleven papers met the eligibility criteria, and each has an overall high risk of bias based on the PROBAST and ROBINS-I assessments.
This is deeply concerning, as major policy decisions have been made on the basis of a limited number of studies with a high risk of bias. The findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions of racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reasons for racial disproportionality in child welfare or to social definitions of fairness that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within the studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approaches to defining and addressing racial disproportionality, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice through their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.
Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality
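The review's point that conclusions depend on how disproportionality is mathematically defined can be made concrete with a toy example: the same synthetic screening decisions scored as an absolute rate difference versus a relative rate ratio, two common (and not interchangeable) definitions. All numbers are invented for illustration:

```python
# Same synthetic data, two mathematical definitions of disproportionality.
flagged = {"group_a": 120, "group_b": 30}     # cases flagged high-risk by a tool
total = {"group_a": 1000, "group_b": 500}     # cases screened per group

rate_a = flagged["group_a"]/total["group_a"]  # flag rate for group A
rate_b = flagged["group_b"]/total["group_b"]  # flag rate for group B

difference = rate_a - rate_b                  # absolute disparity (percentage points)
ratio = rate_a/rate_b                         # relative disparity (rate ratio)
```

A policy threshold set on the difference (e.g. "under 10 percentage points") could pass this tool while a threshold set on the ratio (e.g. "under 1.5x") would fail it, which is one reason the reviewed studies reach mixed verdicts.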
Procedia PDF Downloads 54
605 Surviral: An Agent-Based Simulation Framework for SARS-CoV-2 Outcome Prediction
Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer
Abstract:
History and the current outbreak of COVID-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life, and these areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considerate manner. Infectious disease modelling can support policy- and decision-makers with adequate information regarding the dynamics of the pandemic and therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named “surviral” for simulating infectious diseases is presented, with a special focus on SARS-CoV-2. The presented simulation package was used in Austria to model the SARS-CoV-2 outbreak from the beginning of 2020. Agent-based modelling is a relatively recent approach; as the systems under study grow ever more complex, the development of tools and frameworks, together with increasing computational power, has advanced the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants and utilized hospitalization capacity, as well as the availability of medical materials like ventilators, were integrated with a database system and used. The simulation results of the model were used for predicting the dynamics and possible outcomes and were used by the health authorities to decide on the measures to be taken in order to control the pandemic situation. The surviral package was implemented in the programming language Java, and the analytics were performed with RStudio.
During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%. The predictions were calculated as 10-day forecasts, and the state government and the hospitals were provided with these forecasts to support their decision-making. This ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced. Among other things, communities were quarantined based on the calculations, while, likewise in accordance with the calculations, curfews for the entire population were relaxed. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created at the federal state level as well as at the district and municipal levels, providing decision-makers with a solid information basis. The framework can be transferred to various infectious diseases and thus can serve as a basis for future monitoring.
Keywords: modelling, simulation, agent-based, SARS-CoV-2, COVID-19
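As a rough illustration of the agent-based idea (not the Surviral implementation itself, which is a far richer Java framework parametrized with real data feeds), a minimal susceptible-infected-recovered model in which each agent carries its own state might look like this; all parameter values are invented for illustration:

```python
import random

S, I, R = "susceptible", "infected", "recovered"

def step(agents, p_transmit, p_recover, contacts_per_agent, rng):
    """Advance the population one day: each infected agent makes random
    contacts that may transmit, then may recover."""
    infected = [i for i, a in enumerate(agents) if a == I]
    new_state = list(agents)
    for _ in infected:
        for _ in range(contacts_per_agent):
            j = rng.randrange(len(agents))
            if agents[j] == S and rng.random() < p_transmit:
                new_state[j] = I
    for i in infected:
        if rng.random() < p_recover:
            new_state[i] = R
    return new_state

def simulate(n_agents=1000, n_seed=5, days=120,
             p_transmit=0.05, p_recover=0.1, contacts_per_agent=8, seed=42):
    """Run the epidemic and return the final states plus the daily
    infected-count history (the curve a forecast would be built from)."""
    rng = random.Random(seed)
    agents = [I] * n_seed + [S] * (n_agents - n_seed)
    history = []
    for _ in range(days):
        agents = step(agents, p_transmit, p_recover, contacts_per_agent, rng)
        history.append(agents.count(I))
    return agents, history
```

Measures such as curfews or quarantines would enter such a model by lowering `contacts_per_agent` or `p_transmit` for subsets of agents.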
Procedia PDF Downloads 174
604 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How
Authors: Rachel Barker
Abstract:
The exponential pace of change in today's knowledge-intensive world has increasingly confronted organizations with formidable challenges. Organizations are thus introduced to an unfamiliar lexicon describing a new paradigm of who, what and how knowledge at the individual and organizational levels should be managed. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at the individual level could benefit knowledge use at the collective level to ensure added value. The research problem is that little research exists on measuring knowledge sharing through a multi-layered structure of ideas with, at its foundation, philosophical assumptions to support presuppositions and commitments, which requires actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measuring knowledge sharing in emerging knowledge organizations. The research question follows from the observation that, despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations due to a lack of knowledge about who should do it, what should be done, and how it should be done. The main premise of this research is the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge, in which learning becomes the norm. The theoretical constructs were derived from the three components of knowledge management theory, namely the technical, communication and human components, where it is suggested that this knowledge infrastructure could ensure effective management.
While it is realised that it might be somewhat problematic to implement and measure all relevant concepts, this paper presents the effect of eight critical success factors (CSFs), namely: organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures and innovation. These CSFs were identified through a comprehensive review of existing research and tested in a new framework adapted from the four perspectives of the balanced scorecard (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa, which relies on knowledge sharing to ensure its competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument, improve the quality of the items and correct the wording of issues. Through analysis of the collected surveys, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. Reliability of the instrument was calculated using Cronbach’s α for the two sections of the instrument, at the organizational and individual levels. Construct validity was confirmed using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management. In addition, the organization realised the importance of consolidating its knowledge assets to create value that is sustainable over time.
Keywords: innovation, intellectual capital, knowledge sharing, performance measures
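The reliability step can be made concrete with a short sketch. The formula below is the standard Cronbach's alpha; the item scores are invented for illustration and are not the study's survey data.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance
    of total scores). item_scores holds one list per item, aligned across
    respondents (same respondent order in every inner list)."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    # Each respondent's total across all items
    totals = [sum(item[r] for item in item_scores) for r in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three perfectly consistent items across three respondents -> alpha = 1.0
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
```

By convention, an alpha of roughly 0.7 or higher is taken as acceptable internal consistency for a scale section.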
Procedia PDF Downloads 195
603 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative for feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag-of-Words approach. The score selected to represent the occurrence of tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets for the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction.
These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
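The scoring core of the SW algorithm can be sketched as follows. This is a minimal, score-only version (no traceback) with illustrative match/mismatch/gap values; the study's actual parameterization and the way scores are turned into features are not reproduced here.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between sequences a and b (character
    strings or word-token lists). H[i][j] holds the best score of an
    alignment ending at a[i-1], b[j-1]; flooring scores at zero is what
    makes the alignment local rather than global."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Works on word-token sequences as well as characters: the shared run
# "obesity noted" (2 matches x 2) scores 4.
score = smith_waterman("morbid obesity noted".split(),
                       "obesity noted today".split())
```

Applied to tokenized documents, such local-alignment scores reward shared word runs of any length, which is what distinguishes this approach from fixed-length N-gram features.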
Procedia PDF Downloads 297
602 Investigation of Turbulent Flow in a Bubble Column Photobioreactor and Consequent Effects on Microalgae Cultivation Using Computational Fluid Dynamic Simulation
Authors: Geetanjali Yadav, Arpit Mishra, Parthsarathi Ghosh, Ramkrishna Sen
Abstract:
The world is facing the problems of increasing global CO2 emissions, climate change and a fuel crisis. Therefore, several renewable and sustainable energy alternatives should be investigated to replace non-renewable fuels in the future. Algae present a versatile feedstock for the production of a variety of fuels (biodiesel, bioethanol, bio-hydrogen, etc.) and high-value compounds for food, fodder, cosmetics and pharmaceuticals. Microalgae are simple microorganisms that require water, light, CO2 and nutrients for growth by the process of photosynthesis and can grow in extreme environments, utilizing waste gas (flue gas) and waste waters. Mixing, however, is a crucial parameter within the culture system for the uniform distribution of light, nutrients and gaseous exchange, in addition to preventing settling/sedimentation and the creation of dark zones. The overarching goal of the present study is to improve photobioreactor (PBR) design for enhancing the dissolution of CO2 from ambient air (0.039%, v/v), pure CO2 and coal-fired flue gas (10 ± 2%) into microalgal PBRs. Computational fluid dynamics (CFD), a state-of-the-art technique, has been used to solve partial differential equations with turbulence closure that represent the dynamics of fluid in a photobioreactor. In this paper, the hydrodynamic performance of the PBR has been characterized and compared with that of a conventional bubble column PBR using CFD. Parameters such as flow rate (Q), mean velocity (u) and mean turbulent kinetic energy (TKE) were characterized for each experiment across different aeration schemes. The results showed that the modified PBR design had superior liquid circulation properties and gas-liquid transfer, resulting in the creation of a uniform environment inside the PBR as compared to the conventional bubble column PBR.
The CFD technique has proven promising for design and paves the way for future research to develop PBRs that can be made commercially available for scaled-up microalgal production.
Keywords: computational fluid dynamics, microalgae, bubble column photobioreactor, flue gas, simulation
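Of the parameters characterized above, mean turbulent kinetic energy has the simplest definition: k = ½(⟨u′²⟩ + ⟨v′²⟩ + ⟨w′²⟩), where primes denote fluctuations about the mean velocity. The sketch below is a generic post-processing calculation from sampled velocity components; the sample values are illustrative, not the study's CFD output.

```python
def mean_tke(u, v, w):
    """Mean turbulent kinetic energy per unit mass,
    k = 0.5 * (<u'^2> + <v'^2> + <w'^2>),
    from sampled velocity components in m/s; result in m^2/s^2."""
    def mean_sq_fluctuation(samples):
        mean = sum(samples) / len(samples)
        return sum((x - mean) ** 2 for x in samples) / len(samples)

    return 0.5 * (mean_sq_fluctuation(u)
                  + mean_sq_fluctuation(v)
                  + mean_sq_fluctuation(w))

# Illustrative samples: axial component fluctuates +/-0.1 m/s about 1.0,
# lateral components are steady, so only u contributes to k.
k = mean_tke([0.9, 1.1, 0.9, 1.1],
             [0.0, 0.0, 0.0, 0.0],
             [0.1, 0.1, 0.1, 0.1])
```

In a CFD post-processing workflow the same averaging would be applied cell by cell over the resolved (or modelled) velocity fluctuations.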
Procedia PDF Downloads 231
601 Application of Discrete-Event Simulation in Health Technology Assessment: A Cost-Effectiveness Analysis of Alzheimer’s Disease Treatment Using Real-World Evidence in Thailand
Authors: Khachen Kongpakwattana, Nathorn Chaiyakunapruk
Abstract:
Background: Decision-analytic models for Alzheimer’s disease (AD) have advanced to discrete-event simulation (DES), in which individual-level modelling of disease progression across continuous severity spectra and the incorporation of key parameters, such as treatment persistence, become feasible. This study aimed to apply DES to a cost-effectiveness analysis of treatment for AD in Thailand. Methods: A dataset of Thai patients with AD, representing unique demographic and clinical characteristics, was bootstrapped to generate a baseline cohort of patients. Each patient was cloned and assigned to donepezil, galantamine, rivastigmine, memantine or no treatment. Throughout the simulation period, the model randomly assigned each patient to discrete events including hospital visits, treatment discontinuation and death. Correlated changes in cognitive and behavioral status over time were modelled using patient-level data. Treatment effects were obtained from the most recent network meta-analysis. Treatment persistence, mortality and predictive equations for functional status, costs (Thai baht (THB) in 2017) and quality-adjusted life years (QALYs) were derived from country-specific real-world data. The time horizon was 10 years, with a discount rate of 3% per annum. Cost-effectiveness was evaluated against the willingness-to-pay (WTP) threshold of 160,000 THB/QALY gained (4,994 US$/QALY gained) in Thailand. Results: Under a societal perspective, only the prescription of donepezil to AD patients at all disease-severity levels was found to be cost-effective. Compared to untreated patients, patients receiving donepezil incurred a discounted additional cost of 2,161 THB but experienced a discounted QALY gain of 0.021, resulting in an incremental cost-effectiveness ratio (ICER) of 138,524 THB/QALY (4,062 US$/QALY).
Moreover, providing early treatment with donepezil to mild AD patients further reduced the ICER to 61,652 THB/QALY (1,808 US$/QALY). However, the advantage of donepezil appeared to wane when treatment was delayed to a subgroup of moderate and severe AD patients [ICER: 284,388 THB/QALY (8,340 US$/QALY)]. Introducing a treatment stopping rule, under which treatment ceases once the Mini-Mental State Exam (MMSE) score falls below 10, for a mild AD cohort did not deteriorate the cost-effectiveness of donepezil at the current treatment persistence level. On the other hand, none of the AD medications was cost-effective under a healthcare perspective. Conclusions: DES greatly enhances the real-world representativeness of decision-analytic models for AD. Under a societal perspective, treatment with donepezil improves patients’ quality of life and is considered cost-effective when used to treat AD patients at all disease-severity levels in Thailand. The optimal treatment benefits are observed when donepezil is prescribed early in the course of AD. Given healthcare budget constraints in Thailand, implementing donepezil coverage would most likely be feasible starting with mild AD patients, together with the stopping rule introduced.
Keywords: Alzheimer's disease, cost-effectiveness analysis, discrete event simulation, health technology assessment
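The event-scheduling core of a DES can be sketched generically. The toy below is illustrative only: event times are drawn from simple exponential rates, whereas the study derived persistence, mortality and correlated cognitive/behavioural trajectories from Thai real-world data.

```python
import heapq
import random

def simulate_patient(horizon=10.0, visit_interval=0.5,
                     discontinue_rate=0.2, death_rate=0.05, seed=1):
    """Process one simulated patient's discrete events in time order.
    Rates are per-year; returns a chronological (time, event) log.
    Events are kept in a min-heap keyed on simulated time."""
    rng = random.Random(seed)
    events = []
    heapq.heappush(events, (visit_interval, "hospital_visit"))
    heapq.heappush(events, (rng.expovariate(discontinue_rate), "discontinue"))
    heapq.heappush(events, (rng.expovariate(death_rate), "death"))
    log = []
    while events:
        t, name = heapq.heappop(events)
        if t > horizon:
            break  # beyond the 10-year time horizon
        log.append((t, name))
        if name == "death":
            break  # absorbing state: no further events
        if name == "hospital_visit":
            # recurring event: schedule the next visit
            heapq.heappush(events, (t + visit_interval, "hospital_visit"))
    return log
```

Costs and QALYs accumulated over many such simulated histories per arm feed the decision rule itself, which reduces to comparing the ICER (incremental cost / incremental QALYs) against the 160,000 THB/QALY threshold.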
Procedia PDF Downloads 129
600 Urban Meetings: Graphic Analysis of the Public Space in a Cultural Building from São Paulo
Authors: Thalita Carvalho Martins de Castro, Núbia Bernardi
Abstract:
Current studies show that our cities are portraits of social relations. Amid so many forms of segregation, cultural buildings emerge as places that assemble collective activities and expressions. Through theaters, exhibitions, educational workshops and libraries, architecture approaches human relations and seeks to propose meeting places. The purpose of this research is to deepen the discussion of the contributions of cultural buildings to the use of spaces in the contemporary city, based on the data and measurements collected in a master's research project in progress. The graphic analysis of the insertion of contemporary cultural buildings seeks to highlight the social use of space. The urban insertions of contemporary cultural buildings in the city of São Paulo (Brazil) will be analyzed to understand the relations between architectural form and its audience. The collected data describe the dynamics of flows and permanence in the use of these spaces, indicating the contribution of cultural buildings, associated with artistic production, to the dynamics of urban spaces and the social modification of their milieu. Among possible case studies, the research in development is based on the registration and graphic analysis of the Praça das Artes building (2012), located in the historical central region of the city, which, after a long period of severe degradation, is currently undergoing redevelopment. The choice of this building was based on four parameters, at both the architectural and urban scales: urban insertion, local impact, cultural production and a mix of uses. Two graphic analysis methodologies will be applied: one using diagrams accompanied by texts, and another using active analysis for open space projects with complementary graphic methods, including maps, plans, infographics, perspectives, time-lapse videos and analytical tables.
This research aims to reinforce the debate between methodologies of form-use analysis and visual synthesis applied to cultural buildings, so that new projects can structure public spaces as catalysts for social use, generating improvements in the daily life of their users and in the cities where they are inserted.
Keywords: cultural buildings, design methodologies, graphic analysis, public spaces
Procedia PDF Downloads 306
599 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model
Authors: Tanu Khanuja, Harikrishnan N. Unni
Abstract:
The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful in understanding the injury mechanism. The majority of physical damage to living tissues is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments, including the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid) and skull, from patient-specific MRI (magnetic resonance imaging). Semi-automatic segmentation of the head is performed using the AMIRA software to extract the finer grooves of the brain. Maintaining high accuracy requires a large number of mesh elements, which in turn increases computational time; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, the model is validated against experimental literature. Hard tissues such as the skull are modeled as elastic, whereas soft tissues such as the brain are modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and intracranial pressure distributions due to impact on different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. This paper finds that a back impact is more injurious to the head overall than impacts at the other sites.
The present work would be helpful in understanding the injury mechanism of traumatic brain injury more effectively.
Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress
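The Prony series used for the soft-tissue model has a compact closed form: the relaxation shear modulus is G(t) = G∞ + Σ Gᵢ·exp(−t/τᵢ). A minimal sketch, with illustrative parameter values rather than the calibrated brain-tissue constants from the study:

```python
import math

def prony_shear_modulus(t, g_inf, terms):
    """Viscoelastic relaxation shear modulus
    G(t) = g_inf + sum(G_i * exp(-t / tau_i)).
    g_inf: long-term modulus (Pa); terms: list of (G_i, tau_i) pairs
    with moduli in Pa and relaxation times tau_i in seconds."""
    return g_inf + sum(g_i * math.exp(-t / tau_i) for g_i, tau_i in terms)

# Illustrative (not study-calibrated) two-term series
terms = [(500.0, 0.01), (300.0, 0.1)]
g0 = prony_shear_modulus(0.0, 1000.0, terms)       # instantaneous modulus: 1800.0 Pa
g_late = prony_shear_modulus(10.0, 1000.0, terms)  # relaxes toward g_inf = 1000 Pa
```

In FE packages, the same (Gᵢ, τᵢ) pairs are typically what is entered as the Prony table defining the time-dependent part of the brain material.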
Procedia PDF Downloads 160