Search results for: monotonically decreasing parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3168

678 Microstructural Evolution of Maraging Steels from Powder Particles to Additively Manufactured Samples

Authors: Seyedamirreza Shamsdini, Mohsen Mohammadi

Abstract:

In this research, 18Ni-300 maraging steel powder particles are investigated by studying their particle size distribution, morphology, and grain structure. The powder analysis shows mostly spherical morphologies with cellular structures. A laser-based additive manufacturing process, selective laser melting (SLM), is used to produce samples for further investigation of mechanical properties and microstructure. Several uniaxial tensile tests are performed on the as-built parts to evaluate the mechanical properties, and both the macroscopic properties and the microscopic features of the printed parts are then investigated. Hardness and porosity levels are measured for each sample and correlated with microstructures through electron microscopy techniques such as Scanning Electron Microscopy (SEM) and Transmission Electron Microscopy (TEM). The grain structure of the as-printed specimens is studied and compared to the powder particle microstructure. The cellular structure of the printed samples is observed to have dendritic forms, with dendrite widths similar to those of the powder particle cells. The process parameters are then varied, and the study is repeated for different powder layer thicknesses; the resultant mechanical properties and grain structures are shown to be similar. A phase study is conducted on both the powder and the printed samples using X-Ray Diffraction (XRD) techniques, and the austenite phase is observed to decrease first during the manufacturing process and again during uniaxial tensile deformation. The martensitic structure is formed in the first stage by the heating cycles of the manufacturing process, and the remaining austenite is shown to transform to martensite through different deformation mechanisms.

Keywords: additive manufacturing, maraging steel, mechanical properties, microstructure

Procedia PDF Downloads 144
677 Using Nature-Based Solutions to Decarbonize Buildings in Canadian Cities

Authors: Zahra Jandaghian, Mehdi Ghobadi, Michal Bartko, Alex Hayes, Marianne Armstrong, Alexandra Thompson, Michael Lacasse

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) report stated the urgent need to cut greenhouse gas emissions to avoid the adverse impacts of climatic changes. The United Nations has forecasted that nearly 70 percent of people will live in urban areas by 2050, resulting in a doubling of the global building stock. Given that buildings are currently recognised as emitting 40 percent of global carbon emissions, there is an urgent incentive to decarbonize existing buildings and to build net-zero carbon buildings. Attaining net zero carbon emissions in communities in the future requires action in two directions: I) reduction of emissions; and II) removal of on-going emissions from the atmosphere once de-carbonization measures have been implemented. Nature-based solutions (NBS) have a significant role to play in achieving net zero carbon communities, spanning both emission reductions and the removal of on-going emissions. NBS for the decarbonisation of buildings can be achieved by using green roofs and green walls – increasing vertical and horizontal vegetation on building envelopes – and by using nature-based materials that either emit less heat to the atmosphere, thus decreasing photochemical reaction rates, or store a substantial amount of carbon within their structure over the whole building service life. The NBS approach can also mitigate urban flooding and overheating, improve urban climate and air quality, and provide better living conditions for the urban population. For existing buildings, de-carbonization mostly requires retrofitting existing envelopes efficiently to use NBS techniques, whereas for future construction, de-carbonization involves designing new buildings with low carbon materials as well as with the integrity and system capacity to effectively employ NBS. This paper presents the opportunities and challenges with respect to the de-carbonization of buildings using NBS for both building retrofits and new construction.
This review documents the effectiveness of NBS in de-carbonizing Canadian buildings, identifies the missing links to implementing these techniques in cold climatic conditions, and determines a road map and immediate approaches to mitigate the adverse impacts of climate change, such as the urban heat island effect. Recommendations are drafted for possible inclusion in the Canadian building and energy codes.

Keywords: decarbonization, nature-based solutions, GHG emissions, greenery enhancement, buildings

Procedia PDF Downloads 78
676 Anthropometric Indices of Obesity and Coronary Artery Atherosclerosis: An Autopsy Study in South Indian population

Authors: Francis Nanda Prakash Monteiro, Shyna Quadras, Tanush Shetty

Abstract:

The association between human physique and the morbidity and mortality resulting from coronary artery disease has been studied extensively over several decades. Multiple studies have also examined the correlation between the grade of atherosclerosis, coronary artery disease, and anthropometric measurements; however, far fewer of these have been autopsy-based. It has been suggested that while in living subjects it would be expensive, difficult, and even harmful to use imaging modalities such as CT scans and procedures involving contrast media to study mild atherosclerosis, no such harm is encountered in the study of autopsy cases. This autopsy-based study aimed to correlate anthropometric measurements and indices of obesity, such as waist circumference (WC), hip circumference (HC), body mass index (BMI), and waist-hip ratio (WHR), with the degree of atherosclerosis in the right coronary artery (RCA), the main branch of the left coronary artery (LCA), and the left anterior descending artery (LADA) in 95 victims of South Indian origin of both genders, aged between 18 and 75 years. The grading of atherosclerosis was done according to the criteria suggested by the American Heart Association. The study also analysed the correlation of the anthropometric measurements and indices of obesity with the number of coronaries affected by atherosclerosis in an individual. All the anthropometric measurements and the derived indices were found to be significantly correlated with each other in both genders, except for age, which was found to have a significant correlation only with the WHR. In both genders, a severe degree of atherosclerosis was most commonly observed in the LADA, followed by the LCA and RCA. The grade of atherosclerosis in the RCA is significantly related to the WHR in males, while the grade of atherosclerosis in the LCA and LADA is significantly related to the WHR in females.
In both males and females, a significant relation was observed between the grade of atherosclerosis in the RCA and WC and WHR, and between the grade of atherosclerosis in the LADA and HC. Anthropometric measurements and indices of obesity can thus be an effective means of identifying high-risk cases of atherosclerosis at an early stage, which can help reduce the associated cardiac morbidity and mortality. A person with anthropometric measurements suggestive of mild atherosclerosis can be advised to modify their lifestyle and decrease their exposure to the other risk factors, while those with measurements suggestive of a higher degree of atherosclerosis can be referred for confirmatory procedures so that effective treatment can be started.
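The obesity indices used above have standard definitions that can be sketched in a few lines; the numbers below are illustrative examples, not data from the study:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def waist_hip_ratio(waist_cm, hip_cm):
    """Waist-to-hip ratio: waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

# Illustrative subject: 70 kg, 1.75 m tall, waist 85 cm, hip 100 cm
print(round(bmi(70, 1.75), 1))      # 22.9
print(waist_hip_ratio(85, 100))     # 0.85
```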

Keywords: atherosclerosis, coronary artery disease, indices, obesity

Procedia PDF Downloads 51
675 Field-observed Thermal Fractures during Reinjection and Its Numerical Simulation

Authors: Wen Luo, Phil J. Vardon, Anne-Catherine Dieudonne

Abstract:

One key process that partly controls the success of geothermal projects is fluid reinjection, which helps in dealing with waste water, maintaining reservoir pressure, and supplying the heat-exchange medium. Thus, sustaining the injectivity is of great importance for the efficiency and sustainability of geothermal production. However, the injectivity is sensitive to the reinjection process, and field experience has shown that it can be either damaged or improved. In this paper, the focus is on how the injectivity is improved. Since the injection pressure is far below the formation fracture pressure, hydraulic fracturing cannot be the mechanism contributing to the increase in injectivity; instead, thermal stimulation has been identified as the main contributor. For low-enthalpy geothermal reservoirs, which are not fracture-controlled, thermal fracturing, rather than thermal shearing, is expected to be the mechanism for increasing injectivity. In this paper, field data from sedimentary low-enthalpy geothermal reservoirs in the Netherlands were analysed to show the occurrence of thermal fracturing due to the cooling shock during reinjection. Injection data were collected and compared to show the effects of the thermal fractures on injectivity. Then, a thermo-hydro-mechanical (THM) model of the near-field formation was developed and solved by the finite element method to simulate the observed thermal fractures. It was compared with the corresponding HM model, obtained by removing the thermal component from the THM model, to illustrate the thermal effects on fracturing. Finally, the effects of the operational parameters, i.e. injection temperature and pressure, on the changes in injectivity were studied on the basis of the THM model. The field data analysis and simulation results illustrate that thermal fracturing occurred during reinjection and contributed to the increase in injectivity.
The injection temperature was identified as a key parameter that contributes to thermal fracturing.
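As a rough illustration of why cooling can fracture rock even when the injection pressure is below the formation fracture pressure, the classical constrained-cooling thermoelastic stress estimate can be sketched as follows; the material values are assumed for illustration and are not taken from the paper:

```python
def thermal_stress(E, alpha, dT, nu):
    """Thermoelastic stress for fully constrained cooling:
    sigma = E * alpha * dT / (1 - nu)."""
    return E * alpha * dT / (1.0 - nu)

# Illustrative sandstone-like values (assumptions, not study data):
E = 15e9       # Young's modulus, Pa
alpha = 1e-5   # linear thermal expansion coefficient, 1/K
nu = 0.25      # Poisson's ratio
dT = 50.0      # cooling of the near-well rock by the injected water, K

sigma = thermal_stress(E, alpha, dT, nu)
print(f"{sigma / 1e6:.1f} MPa of tensile stress")  # 10.0 MPa
```

Even a modest 50 K cooling shock generates tensile stresses of order 10 MPa, comparable to the tensile strength of many sedimentary rocks, which is consistent with the thermal-fracturing mechanism described above.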

Keywords: injectivity, reinjection, thermal fracturing, thermo-hydro-mechanical model

Procedia PDF Downloads 203
674 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are more common, economical, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some of the degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure changes its stiffness, so the method determines damage from changes in the structural stiffness parameters. The changes in the static response caused by structural damage are used to produce a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static response vectors of the damaged and healthy structures. Several damage detection scenarios are defined (a single-damage scenario and multiple-damage scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to verify that the best identification result is obtained, which indicates that the method is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of the method to detect damage in the given structures, and the figures show that damage detection in multiple-damage scenarios also gives efficient answers. Even the presence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
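The core idea can be sketched with a deliberately simplified stand-in for the truss model: assume each measured DOF behaves like an independent spring whose stiffness is reduced by an unknown damage factor, and let a minimal genetic algorithm (selection plus mutation only, for brevity) recover the damage from the static displacements. This is a toy illustration, not the paper's finite element formulation:

```python
import random

random.seed(1)

# Toy stand-in for a truss: each DOF acts like a spring whose healthy
# stiffness k_i is reduced by an unknown damage factor d_i in [0, 1).
K0 = [100.0, 100.0, 100.0]   # healthy stiffnesses (assumed units)
F = 10.0                     # static load applied at each DOF

def displacements(damage):
    return [F / (k * (1.0 - d)) for k, d in zip(K0, damage)]

true_damage = [0.0, 0.3, 0.0]          # damage only in member 2
measured = displacements(true_damage)  # plays the role of measured data

def fitness(damage):
    # Objective: squared difference between model and measured responses
    return sum((u - m) ** 2 for u, m in zip(displacements(damage), measured))

# Minimal GA: rank selection plus Gaussian mutation (no crossover)
pop = [[random.uniform(0.0, 0.9) for _ in K0] for _ in range(40)]
for gen in range(200):
    pop.sort(key=fitness)
    survivors = pop[:10]
    pop = survivors + [
        [min(0.9, max(0.0, g + random.gauss(0, 0.05)))
         for g in random.choice(survivors)]
        for _ in range(30)
    ]

best = min(pop, key=fitness)
print([round(d, 2) for d in best])  # close to [0.0, 0.3, 0.0]
```

The GA both localizes the damage (member 2) and estimates its severity (about 30% stiffness loss), mirroring the location-and-severity goal stated in the abstract.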

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 215
673 Micro Plasma: An Emerging Technology to Eradicate Pesticides from Food Surfaces

Authors: Muhammad Saiful Islam Khan, Yun Ji Kim

Abstract:

Organophosphorus pesticides (OPPs) have been widely used to replace the more persistent organochlorine pesticides because OPPs are more soluble in water and decompose rapidly in aquatic systems. The extensive use of OPPs in modern agriculture is the major cause of the contamination of surface water. Regardless of the advantages gained by the application of pesticides in modern agriculture, they are a threat to public health and the environment. With the aim of reducing possible health threats, several physical and chemical treatment processes have been studied to eliminate biological and chemical poisons from food. In the present study, a micro-plasma device was used to reduce pesticides on food surfaces. The pesticide-free food items chosen in this study were perilla leaf, tomato, broccoli, and blueberry. To evaluate the removal efficiency of pesticides, different washing methods were followed, such as soaking in water, washing with bubbling water, washing with plasma-treated water, and washing with chlorine water. 2 mL of 2000 ppm pesticide samples, namely diazinon and chlorpyrifos, were individually inoculated on the food surfaces and air-dried for 2 hours before the plasma treatment. Plasma-treated water was used in two different manners: one was plasma-treated water with bubbling, and the other was aerosolized plasma-treated water. The removal efficiency of pesticides from the food surfaces was studied using HPLC. Washing with plasma-treated water, aerosolized plasma-treated water, and chlorine water shows a minimum of 72% to a maximum of 87% reduction for a 4 min treatment, irrespective of the type of food item and the type of pesticide; in the case of soaking and bubbling, the reduction is 8% to 48%. Washing with plasma-treated water, aerosolized plasma-treated water, and chlorine water thus shows a broadly similar reduction ability, which is significantly higher than that of the soaking and bubbling washing systems.
The temperature effect of the washing systems was also evaluated at three different temperatures: 22°C, 10°C, and 4°C. Decreasing the temperature from 22°C to 10°C shows a higher reduction in the case of washing with plasma-treated and aerosolized plasma-treated water, whereas an opposite trend was observed for washing with chlorine water. A further temperature reduction from 10°C to 4°C does not show any significant additional reduction of pesticides, except for washing with chlorine water; chlorine water treatment shows a smaller pesticide reduction as the temperature decreases. The color change of the treated samples was measured immediately and after one week to evaluate whether washing with plasma-treated water or chlorine water has any effect. No significant color changes were observed for either washing system, except for broccoli washed with chlorine water.

Keywords: chlorpyrifos, diazinon, pesticides, micro plasma

Procedia PDF Downloads 169
672 Analysis of Surface Hardness, Surface Roughness and near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process

Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.

Abstract:

In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, the deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are the significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values, together with the subsequent verification experiments under the optimal operating conditions, confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is calculated as 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the hardness is improved from 225 to 306 HV, an increase in the near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found from the microstructure analysis to be more than 300 µm, in correlation with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy, and an X-ray diffractometer are used to characterize the modified surface layer.
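The regression step can be illustrated with a toy second-order response surface fitted by least squares over a coded two-factor central composite design; the coefficient values below are invented for illustration and are not the paper's fitted model:

```python
import numpy as np

# Coded central composite design for two factors (e.g. rolling force and
# ball diameter): 2^2 factorial points, axial points at +/-alpha, one centre.
alpha = np.sqrt(2.0)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha], [0, 0]])

def design(X):
    """Second-order model terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2,
                            x1 ** 2, x2 ** 2])

# Hypothetical "true" surface for surface hardness (HV) in coded units
true_coef = np.array([270.0, 12.0, 5.0, 2.0, -3.0, -1.5])
y = design(X) @ true_coef  # noise-free illustrative responses

# Least-squares fit of the quadratic response surface
coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)
print(np.round(coef, 3))  # recovers the assumed coefficients
```

With real (noisy) measurements the fit would be approximate rather than exact, and the significance of each term would be judged by ANOVA, as in the study.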

Keywords: hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness

Procedia PDF Downloads 401
671 Infusion Pump Historical Development, Measurement and Parts of Infusion Pump

Authors: Samuel Asrat

Abstract:

Infusion pumps have become indispensable tools in modern healthcare, allowing for precise and controlled delivery of fluids, medications, and nutrients to patients. This paper provides an overview of the historical development, measurement, and parts of infusion pumps. The historical development of infusion pumps can be traced back to the early 1960s when the first rudimentary models were introduced. These early pumps were large, cumbersome, and often unreliable. However, advancements in technology and engineering over the years have led to the development of smaller, more accurate, and user-friendly infusion pumps. Measurement of infusion pumps involves assessing various parameters such as flow rate, volume delivered, and infusion duration. Flow rate, typically measured in milliliters per hour (mL/hr), is a critical parameter that determines the rate at which fluids or medications are delivered to the patient. Accurate measurement of flow rate is essential to ensure the proper administration of therapy and prevent adverse effects. Infusion pumps consist of several key parts, including the pump mechanism, fluid reservoir, tubing, and control interface. The pump mechanism is responsible for generating the necessary pressure to push fluids through the tubing and into the patient's bloodstream. The fluid reservoir holds the medication or solution to be infused, while the tubing serves as the conduit through which the fluid travels from the reservoir to the patient. The control interface allows healthcare providers to program and adjust the infusion parameters, such as flow rate and volume. In conclusion, infusion pumps have evolved significantly since their inception, offering healthcare providers unprecedented control and precision in delivering fluids and medications to patients. Understanding the historical development, measurement, and parts of infusion pumps is essential for ensuring their safe and effective use in clinical practice.
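The relationship between flow rate, delivered volume, and infusion duration described above is simple enough to sketch directly; the figures are generic worked examples, not values from the paper:

```python
def infusion_duration_hr(volume_ml, rate_ml_per_hr):
    """Time needed to deliver a given volume at a set flow rate."""
    return volume_ml / rate_ml_per_hr

def delivered_volume_ml(rate_ml_per_hr, hours):
    """Volume delivered after running at a constant rate for some time."""
    return rate_ml_per_hr * hours

# A 500 mL bag at 125 mL/hr takes 4 hours to deliver:
print(infusion_duration_hr(500, 125))   # 4.0
# After 1.5 hours at 125 mL/hr, 187.5 mL has been delivered:
print(delivered_volume_ml(125, 1.5))    # 187.5
```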

Keywords: dip, ip, sp, is

Procedia PDF Downloads 44
670 Influence of Convective Boundary Condition on Chemically Reacting Micropolar Fluid Flow over a Truncated Cone Embedded in Porous Medium

Authors: Pradeepa Teegala, Ramreddy Chitteti

Abstract:

This article analyzes the mixed convection flow of a chemically reacting micropolar fluid over a truncated cone embedded in a non-Darcy porous medium with a convective boundary condition. In addition, heat generation/absorption and Joule heating effects are taken into consideration. A similarity solution does not exist for this complex fluid flow problem; hence, non-similarity transformations are used to convert the governing fluid flow equations, along with the related boundary conditions, into a set of nondimensional partial differential equations. Many authors have applied the spectral quasi-linearization method to solve ordinary differential equations, but here the resulting nonlinear partial differential equations are solved for the non-similarity solution by using this recently developed method (SQLM). Comparison with previously published work on special cases of the problem is performed and found to be in excellent agreement. The effects of the pertinent parameters, namely the Biot number, mixed convection parameter, heat generation/absorption, Joule heating, Forchheimer number, chemical reaction, micropolar parameter, and magnetic field, on the physical quantities of the flow are displayed through graphs, and the salient features are explored in detail. Further, the results are analyzed by comparison with two special cases, namely the vertical plate and the full cone, wherever possible.

Keywords: chemical reaction, convective boundary condition, joule heating, micropolar fluid, mixed convection, spectral quasi-linearization method

Procedia PDF Downloads 266
669 Impact of Non-Parental Early Childhood Education on Digital Friendship Tendency

Authors: Sheel Chakraborty

Abstract:

Modern society in developed countries has distanced itself from the earlier norm of joint family living, and with increasing economic pressure, parents' availability for their children during the infant years has been consistently decreasing over the past three decades. During the same time, the pre-primary education system - built mainly on the developmental psychology frameworks of Jean Piaget and Lev Vygotsky - has been promoted in the US through legislation and funding. Early care and education may have a positive impact on young minds, but a growing number of kids facing social challenges in making friendships in their teenage years raises serious concerns about its effectiveness. The survey-based primary research presented here shows that a statistically significant number of young people between the ages of 10 and 25 prefer to build friendships virtually rather than through face-to-face interactions. Moreover, many teenagers depend more on virtual friends whom they have never met. Contrary to the belief that early social interactions in a non-home setup make kids confident and more prepared for the real world, many shy-natured kids seem to develop a sense of shakiness in forming social relationships, resulting in loneliness by the time they are young adults. Reflecting on George Mead's theory of the self as made up of the "I" and the "Me", most functioning homes provide the required freedom and a forgiving, congenial environment for building the "I" of a toddler; however, daycare or preschools can barely match that. Social images created from the expectations perceived by a preschooler's "Me" in a non-home setting may interfere with and greatly overpower the formation of a confident "I", thus creating a crisis around the inability to form friendships face to face when they grow older.
Though the pervasive nature of social media cannot be ignored, the non-parental early care and education practices adopted largely by the urban population have created a favorable platform of teen psychology on which social media popularity thrived, especially by providing refuge to shy Gen-Z teenagers. This can explain why young adults today perceive social media as their preferred outlet of expression and a place to form dependable friendships, despite the risk of being cyberbullied.

Keywords: digital socialization, shyness, developmental psychology, friendship, early education

Procedia PDF Downloads 109
668 Theoretical Analysis and Design Consideration of Screened Heat Pipes for Low-Medium Concentration Solar Receivers

Authors: Davoud Jafari, Paolo Di Marco, Alessandro Franco, Sauro Filippeschi

Abstract:

This paper summarizes the results of an investigation into heat pipe heat transfer for solar collector applications. The study aims to show the feasibility of a concentrating solar collector coupled with a heat pipe. Particular emphasis is placed on the capillary and boiling limits in capillary porous structures with different mesh numbers and wick thicknesses. A mathematical model of a cylindrical heat pipe is applied to study its behaviour when it is exposed to a higher heat input at the evaporator. The steady-state analytical model includes two-dimensional heat conduction in the heat pipe's wall, the liquid flow in the wick, and the vapor hydrodynamics. A sensitivity analysis was conducted by considering different design criteria and working conditions. Different wicks (mesh 50, 100, 150, 200, 250, and 300) and different porosities (0.5, 0.6, 0.7, 0.8, and 0.9) with different wick thicknesses (0.25, 0.5, 1, 1.5, and 2 mm) are analyzed with water as the working fluid. Results show that it is possible to improve the heat transfer capability (HTC) of a heat pipe by selecting the appropriate wick thickness, effective pore radius, and section lengths for a given heat pipe configuration, and that optimal design criteria exist (optimal wick thickness and optimal evaporator, adiabatic, and condenser section lengths). It is shown that the boiling and wicking limits are connected and depend on each other. As different parts of the heat pipe's external surface collect different fractions of the total incoming insolation, the analysis of the non-uniform heat flux distribution indicates that the peak heat flux is not a controlling parameter. The parametric investigations aim to determine the working limits and thermal performance of heat pipes for medium-temperature solar collector applications.
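The role of the mesh number in the capillary limit can be illustrated with the common screen-wick estimates r_eff ≈ 1/(2N) and ΔP_cap = 2σ/r_eff; this is a rough textbook sketch under assumed conditions, not the paper's full model:

```python
def effective_pore_radius_m(mesh_per_inch):
    """Common screen-wick approximation: r_eff ~ 1/(2N), N in wires/metre."""
    N = mesh_per_inch / 0.0254  # convert mesh count per inch to per metre
    return 1.0 / (2.0 * N)

def max_capillary_pressure(sigma, r_eff):
    """Maximum capillary pumping pressure: dP = 2 * sigma / r_eff."""
    return 2.0 * sigma / r_eff

sigma_water = 0.0589  # N/m, water near 100 C (assumed working temperature)
for mesh in (50, 100, 200, 300):
    r = effective_pore_radius_m(mesh)
    print(mesh, round(max_capillary_pressure(sigma_water, r), 1), "Pa")
```

Finer meshes pump harder (higher ΔP_cap) but also raise the liquid-flow resistance in the wick, which is why an optimum mesh/thickness combination exists, as the abstract reports.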

Keywords: screened heat pipes, analytical model, boiling and capillary limits, concentrating collector

Procedia PDF Downloads 542
667 Test Method Development for Evaluation of Process and Design Effect on Reinforced Tube

Authors: Cathal Merz, Gareth O’Donnell

Abstract:

Coil reinforced thin-walled (CRTW) tubes are used in medicine to treat problems affecting blood vessels within the body through minimally invasive procedures. The CRTW tube considered in this research makes up part of such a device and is inserted into the patient via their femoral or brachial arteries and manually navigated to the site in need of treatment. This procedure replaces the requirement to perform open surgery but is limited by reduction of blood vessel lumen diameter and increase in tortuosity of blood vessels deep in the brain. In order to maximize the capability of these procedures, CRTW tube devices are being manufactured with decreasing wall thicknesses in order to deliver treatment deeper into the body and to allow passage of other devices through its inner diameter. This introduces significant stresses to the device materials which have resulted in an observed increase in the breaking of the proximal segment of the device into two separate pieces after it has failed by buckling. As there is currently no international standard for measuring the mechanical properties of these CRTW tube devices, it is difficult to accurately analyze this problem. The aim of the current work is to address this discrepancy in the biomedical device industry by developing a measurement system that can be used to quantify the effect of process and design changes on CRTW tube performance, aiding in the development of better performing, next generation devices. Using materials testing frames, micro-computed tomography (micro-CT) imaging, experiment planning, analysis of variance (ANOVA), T-tests and regression analysis, test methods have been developed for assessing the impact of process and design changes on the device. 
The major findings of this study have been an insight into the suitability of buckle and three-point bend tests for the measurement of the effect of varying processing factors on the device’s performance, and guidelines for interpreting the output data from the test methods. The findings of this study are of significant interest with respect to verifying and validating key process and design changes associated with the device structure and material condition. Test method integrity evaluation is explored throughout.

Keywords: neurovascular catheter, coil reinforced tube, buckling, three-point bend, tensile

Procedia PDF Downloads 102
666 Solid Angle Approach to Quantify the Shape of Daughter Cavity in Drying Nano Colloidal Sessile Droplets

Authors: Rishabh Hans, Saksham Sharma

Abstract:

Drying of a sessile droplet imbibed with a colloidal solution is a complex process in many aspects. Until now, most of the work has revolved around conditions for buckling onset, post-buckling effects, the nature of the change in droplet shape, etc. In this work, we determine the shape of the daughter cavity (DC) formed after buckling onset, a less explored stage, and its relationship with the experimental parameters. We introduce the solid angle as a special parameter that can quantify the shape of the DC at any instant. It facilitates comparing the shape across experiments with different substrate types, droplet sizes, and particle concentrations. Furthermore, the angular location of the 'weak spot' on the periphery of the droplet, which marks the initiation of cavity growth, varies under different conditions. To address this, we evaluate the deflection angle of the weak spots with respect to the vertical axis passing through the middle of the droplet, and the solid angle subtended by the DC is then analyzed about that inclined axis. Finally, the results of the analysis indicate that increasing the colloidal concentration has an inverse effect on the growth rate of the cavity's shape. Moreover, the cap radius of the DC is observed to be lower for high PLR, which makes the capillary pressure higher and thus makes it relatively tougher to expedite cavity formation. This analysis can be helpful in further studies relating the shape, deflection angle, and growth rate of the daughter cavity to the type of droplet crust formed in the end. Examining the DC stage shall add another layer to nano-colloidal research, which aims to influence many industrial applications like patterning, coatings, drug delivery, food processing, etc.
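The solid angle of a cap-shaped cavity follows from its half-angle via Ω = 2π(1 − cos θ). A minimal sketch, assuming the cavity is modelled as a spherical cap of radius a on a droplet shell of radius R viewed from the droplet centre (a simplification of the inclined-axis construction described above):

```python
import math

def cap_solid_angle(theta_rad):
    """Solid angle (steradians) of a spherical cap of half-angle theta."""
    return 2.0 * math.pi * (1.0 - math.cos(theta_rad))

def cavity_solid_angle(cap_radius, sphere_radius):
    """Solid angle of a cavity of cap radius a on a shell of radius R,
    seen from the droplet centre (assumes a <= R)."""
    theta = math.asin(cap_radius / sphere_radius)
    return cap_solid_angle(theta)

# Sanity check: a hemisphere (theta = pi/2) subtends 2*pi steradians
print(round(cap_solid_angle(math.pi / 2), 4))  # 6.2832
# A cavity of cap radius 0.2 on a droplet of radius 1.0:
print(round(cavity_solid_angle(0.2, 1.0), 4))
```

A smaller cap radius at fixed droplet size gives a smaller solid angle, consistent with the observation that high particle loading suppresses cavity growth.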

Keywords: buckling of sessile droplets, daughter cavity, droplet evaporation, nanoporous shell formation, solid angle

Procedia PDF Downloads 262
665 A Strategic Water and Energy Project as a Climate Change Adaptation Tool for Israel, Jordan and the Middle East

Authors: Doron Markel

Abstract:

Water availability in most of the Middle East (especially in Jordan) is among the lowest in the world and has been even further exacerbated by the regional climatic change and the reduced rainfall. The Araba Valley in Israel is disconnected from the national water system. On the other hand, the Araba Valley, both in Israel and Jordan, is an excellent area for solar energy gaining. The Dead Sea (Israel and Jordan) is a hypersaline lake which its level declines at a rate of more than 1 m/y. The decline stems from the increasing use of all available freshwater resources that discharge into the Dead Sea and decreasing natural precipitation due to climate change in the Middle East. As an adaptation tool for this humanmade and Climate Change results, a comprehensive water-energy and environmental project were suggested: The Red Sea-Dead Sea Conveyance. It is planned to desalinate the Red Sea water, supply the desalinated water to both Israel and Jordan, and convey the desalination brine to the Dead Sea to stabilize its water level. Therefore, the World Bank had led a multi-discipline feasibility study between 2008 and 2013, that had mainly dealt with the mixing of seawater and Dead Sea Water. The possible consequences of such mixing were precipitation and possible suspension of secondary Gypsum, as well as blooming of Dunaliella red algae. Using a comprehensive hydrodynamic-geochemical model for the Dead Sea, it was predicted that while conveying up to 400 Million Cubic Meters per year of seawater or desalination brine to the Dead Sea, the latter would not be stratified as it was until 1979; hence Gypsum precipitation and algal blooms would be neglecting. Using another hydrodynamic-biological model for the Red Sea, it was predicted the Seawater pump from the Gulf of Eilat would not harm the ecological system of the gulf (including the sensitive coral reef), giving a pump depth of 120-160 m. 
Based on these studies, a pipeline conveyance was recommended to convey desalination brine to the Dead Sea together with a hydropower plant utilizing the elevation difference of 400 m between the Red Sea and the Dead Sea. The complementary energy would come from solar panels coupled with innovative storage technology, needed to provide the continuous energy supply required for proper operation of the desalination plant. The paper will describe the proposed project as well as the feasibility study results. The possibility of utilizing this water-energy-environmental project as a climate change adaptation strategy for both Israel and Jordan will also be discussed.
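The hydropower figures quoted above can be sanity-checked with the standard hydraulic power relation P = η·ρ·g·Q·H. A minimal sketch, assuming an illustrative brine density and turbine efficiency alongside the abstract's 400 MCM/year and 400 m figures:

```python
# Rough hydropower potential of the proposed conveyance. Density and
# efficiency are assumptions; volume and head come from the abstract.

RHO = 1100.0            # kg/m^3, assumed density of desalination brine
G = 9.81                # m/s^2
HEAD = 400.0            # m, elevation difference quoted in the abstract
ANNUAL_VOLUME = 400e6   # m^3/year of conveyed brine
EFFICIENCY = 0.85       # assumed turbine/generator efficiency

SECONDS_PER_YEAR = 365.25 * 24 * 3600
flow = ANNUAL_VOLUME / SECONDS_PER_YEAR          # average flow, m^3/s
power_w = EFFICIENCY * RHO * G * flow * HEAD     # hydraulic power, watts

print(f"average flow: {flow:.2f} m^3/s")
print(f"average hydropower: {power_w / 1e6:.1f} MW")
```

Under these assumptions the plant averages a few tens of megawatts, which is why the abstract pairs it with solar generation rather than treating it as the primary energy source.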

Keywords: Red Sea, Dead Sea, water supply, hydro-power, Gypsum, algae

Procedia PDF Downloads 100
664 The Quasar 3C 47: Extreme Population B Jetted Source with Double-Peaked Profile

Authors: Shimeles Terefe Mengistue, Paola Marziani, Ascensión del Olmo, Jaime Perea, Mirjana Pović

Abstract:

The theory that rotating accretion disks are responsible for the broad emission-line profiles in quasars is frequently put forth; however, the presence of an accretion disk (AD) in active galactic nuclei (AGN) has had only limited and indirect observational support. In order to evaluate the extent to which the AD is a source of the broad Balmer lines and high-ionization UV lines in radio-loud (RL) AGN, we focused on an extremely jetted RL quasar, 3C 47, that clearly shows a double-peaked profile. This work presents its optical spectra and UV observations from the HST/FOS covering the rest-frame spectral range from 2000 to 7000 Å. The fits of the low-ionization lines Hβ, Hα, and Mg II λ2800 show profiles that are in very good agreement with a relativistic Keplerian AD model. The profiles of the prototypical high-ionization lines can also be modeled by the contribution of the AD, with additional components due to outflows and emission from the innermost part of the narrow-line region (NLR). Good fits of the resulting double-peaked profiles were obtained, and key disk parameters have been determined using the Hβ, Hα, and Mg II λ2800 lines: the inner and outer radii (both in units of the gravitational radius set by the supermassive black hole mass), the inclination to the line of sight, the emissivity index, and the local broadening parameter. In addition, the accretion parameters, the black hole mass and the Eddington ratio, are also determined. This work indicates that the line profile of 3C 47 provides some of the most convincing direct evidence for a rotating AD in AGN and that the broad, double-peaked profiles originate from this AD surrounding a supermassive black hole.
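The double-peaked signature of a Keplerian disk can be illustrated with a toy calculation: the red and blue peaks sit near the projected orbital velocity of the line-emitting annuli, so emission from smaller radii produces wider peak separation. The black hole mass and inclination below are arbitrary assumptions, not the fitted values for 3C 47:

```python
# Illustrative sketch (not the authors' fitting code): line-of-sight
# Keplerian speeds for disk annuli, in units of the gravitational radius
# r_g = G*M/c^2. Mass and inclination are invented for illustration.
import math

C = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
m_bh = 1e9 * M_SUN     # assumed supermassive black hole mass
incl = math.radians(30.0)   # assumed disk inclination to the line of sight

r_g = G * m_bh / C**2  # gravitational radius, m

def peak_velocity(radius_rg):
    """Projected Keplerian speed at a radius given in units of r_g."""
    v_kepler = math.sqrt(G * m_bh / (radius_rg * r_g))
    return v_kepler * math.sin(incl)  # line-of-sight component -> red/blue peaks

for r in (200, 1000, 5000):
    print(f"r = {r:5d} r_g -> peaks at +/- {peak_velocity(r) / 1000:.0f} km/s")
```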

Keywords: active galactic nuclei, quasars, emission lines, double-peaked profiles, supermassive black hole

Procedia PDF Downloads 59
663 Comparative Analysis of the Impact of Urbanization on Land Surface Temperature in the United Arab Emirates

Authors: A. O. Abulibdeh

Abstract:

The aim of this study is to investigate and compare the changes in Land Surface Temperature (LST) as a function of urbanization, particularly land use/land cover changes, in three cities in the UAE, namely Abu Dhabi, Dubai, and Al Ain. The scale of this assessment will be at the macro- and micro-levels. At the macro-level, a comparative assessment will take place between the three cities. At the micro-level, the study will compare the effects of different land uses/land covers on the LST. This will provide clear and quantitative city-specific information on the relationship between urbanization and local intra-urban spatial LST variation in the three cities. The main objectives of this study are 1) to investigate the development of LST at the macro- and micro-levels between and within three cities in the UAE over a two-decade period, and 2) to examine the impact of different types of land use/land cover on the spatial distribution of LST. Because these three cities face a harsh arid climate, it is hypothesized that (1) urbanization affects and is connected to the spatial changes in LST; (2) different land uses/land covers have different impacts on the LST; and (3) changes in the spatial configuration of land use and vegetation concentration over time control the urban microclimate on a city scale and the macroclimate on the country scale. This study will be carried out over a 20-year period (1996-2016) and throughout the whole year. The study will compare two distinct periods with different thermal characteristics: the cool/cold period from November to March and the warm/hot period from April to October. The best-practice research method for this topic is to use remote sensing data to target different aspects of natural and anthropogenic system impacts.
The project will follow classical remote sensing and mapping techniques to investigate the impact of urbanization, mainly changes in land use/land cover, on LST. The investigation will be performed in two stages. In stage one, remote sensing data will be used to investigate the impact of urbanization on LST at the macroclimate level, where the LST and Urban Heat Island (UHI) will be compared in the three cities using data from the past two decades. Stage two will investigate the impact at the microclimate scale by examining the LST and UHI for particular land use/land cover types. In both stages, LST and urban land cover maps will be generated over the study area. The outcome of this study should represent an important contribution to recent urban climate studies, particularly in the UAE. Based on the aim and objectives of this study, the expected outcomes are as follows: i) to determine the increase or decrease of LST as a result of urbanization in these three cities, and ii) to determine the effect of different land uses/land covers on increasing or decreasing the LST.
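One classical building block of LST mapping from satellite thermal imagery is the conversion of top-of-atmosphere radiance to at-sensor brightness temperature via the inverse Planck relation. A minimal sketch using the published Landsat 8 TIRS Band 10 calibration constants; the study's actual sensor is not specified, and the emissivity and atmospheric corrections needed for true LST are omitted here:

```python
# Radiance -> at-sensor brightness temperature, one step of an LST workflow.
# K1/K2 are the published Landsat 8 TIRS Band 10 constants; the example
# radiance value is illustrative.
import math

K1 = 774.8853   # W/(m^2 sr um), Landsat 8 TIRS Band 10
K2 = 1321.0789  # Kelvin, Landsat 8 TIRS Band 10

def brightness_temperature(radiance):
    """At-sensor brightness temperature in Kelvin from TOA spectral radiance."""
    return K2 / math.log(K1 / radiance + 1.0)

t_k = brightness_temperature(10.5)   # a plausible desert-surface radiance
print(f"brightness temperature: {t_k:.1f} K ({t_k - 273.15:.1f} C)")
```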

Keywords: land use/land cover, global warming, land surface temperature, remote sensing

Procedia PDF Downloads 238
662 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). For each method, different parameter settings were analyzed so that the best configuration of each technique could be compared. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter setting, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always indicate the best technique. For instance, on the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. All these details are discussed in light of the results found, and an overview is given in the conclusion of this study.
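The one-against-all evaluation in terms of accuracy, true/false positives, and true/false negatives can be sketched as follows (toy labels, not the bank's data):

```python
# For each class, treat that class as "positive" and everything else as
# "negative", then count the confusion-matrix cells and compute accuracy.

def one_vs_all_metrics(y_true, y_pred, positive_class):
    tp = sum(1 for t, p in zip(y_true, y_pred)
             if t == positive_class and p == positive_class)
    tn = sum(1 for t, p in zip(y_true, y_pred)
             if t != positive_class and p != positive_class)
    fp = sum(1 for t, p in zip(y_true, y_pred)
             if t != positive_class and p == positive_class)
    fn = sum(1 for t, p in zip(y_true, y_pred)
             if t == positive_class and p != positive_class)
    accuracy = (tp + tn) / len(y_true)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn, "accuracy": accuracy}

# Toy example with the paper's three client classes
y_true = ["non-defaulter", "defaulter", "temporary", "non-defaulter", "defaulter"]
y_pred = ["non-defaulter", "defaulter", "non-defaulter", "non-defaulter", "temporary"]

for cls in ("non-defaulter", "defaulter", "temporary"):
    print(cls, one_vs_all_metrics(y_true, y_pred, cls))
```

As the abstract notes, the per-class accuracy and the false-positive rate need not rank techniques the same way, which is why all five counts are reported.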

Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines

Procedia PDF Downloads 89
661 Contractual Risk Transfer in Islamic Home Financing: Analysis in Bank Malaysia

Authors: Ahmad Dahlan Salleh, Nik Abdul Rahim Nik Abdul Ghani, Muhamad Firdaus M. Hatta

Abstract:

Risk management has implications for pricing, governance arrangements, business practices, and strategy. Nowadays, home financing contracts increasingly take the form of risk transfer to increase bank profit. This parallels the Islamic jurisprudence maxims al-kharaj bi al-thaman (gain accompanies liability for loss) and al-ghurm bil ghunm (gain is justified with risk), which determine the matching between risk transfer and returns. The Malaysian financing trend is to buy houses. Moreover, there is a lack of transparency in risk transfer issues, as clients are not clearly informed. The terms and conditions of each financing product also do not clearly reflect that the risk has been transferred to the client, justifying the price determination that has been made. The assumption on risk occurrence is also inaccurate, as each risk differs with the type of financing contract. This makes the Islamic Financial Services Act 2013, which provides transparent and consistent standards for Islamic financial institutions, less effective. This study examines how far risk and obligations are incurred by the bank and the client under various Islamic home financing contracts. This research is qualitative, using two methods: document analysis and semi-structured interviews. Document analysis of the literature is used to identify profiles, themes, and risk transfer elements in home financing from the Islamic jurisprudence perspective. This study finds that banks need to create a risk transfer parameter that is consistent with risk transfer theory according to Islamic jurisprudence. The study has the potential to assist authorities in Islamic finance, such as the Central Bank of Malaysia (Bank Negara Malaysia), in regulating the Islamic banking industry so that risk transfer valuation in home financing contracts is based on good practice and determined risk limits.

Keywords: risk transfer, home financing contract, Sharia compliant, Malaysia

Procedia PDF Downloads 406
660 Assessment of Hepatosteatosis Among Diabetic and Nondiabetic Patients Using Biochemical Parameters and Noninvasive Imaging Techniques

Authors: Tugba Sevinc Gamsiz, Emine Koroglu, Ozcan Keskin

Abstract:

Aim: Nonalcoholic fatty liver disease (NAFLD) is considered the most common chronic liver disease in the general population. The higher mortality and morbidity among NAFLD patients and the lack of symptoms make early detection and management important. In our study, we aimed to evaluate the relationship between noninvasive imaging and biochemical markers in diabetic and nondiabetic patients diagnosed with NAFLD. Materials and Methods: The study was conducted from September 2017 to December 2017 on adults admitted to the Internal Medicine and Gastroenterology outpatient clinics with hepatic steatosis reported on ultrasound or transient elastography within the last six months, excluding patients with other liver diseases or alcohol abuse. The data were collected and analyzed retrospectively. The Number Cruncher Statistical System (NCSS) 2007 program was used for statistical analysis. Results: 116 patients were included in this study. Diabetic patients had significantly higher Controlled Attenuation Parameter (CAP), Liver Stiffness Measurement (LSM), and fibrosis values than nondiabetics. Hypertension, hepatomegaly, high BMI, hypertriglyceridemia, hyperglycemia, high HbA1c, and hyperuricemia were also found to be risk factors for NAFLD progression to fibrosis. Advanced fibrosis (F3, F4) was present in 18.6% of all patients: 35.8% of diabetic and 5.7% of nondiabetic patients diagnosed with hepatic steatosis. Conclusion: Transient elastography is now used in daily clinical practice as an accurate noninvasive tool during follow-up of patients with fatty liver. Early diagnosis of the stage of liver fibrosis improves the monitoring and management of patients, especially those with metabolic syndrome criteria.

Keywords: diabetes, elastography, fatty liver, fibrosis, metabolic syndrome

Procedia PDF Downloads 133
659 Space Telemetry Anomaly Detection Based On Statistical PCA Algorithm

Authors: Bassem Nassar, Wessam Hussein, Medhat Mokhtar

Abstract:

The crucial concern of satellite operations is to ensure the health and safety of satellites. The worst case in this perspective is probably the loss of a mission, but the more common interruption of satellite functionality can still compromise mission objectives. All data acquired from the spacecraft are known as Telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e., a status or a measurement) to be checked. As a consequence, TM monitoring systems are continuously being improved in order to reduce the time required to respond to changes in a satellite's state of health. A fast assessment of the current state of the satellite is thus very important in order to respond to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle this problem coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To address this problem, this paper presents an unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to data from an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between them. Furthermore, the algorithm provides useful predictive information as well as additional insight and physical interpretation of ADCS operation.
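A minimal sketch of this kind of PCA-based monitoring, assuming a squared-prediction-error (Q statistic) control limit and synthetic telemetry in place of the actual ADCS data:

```python
# Fit principal components on "normal" telemetry, then flag samples whose
# reconstruction error (Q statistic / SPE) exceeds an empirical limit
# derived from the training data. Data and limit choice are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "telemetry": 200 normal samples of 5 correlated channels
base = rng.normal(size=(200, 2))
normal = base @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(200, 5))

mean = normal.mean(axis=0)
X = normal - mean
_, _, vt = np.linalg.svd(X, full_matrices=False)
P = vt[:2].T                       # retain 2 principal components

def spe(samples):
    """Squared prediction error (Q statistic) per sample."""
    Xc = samples - mean
    resid = Xc - Xc @ P @ P.T      # part not explained by the PCA model
    return np.sum(resid**2, axis=1)

threshold = np.percentile(spe(normal), 99)   # simple empirical control limit

faulty = normal[:10] + rng.normal(scale=2.0, size=(10, 5))  # injected fault
print("faulty samples flagged:", int(np.sum(spe(faulty) > threshold)), "of 10")
```

Real monitoring systems typically pair the Q statistic with Hotelling's T², but the reconstruction-error idea above is the core of the fault/normal separation the abstract describes.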

Keywords: space telemetry monitoring, multivariate analysis, PCA algorithm, space operations

Procedia PDF Downloads 403
658 The Use of Respiratory Index of Severity in Children (RISC) for Predicting Clinical Outcomes for 3 Months-59 Months Old Patients Hospitalized with Community-Acquired Pneumonia in Visayas Community Medical Center, Cebu City from January 2013 - June 2

Authors: Karl Owen L. Suan, Juliet Marie S. Lambayan, Floramay P. Salo-Curato

Abstract:

Objective: To predict the outcome among patients aged 3 months to 59 months old admitted with community-acquired pneumonia to Visayas Community Medical Center using the Respiratory Index of Severity in Children (RISC). Design: A cross-sectional study design was used. Setting: The study was done in Visayas Community Medical Center, a private tertiary-level hospital in Cebu City, from January to June 2013. Patients/Participants: A total of 72 patients were initially enrolled in the study. However, 1 patient transferred to another institution; thus, 71 patients were included in this study. Within 24 hours of admission, patients were assigned a RISC score. Statistical Analysis: Cohen's kappa coefficient was used for inter-rater agreement on categorical data. This study used frequency and percentage distributions for qualitative data. Mean, standard deviation, and range were used for quantitative data. To determine the relationship of each RISC score parameter and the total RISC score with the outcome, the Mann-Whitney U test and the 2x2 Fisher exact test for associations were used. A p value of less than 0.05 was considered significant. Results: There was a statistically significant association between RISC score and clinical outcome. A RISC score greater than 4 was correlated with intubation and/or mortality. Conclusion: The RISC scoring system is a simple combination of clinical parameters and a reliable tool that helps stratify patients aged 3 months to 59 months in predicting clinical outcome.
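The Mann-Whitney U statistic used in the analysis can be computed from rank sums; a small pure-Python sketch with made-up RISC scores, not the study's records:

```python
# Mann-Whitney U via rank sums, with midranks for tied values.

def mann_whitney_u(group_a, group_b):
    """Smaller of the two U statistics for two independent samples."""
    combined = sorted((v, g) for g, vals in (("a", group_a), ("b", group_b))
                      for v in vals)
    values = [v for v, _ in combined]
    rank_sum_a = 0.0
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        midrank = (i + 1 + j) / 2.0     # average of ranks i+1 .. j for ties
        for k in range(i, j):
            if combined[k][1] == "a":
                rank_sum_a += midrank
        i = j
    n_a, n_b = len(group_a), len(group_b)
    u_a = rank_sum_a - n_a * (n_a + 1) / 2.0
    return min(u_a, n_a * n_b - u_a)

# Toy RISC scores: patients with poor outcome vs. good outcome
poor = [5, 6, 4, 7, 5]
good = [1, 2, 0, 3, 2, 1]
print("U =", mann_whitney_u(poor, good))   # 0 here: complete separation
```

A p value would then come from the exact or normal-approximation null distribution of U, which a statistics package (as NCSS did for the authors) handles.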

Keywords: RISC, clinical outcome, community-acquired pneumonia, patients

Procedia PDF Downloads 284
657 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression

Authors: Wu Peng, Anders Liljerehn, Martin Magnevall

Abstract:

In metal cutting, tool wear gradually changes the micro geometry of the cutting edge. Today there is a significant gap in understanding the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and cutting tool lead to improved accuracy in simulation of the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model accounting for the effects that rake angle variations and tool flank wear have on the cutting forces. The starting point is cutting force measurements from orthogonal turning tests on pre-machined flanges with well-defined width, using triangular coated inserts to ensure orthogonal conditions. The cutting forces were measured by a dynamometer for a set of three different rake angles, and wear progression was monitored during machining by an optical measuring collaborative robot. The method uses the measured cutting forces together with the inserts' flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools and shows significant capability in predicting cutting forces while accounting for tool flank wear and inserts with different rake angles. The results of this study suggest that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
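The classical Kienzle relation gives the tangential cutting force as F_c = k_c1.1 · b · h^(1−m_c). One simple way to extend it with flank wear is an additive ploughing term proportional to the wear land; the coefficients below are illustrative assumptions, not the paper's calibrated model:

```python
# Kienzle-type cutting force with an assumed additive flank-wear term.

def cutting_force(b_mm, h_mm, vb_mm=0.0,
                  kc11=1900.0,   # N/mm^2, specific cutting force (assumed, steel-like)
                  mc=0.25,       # Kienzle exponent (assumed)
                  k_wear=800.0): # N/mm^2 per mm of flank wear land (assumed)
    """Tangential cutting force [N] for chip width b, thickness h, flank wear VB."""
    f_sharp = kc11 * b_mm * h_mm ** (1.0 - mc)   # classical Kienzle term
    f_wear = k_wear * vb_mm * b_mm               # extra ploughing on the wear land
    return f_sharp + f_wear

f_new = cutting_force(b_mm=2.0, h_mm=0.2)
f_worn = cutting_force(b_mm=2.0, h_mm=0.2, vb_mm=0.3)
print(f"sharp insert: {f_new:.0f} N, worn insert (VB=0.3 mm): {f_worn:.0f} N")
```

The paper's model additionally makes the coefficients functions of rake angle and calibrates them against the dynamometer data; the sketch only shows the structural idea of wear as an input parameter.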

Keywords: cutting force, Kienzle model, predictive model, tool flank wear

Procedia PDF Downloads 96
656 Prevalence of Cyp2d6 and Its Implications for Personalized Medicine in Saudi Arabs

Authors: Hamsa T. Tayeb, Mohammad A. Arafah, Dana M. Bakheet, Duaa M. Khalaf, Agnieszka Tarnoska, Nduna Dzimiri

Abstract:

Background: CYP2D6 is a member of the cytochrome P450 mixed-function oxidase system. The enzyme is responsible for the metabolism and elimination of approximately 25% of clinically used drugs, especially in breast cancer and psychiatric therapy. Different phenotypes have been described, displaying alleles that lead to a complete loss of enzyme activity or reduced function (poor metabolizers, PM) or to hyperfunctionality (ultrarapid metabolizers, UM), and therefore to drug intoxication or loss of drug effect. The prevalence of these variants may vary among different ethnic groups. Furthermore, the xTAG system has been developed to categorize patients into different groups based on their CYP2D6 substrate metabolization. Aim of the study: To determine the prevalence of the different CYP2D6 variants in our population and to evaluate their clinical relevance in personalized medicine. Methodology: We used the Luminex xMAP genotyping system to genotype 305 Saudi individuals visiting the Blood Bank of our institution and to determine which polymorphisms of the CYP2D6 gene are prevalent in our region. Results: xTAG genotyping showed that 36.72% (112 out of 305 individuals) carried CYP2D6_*2. Of these, 19 individuals (6.23% of the 305) had multiple copies of *2, resulting in a UM phenotype. About 33.44% carried CYP2D6_*41, which leads to decreased activity of the CYP2D6 enzyme. 19.67% had the wild-type alleles and thus normal enzyme function. Furthermore, 15.74% carried CYP2D6_*4, the most common nonfunctional form of the CYP2D6 enzyme worldwide. 6.56% carried CYP2D6_*17, resulting in decreased enzyme activity. Approximately 5.73% carried CYP2D6_*10, likewise decreasing enzyme activity and resulting in a PM phenotype. 2.30% carried CYP2D6_*29, leading to decreased metabolic activity of the enzyme, and 2.30% carried CYP2D6_*35, resulting in a UM phenotype. 1.64% had a whole-gene deletion, CYP2D6_*5, resulting in the loss of CYP2D6 enzyme production, and 0.66% carried the CYP2D6_*6 variant. One individual carried CYP2D6_*3(B), which produces an inactive form of the enzyme and results in a PM phenotype. Finally, one individual carried CYP2D6_*9, which decreases enzyme activity. Conclusions: Our study demonstrates that different CYP2D6 variants are highly prevalent in ethnic Saudi Arabs. This finding sets a basis for informed genotyping for these variants in personalized medicine. The study also suggests that xTAG is an appropriate procedure for genotyping CYP2D6 variants in personalized medicine.
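Translating genotyped diplotypes into predicted metabolizer phenotypes is commonly done with allele activity scores. A simplified sketch in the spirit of the CPIC-style convention; the score values and cutoffs are illustrative, not the xTAG system's internal logic:

```python
# Diplotype -> predicted CYP2D6 metabolizer class via activity scores.
# Allele activities and cutoffs are a simplified, illustrative convention.

ALLELE_ACTIVITY = {
    "*1": 1.0, "*2": 1.0, "*35": 1.0,                  # normal function
    "*10": 0.25, "*17": 0.5, "*29": 0.5, "*41": 0.5,   # decreased function
    "*3": 0.0, "*4": 0.0, "*5": 0.0, "*6": 0.0,        # no function
}

def phenotype(allele1, allele2, copies=2):
    """Predicted metabolizer class; copies > 2 models gene duplication."""
    score = ALLELE_ACTIVITY[allele1] + ALLELE_ACTIVITY[allele2]
    if copies > 2:                      # duplication multiplies allele activity
        score += (copies - 2) * ALLELE_ACTIVITY[allele1]
    if score == 0:
        return "poor"
    if score < 1.25:
        return "intermediate"
    if score <= 2.25:
        return "normal"
    return "ultrarapid"

print(phenotype("*4", "*4"))            # two no-function alleles
print(phenotype("*2", "*2", copies=3))  # *2 duplication, as seen in the cohort
```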

Keywords: CYP2D6, hormonal breast cancer, pharmacogenetics, polymorphism, psychiatric treatment, Saudi population

Procedia PDF Downloads 562
655 Oxidative Stability of Methyl and Ethyl Microalgae Biodiesel with Synthetic Antioxidants

Authors: Willian L. G. Silva, Fabio R. M. Batista, Matthieu Tubino

Abstract:

Microalgae can be considered a potential source of oil for biodiesel synthesis, since these microorganisms grow rapidly in either fresh or salty water and do not compete with food production. Several conditions in Brazil favor this type of culture due to the country's great amount of water. Another very positive aspect of this type of culture is its ability to fix atmospheric CO2, contributing to the reduction of greenhouse gases and their effects on global warming. Despite its environmental advantages, biodiesel degrades over time, resulting in changes in its physical and chemical properties. In this work, the oxidative stability of methyl and ethyl microalgae biodiesel was studied in the absence and presence of synthetic antioxidants. The synthetic antioxidants used were propyl gallate (PG) and tert-butylhydroquinone (TBHQ), at a concentration of 0.12% (w/w). The biodiesel mixture was kept in a sealed glass flask, sheltered from light, at room temperature (about 25 ºC) for 180 days. During this period, aliquots of the biodiesel were subjected to induced degradation by the Rancimat method, which determines an important quality parameter required by current standards and is used to monitor the degradation processes that occur in biodiesel over time. The induction period (IP) expresses the biodiesel's oxidative stability; the minimum accepted IP value for biodiesel is established as 8 hours. The results show that the ethylic biodiesel increased its IP value from 7.6 hours to 31 hours with PG and to 67 hours with TBHQ, exceeding the minimum accepted IP value. When the antioxidants were added to the methylic biodiesel samples, the IP rose to 28 hours with PG and to 62 hours with TBHQ. These values were maintained throughout the entire period of study (180 days). On the other hand, the biodiesel samples without additives maintained an IP above the allowed value for only 30 days. Therefore, in order to preserve microalgae biodiesel for longer periods of time, it is necessary to add antioxidants to both derivatives, i.e., the ethylic and the methylic.

Keywords: biodiesel, microalgae, oxidative stability, storage, synthetic antioxidants

Procedia PDF Downloads 449
654 Estimates of Freshwater Content from ICESat-2 Derived Dynamic Ocean Topography

Authors: Adan Valdez, Shawn Gallaher, James Morison, Jordan Aragon

Abstract:

Global climate change has impacted atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff. The total climatological freshwater content is typically defined as water fresher than a salinity of 34.8. The near-isothermal nature of Arctic seawater and nonlinearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and remotely sensed dynamic ocean topography (DOT). In-situ measurements of freshwater content are useful in providing information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time consuming. Dynamic ocean topography derived from NASA's Advanced Topographic Laser Altimeter System (ATLAS) and freshwater content derived from Airborne Expendable CTDs (AXCTDs) are used to develop a linear regression model. In-situ data for the regression model are collected along the 150° West meridian, which typically defines the centerline of the Beaufort Gyre. Two freshwater content models are determined by integrating the freshwater volume between the surface and the isopycnals corresponding to reference salinities of 28.7 and 34.8, which correspond to the winter pycnocline and the total climatological freshwater content, respectively.
Using each model, we determine the strength of the linear relationship between freshwater content and satellite-derived DOT. The results of this modeling study could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea without in-situ methods. Successful employment of the ICESat-2 DOT approximation of freshwater content could substantially reduce reliance on field deployment platforms to characterize physical ocean properties.
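The freshwater content behind both models is the depth integral of (S_ref − S)/S_ref over water fresher than the reference salinity. A discrete sketch with a synthetic profile, not AXCTD data:

```python
# FWC [m] = sum over depth bins of (S_ref - S)/S_ref * dz, counting only
# bins fresher than the reference salinity. Profile values are invented.

S_REF = 34.8   # reference salinity (total climatological freshwater content)

def freshwater_content(depths_m, salinities, s_ref=S_REF):
    """Freshwater content [m] from a discrete salinity profile (surface downward)."""
    fwc = 0.0
    for i in range(len(depths_m) - 1):
        dz = depths_m[i + 1] - depths_m[i]
        s_mid = 0.5 * (salinities[i] + salinities[i + 1])
        if s_mid < s_ref:              # only fresher-than-reference water counts
            fwc += (s_ref - s_mid) / s_ref * dz
    return fwc

# Synthetic Beaufort-Gyre-like profile: fresh surface lens over saltier water
depths = [0, 20, 40, 60, 100, 200, 400]
salts = [27.0, 28.0, 30.0, 32.0, 34.0, 34.8, 34.9]
fwc = freshwater_content(depths, salts)
print(f"FWC = {fwc:.1f} m")
```

Evaluating the same integral with s_ref = 28.7 would give the winter-pycnocline version of the model; the regression then relates these FWC values to the collocated DOT.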

Keywords: ICESat-2, dynamic ocean topography, freshwater content, beaufort gyre

Procedia PDF Downloads 63
653 Identification of Groundwater Potential Zones Using Geographic Information System and Multi-Criteria Decision Analysis: A Case Study in Bagmati River Basin

Authors: Hritik Bhattarai, Vivek Dumre, Ananya Neupane, Poonam Koirala, Anjali Singh

Abstract:

The availability of clean and reliable groundwater is essential for sustaining human and environmental health. Groundwater is a crucial resource that contributes significantly to the total annual water supply. However, over-exploitation has depleted groundwater availability considerably and led to some land subsidence. Determining groundwater potential zones is vital for protecting water quality and managing groundwater systems. Such zones can be delineated with the assistance of Geographic Information System (GIS) techniques. In this study, a standard methodology was proposed to determine groundwater potential using an integration of GIS and Analytic Hierarchy Process (AHP) techniques. Thematic layers were generated for parameters such as geology, slope, soil, temperature, rainfall, drainage density, and lineament density. Identifying and mapping potential groundwater zones nevertheless remains challenging due to the complex and dynamic nature of aquifer systems. A weighted overlay was then performed in ArcGIS, with appropriate ranks assigned to each parameter group. Through data analysis, multi-criteria decision analysis (MCDA) was applied to weigh and prioritize the different parameters based on their relative impact on groundwater potential. Three groundwater potential zones were delineated: low, moderate, and high. Our analysis showed that the central and lower parts of the Bagmati River Basin have the highest potential, i.e., 7.20% of the total area, whereas the northern and eastern parts have lower potential. The identified potential zones can be used to guide future groundwater exploration and management strategies in the region.
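The GIS-AHP workflow can be sketched in two steps: derive parameter weights from a pairwise comparison matrix via its principal eigenvector, then combine ranked layers in a weighted overlay. The pairwise judgments and tiny 2x2 rasters below are invented for illustration, not the study's matrix or data:

```python
# AHP weights via power iteration on a pairwise comparison matrix,
# then a weighted overlay of ranked thematic layers.
import numpy as np

# Pairwise comparisons for (rainfall, slope, lineament density):
# rainfall judged 3x as important as slope, 5x as lineament density, etc.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

w = np.ones(A.shape[0])
for _ in range(50):                 # power iteration -> principal eigenvector
    w = A @ w
    w = w / w.sum()
weights = w                          # AHP weights, summing to 1

# Weighted overlay: each cell holds that layer's rank (1=low .. 3=high)
rainfall = np.array([[3, 2], [1, 3]])
slope = np.array([[2, 2], [1, 3]])
lineament = np.array([[1, 3], [1, 3]])

suitability = (weights[0] * rainfall + weights[1] * slope
               + weights[2] * lineament)
print("weights:", np.round(weights, 3))
print("groundwater potential surface:\n", np.round(suitability, 2))
```

A full application would also compute the consistency ratio of the judgment matrix before accepting the weights, and classify the suitability surface into the low/moderate/high zones reported above.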

Keywords: groundwater, geographic information system, analytic hierarchy processes, multi-criteria decision analysis, Bagmati

Procedia PDF Downloads 86
652 Comparison Study of Capital Protection Risk Management Strategies: Constant Proportion Portfolio Insurance versus Volatility Target Based Investment Strategy with a Guarantee

Authors: Olga Biedova, Victoria Steblovskaya, Kai Wallbaum

Abstract:

In the current capital market environment, investors constantly face the challenge of finding a successful and stable investment mechanism. Highly volatile equity markets and extremely low bond returns create demand for sophisticated yet reliable risk management strategies, as investors look for solutions that efficiently protect their investments. This study compares a classic Constant Proportion Portfolio Insurance (CPPI) strategy to a Volatility Target Portfolio Insurance (VTPI) strategy. VTPI is an extension of the well-known Option-Based Portfolio Insurance (OBPI) to the case where the embedded option is linked not to a pure risky asset, such as the S&P 500, but to a Volatility Target (VolTarget) portfolio. The VolTarget strategy is a recently emerged rule-based dynamic asset allocation mechanism in which the portfolio's volatility is kept under control. As a result, a typical VTPI strategy allows higher participation rates in the market due to reduced embedded option prices. In addition, controlled volatility levels eliminate the volatility spread in option pricing, one of the frequently cited reasons why the OBPI strategy falls behind CPPI. The strategies are compared within the framework of stochastic dominance theory based on numerical simulations, rather than under the restrictive assumption of Black-Scholes-type dynamics of the underlying asset. An extended comparative quantitative analysis of the performance of the above investment strategies in various market scenarios and within a range of input parameter values is presented.
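The CPPI mechanism itself is simple to state: at each rebalancing date, the risky exposure equals a multiplier times the cushion (portfolio value minus the present value of the floor). A bare-bones sketch under assumed parameters, not the paper's simulation setup:

```python
# Discrete-time CPPI: risky allocation = multiplier * cushion, capped at
# total wealth; the floor accrues at the safe rate. Parameters are assumed.
import random

def cppi_path(returns, v0=100.0, floor_frac=0.9, multiplier=4.0, rf=0.0):
    """Evolve a CPPI portfolio over a sequence of risky-asset returns."""
    v = v0
    floor = floor_frac * v0
    for r in returns:
        cushion = max(v - floor, 0.0)
        risky = min(multiplier * cushion, v)   # exposure capped at total wealth
        safe = v - risky
        v = risky * (1.0 + r) + safe * (1.0 + rf)
        floor *= (1.0 + rf)                    # floor accrues at the safe rate
    return v

random.seed(1)
path = [random.gauss(0.0005, 0.01) for _ in range(252)]   # one simulated year
final = cppi_path(path)
print(f"final value: {final:.2f} (floor was 90.00)")
```

With continuous rebalancing the guarantee holds by construction; in discrete time a gap move larger than 1/multiplier between rebalancing dates can breach the floor, which is exactly the gap risk that motivates comparing CPPI against option-based variants such as VTPI.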

Keywords: CPPI, portfolio insurance, stochastic dominance, volatility target

Procedia PDF Downloads 150
651 Analysis of Factors Influencing the Response Time of an Aspirating Gaseous Agent Concentration Detection Method

Authors: Yu Guan, Song Lu, Wei Yuan, Heping Zhang

Abstract:

Gas fire extinguishing system is widely used due to its cleanliness and efficiency, and since its spray will be affected by many factors such as convection and obstacles in jetting region, so in order to evaluate its effectiveness, detecting concentration distribution in the jetting area is indispensable, which is commonly achieved by aspirating concentration detection technique. During the concentration measurement, the response time of detector is a very important parameter, especially for those fire-extinguishing systems with rapid gas dispersion. Long response time will not only underestimate its concentration but also prolong the change of concentration with time. Therefore it is necessary to analyze the factors influencing the response time. In the paper, an aspirating concentration detection method was introduced, which is achieved by using a small critical nozzle and a laminar flowmeter, and because of the response time is mainly related to the gas transport process from sampling site to the sensor, the effects of exhaust pipe size, gas flow rate, and gas concentration on its response time were analyzed. During the research, Bromotrifluoromethane (CBrF₃) was used. The effect of the sampling tube was investigated with different length of 1, 2, 3, 4 and 5 m (5mm in pipe diameter) and different pipe diameter of 3, 4, 5, 6 and 8 mm (3m in length). The effect of gas flow rate was analyzed by changing the throat diameter of the critical nozzle with 0.5, 0.682, 0.75, 0.8, 0.84 and 0.88 mm. The effect of gas concentration on response time was studied with the concentration range of 0-25%. The result showed that the response time increased with the increase of both the length and diameter of the sampling pipe, and the effect of length on response time was linear, but for the effect of diameter, it was exponential. 
It was also found that the response time dropped sharply as the throat diameter of the critical nozzle increased; in other words, the gas flow rate has a strong influence on the response time. As for gas concentration, the response time increased with increasing CBrF₃ concentration, with the slope of the curve gradually decreasing.
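
As a rough illustration (not part of the paper), the dominant transport delay described above can be sketched with a first-order plug-flow model: the delay is the pipe volume divided by the volumetric flow rate. This reproduces the reported linear dependence on pipe length; the measured exponential dependence on diameter suggests additional mixing effects that a plug-flow model does not capture. The numbers below are illustrative, not the paper's data.

```python
import math

def transport_delay(length_m: float, diameter_m: float, flow_rate_m3s: float) -> float:
    """First-order plug-flow estimate of the sampling delay:
    delay = pipe volume / volumetric flow rate."""
    area = math.pi * diameter_m ** 2 / 4.0  # pipe cross-section, m^2
    return area * length_m / flow_rate_m3s  # seconds

# Example: 3 m of 5 mm tubing at roughly 1 L/min (1.667e-5 m^3/s)
delay = transport_delay(3.0, 0.005, 1.667e-5)  # about 3.5 s
```

Doubling the pipe length doubles the estimated delay, consistent with the linear trend reported in the abstract; a larger nozzle throat raises the flow rate and shortens the delay.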

Keywords: aspirating concentration detection, fire extinguishing, gaseous agent, response time

Procedia PDF Downloads 254
650 Effect of Correlation of Random Variables on Structural Reliability Index

Authors: Agnieszka Dudzik

Abstract:

The problem of correlation between random variables in structural reliability analysis has been discussed extensively in the literature. The cases considered usually involve correlation between random variables on one side of the ultimate limit state: correlation between particular loads applied to a structure, or correlation between the resistances of particular members of a structure treated as a system. It has been shown that positive correlation between such random variables reduces the reliability of the structure and increases the probability of failure. In this paper, correlation between random variables on both sides of the limit state equation is considered. The simplest case, in which these random variables are normally distributed, is addressed, with the degree of correlation described by the covariance or the coefficient of correlation. Special attention is paid to two questions: how much does the correlation change the reliability level, and can it be ignored? The reliability analysis uses well-known methods for assessing the probability of failure: the Hasofer-Lind reliability index and the Monte Carlo method adapted to the problem of correlation. The main purpose of this work is to show how correlation between random variables influences the reliability index of steel bar structures. Structural design parameters are defined as deterministic values and as random variables, the latter being correlated. The criterion of structural failure is expressed by limit functions related to the ultimate and serviceability limit states. Only the normal distribution is used to describe the random variables. The sensitivity of the reliability index to the random variables is determined.
If the sensitivity of the reliability index to a random variable X is low compared with the other variables, the impact of that variable on the probability of failure is small, and in subsequent computations it can be treated as a deterministic parameter. Sensitivity analysis thus simplifies the description of the mathematical model and leads to new limit functions and values of the Hasofer-Lind reliability index. The NUMPRESS software is used in the reliability analysis examples.
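
For a linear limit state g = R - E with jointly normal resistance R and load effect E, both of the methods named above can be sketched in a few lines. The following is a minimal illustration with made-up numbers, not the paper's NUMPRESS models: the Hasofer-Lind index follows in closed form from the variance of g, and the Monte Carlo estimate induces the correlation through a Cholesky factor of the covariance matrix.

```python
import math
import numpy as np

def beta_linear(mu_r, sig_r, mu_e, sig_e, rho):
    """Hasofer-Lind index for g = R - E with jointly normal R and E
    and correlation coefficient rho between them."""
    var_g = sig_r ** 2 + sig_e ** 2 - 2.0 * rho * sig_r * sig_e
    return (mu_r - mu_e) / math.sqrt(var_g)

def pf_monte_carlo(mu_r, sig_r, mu_e, sig_e, rho, n=200_000, seed=0):
    """Monte Carlo failure probability P(g < 0); the correlation is
    induced by multiplying standard normals by a Cholesky factor."""
    rng = np.random.default_rng(seed)
    cov = np.array([[sig_r ** 2, rho * sig_r * sig_e],
                    [rho * sig_r * sig_e, sig_e ** 2]])
    chol = np.linalg.cholesky(cov)
    samples = rng.standard_normal((n, 2)) @ chol.T + np.array([mu_r, mu_e])
    return float(np.mean(samples[:, 0] - samples[:, 1] < 0.0))

# Illustrative numbers: mean resistance 300, mean load effect 200
beta_uncorr = beta_linear(300.0, 30.0, 200.0, 25.0, 0.0)   # about 2.56
beta_corr = beta_linear(300.0, 30.0, 200.0, 25.0, 0.5)     # about 3.59
pf_exact = 0.5 * (1.0 + math.erf(-beta_uncorr / math.sqrt(2.0)))
```

Note that in this sketch positive correlation *across* the limit state (between R and E) reduces the variance of g and raises the index, whereas, as the abstract recalls, positive correlation among variables on the *same* side (load-load or resistance-resistance) reduces reliability; the cross-side case is exactly what makes the question "can the correlation be ignored" nontrivial.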

Keywords: correlation of random variables, reliability index, sensitivity of reliability index, steel structure

Procedia PDF Downloads 222
649 Cross-Dipole Right-Hand Circularly Polarized UHF/VHF Yagi-Uda Antenna for Satellite Applications

Authors: Shativel S., Chandana B. R., Kavya B. C., Obli B. Vikram, Suganthi J., Nagendra Rao G.

Abstract:

Satellite communication plays a pivotal role in modern global communication networks, serving as a vital link between terrestrial infrastructure and remote regions. The demand for reliable satellite reception systems, especially in the UHF (Ultra High Frequency) and VHF (Very High Frequency) bands, has grown significantly over the years. This paper presents the design and optimization of a high-gain, dual-band crossed Yagi-Uda antenna in CST Studio Suite, tailored specifically for satellite reception. The proposed antenna system adopts a circularly polarized (Right-Hand Circular Polarization, RHCP) design to mitigate losses from Faraday rotation. To achieve high gain with fewer elements, the antenna is constructed from 6x2 elements arranged as crossed dipoles supported on a boom, achieving gains of 10.67 dBi at 146 MHz and 9.28 dBi at 437.5 MHz. The design process includes parameter optimization and fine-tuning of the Yagi-Uda array's elements, such as the lengths and spacing of the directors and reflectors, to achieve high gain and the desired radiation patterns. Furthermore, the optimization accounts for the requirements of the UHF and VHF frequency bands, ensuring broad frequency coverage for satellite reception. The results of this research are anticipated to contribute significantly to the advancement of satellite reception systems, enhancing their ability to reliably connect remote and underserved areas to the global communication network. Through innovative antenna design and simulation techniques, this study seeks to provide a foundation for the development of next-generation satellite communication infrastructure.
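
As background to the RHCP design (a generic illustration, not taken from the paper's CST models): a crossed dipole radiates circular polarization when its two orthogonal arms are fed with equal amplitude and a 90° phase offset, and the quality of the polarization is usually quantified by the axial ratio, computed from the right- and left-hand circular components of the field. The sign convention for handedness below is one common choice and may differ from CST's.

```python
import numpy as np

def axial_ratio_db(e_x: complex, e_y: complex) -> float:
    """Axial ratio (dB) of the polarization ellipse formed by two
    orthogonal field phasors e_x, e_y.

    0 dB means perfect circular polarization; the ratio grows as the
    polarization becomes elliptical (infinite for pure linear)."""
    e_r = (e_x - 1j * e_y) / np.sqrt(2)  # right-hand circular component
    e_l = (e_x + 1j * e_y) / np.sqrt(2)  # left-hand circular component
    ar = (abs(e_r) + abs(e_l)) / abs(abs(e_r) - abs(e_l))
    return 20.0 * np.log10(ar)

# Ideal crossed-dipole feed: equal amplitude, 90 deg offset -> 0 dB
ar_ideal = axial_ratio_db(1.0, np.exp(1j * np.pi / 2))

# 20% amplitude imbalance between the arms degrades the axial ratio
ar_imbalanced = axial_ratio_db(1.0, 0.8j)  # roughly 1.9 dB
```

In practice the element fine-tuning described in the abstract serves exactly this purpose: keeping the two arms balanced in amplitude and quadrature in phase across both bands so the axial ratio stays low toward the satellite.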

Keywords: Yagi-Uda antenna, RHCP, gain, UHF antenna, VHF antenna, CST, radiation pattern

Procedia PDF Downloads 47