Search results for: spinless curves
665 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption
Authors: M. François, L. Sigot, C. Vallières
Abstract:
Volatile Organic Compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings' air, especially due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from the air. Metal-Organic Frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are interesting thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can be attractive sites for adsorption. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were therefore conducted in a 1 cm diameter glass column packed with a 2 cm MOF bed height. The MOF was sieved to 630 µm - 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). Acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). Breakthrough curves should allow understanding of the interactions between the MOF and the pollutant, as well as the impact of the HKUST-1 humidity on the adsorption process. Consequently, different MOF water content conditions were tested, from a dried material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The rough material – without any pretreatment – containing 30% water serves as a reference.
First, conclusions can be drawn from the comparison of the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shape of the breakthrough curves is significantly different. Saturation of the rough material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time, defined for C/Co = 10%, appears earlier in the case of the rough material (0.75 h) compared to the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the breakthrough at 10%: an abrupt increase of the outlet concentration is observed for the material with the lower humidity, in comparison to a smooth increase for the rough material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what can explain the shape of the breakthrough curves associated with the pretreatments of HKUST-1 and which mechanisms take place in the adsorption process between the MOF, the pollutant, and the water.
Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence
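The 10% breakthrough time described above can be sketched numerically. This is an illustrative reading of a C/Co-versus-time series by linear interpolation; the helper function and the sample data are hypothetical, not the study's measurements.

```python
# Hypothetical sketch: estimating the 10 % breakthrough time from a
# measured C/Co-vs-time series by linear interpolation between the two
# samples that bracket the threshold. Data below are illustrative only.

def breakthrough_time(times_h, ratios, threshold=0.10):
    """Return the time at which C/Co first crosses `threshold`."""
    pairs = list(zip(times_h, ratios))
    for (t0, r0), (t1, r1) in zip(pairs, pairs[1:]):
        if r0 < threshold <= r1:
            # linear interpolation between the bracketing samples
            return t0 + (threshold - r0) * (t1 - t0) / (r1 - r0)
    return None  # threshold never reached

# illustrative outlet data (hours, C/Co)
times = [0.0, 0.5, 1.0, 1.5, 2.0]
ratios = [0.0, 0.02, 0.08, 0.25, 0.60]
print(breakthrough_time(times, ratios))  # crosses 10 % between 1.0 h and 1.5 h
```

The same interpolation applied to the full curves would yield the 0.75 h and 1.4 h breakthrough times quoted above.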
Procedia PDF Downloads 238
664 Precipitation Kinetics of Al-7%Mg Alloy Studied by DSC and XRD
Authors: M. Fatmi, T. Chihi, M. A. Ghebouli, B. Ghebouli
Abstract:
This work presents the experimental results of differential scanning calorimetry (DSC), hardness measurements (Hv), and XRD analysis, carried out in order to investigate the kinetics of precipitation phenomena in an Al-7 wt.% Mg alloy. The XRD and DSC curves indicate the formation of the intermediate precipitate phase β-(Al3Mg2). The activation energies associated with the processes have been determined according to the three models proposed by Kissinger, Ozawa, and Boswell. Consequently, the nucleation mechanism of the precipitates can be explained. These phases are confirmed by XRD analysis.
Keywords: discontinuous precipitation, hardening, Al–Mg alloys, mechanical and mechatronics engineering
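As a hedged illustration of the Kissinger model named above (the Ozawa and Boswell models follow the same linear-fit pattern with different ordinates), the activation energy Ea follows from the slope of ln(β/Tp²) against 1/Tp, where β is the DSC heating rate and Tp the exothermic peak temperature. The (β, Tp) pairs below are synthetic, constructed to correspond to Ea = 120 kJ/mol; they are not measured values from this work.

```python
import math

# Kissinger method sketch: ln(beta / Tp^2) = -Ea/(R*Tp) + const,
# so a straight-line fit in 1/Tp gives Ea = -slope * R.
R = 8.314  # gas constant, J/(mol K)

def kissinger_ea(betas, peaks_K):
    """Activation energy (J/mol) from heating rates and peak temperatures."""
    x = [1.0 / tp for tp in peaks_K]
    y = [math.log(b / tp**2) for b, tp in zip(betas, peaks_K)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return -slope * R

# synthetic data generated from Ea = 120 kJ/mol (illustration only)
Ea_true = 120e3
peaks = [600.0, 610.0, 620.0, 630.0]
betas = [tp**2 * math.exp(-Ea_true / (R * tp)) for tp in peaks]
print(round(kissinger_ea(betas, peaks)))  # recovers ~120000 J/mol
```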
Procedia PDF Downloads 413
663 Design Components and Reliability Aspects of Municipal Waste Water and SEIG Based Micro Hydro Power Plant
Authors: R. K. Saket
Abstract:
This paper presents design aspects and a probabilistic approach for the generation reliability evaluation of an alternative resource: a municipal waste water based micro hydro power generation system. Annual and daily flow duration curves have been obtained for the design, installation, development, scientific analysis, and reliability evaluation of the MHPP. The hydro potential of the waste water flowing through the sewage system of the BHU campus has been determined to produce annual and daily flow duration curves by ordering the recorded water flows from maximum to minimum values. Design pressure, the roughness of the pipe's interior surface, method of joining, weight, ease of installation, accessibility to the sewage system, design life, maintenance, weather conditions, availability of material, related cost, and likelihood of structural damage have been considered in the design of a particular penstock for reliable operation of the MHPP. An MHPGS based on MWW and an SEIG was designed, developed, and practically implemented to provide reliable electric energy to a suitable load in the campus of the Banaras Hindu University, Varanasi (UP), India. Generation reliability evaluation of the developed MHPP using a Gaussian distribution approach, the safety factor concept, peak load consideration, and Simpson's 1/3rd rule is presented in this paper.
Keywords: self excited induction generator, annual and daily flow duration curve, sewage system, municipal waste water, reliability evaluation, Gaussian distribution, Simpson 1/3rd rule
Procedia PDF Downloads 558
662 Influence of High Hydrostatic Pressure (HHP) and Osmotic Dehydration (DO) as a Pretreatment to Hot-Air Drying of Abalone (Haliotis rufescens) Cubes
Authors: Teresa Roco, Mario Perez Won, Roberto Lemus-Mondaca, Sebastian Pizarro
Abstract:
This research presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration (DO) as a pretreatment to hot-air drying of abalone cubes. The drying time was reduced to 6 hours at 60ºC, as compared to 10 hours at the same temperature for abalone dried with only a 15% NaCl osmotic pretreatment at atmospheric pressure. This was due to the salt and HHP saturation: since osmotic pressure increases as water loss increases, a shorter convective drying time is needed, so effective water diffusion plays an important role in this research. Different working conditions, namely pressure (350-550 MPa), pressure time (5-10 min), salt concentration (NaCl 15%), and drying temperature (40-60ºC), were optimized according to the kinetic parameters of each mathematical model (Table 1). The models used for the experimental drying curves were those of Weibull, Logarithmic, and Midilli-Kucuk, the last of which best fitted the experimental data (Figure 1). The values for effective water diffusivity varied from 4.54 to 9.95x10⁻⁹ m²/s for the 8 curves (DO+HHP), whereas the control samples (neither DO nor HHP) varied between 4.35 and 5.60x10⁻⁹ m²/s, for 40 and 60°C, respectively; for drying with the 15% NaCl osmotic pretreatment alone, diffusivity ranged from 3.804 to 4.36x10⁻⁹ m²/s at the same temperatures. Finally, the energy consumption and efficiency values for the drying process (control and pretreated samples) were found to be within ranges of 777-1815 kJ/kg and 8.22-19.20%, respectively.
Therefore, knowledge concerning the drying kinetics as well as the energy consumption, in addition to knowledge about the quality of abalones subjected to an osmotic pretreatment (DO) and high hydrostatic pressure (HHP), is extremely important at an industrial level so that the drying process can be successful at different pretreatment conditions and/or process variables.
Keywords: abalone, convective drying, high hydrostatic pressure, pretreatments, diffusion coefficient
Procedia PDF Downloads 665
661 Comparison of Cervical Length Using Transvaginal Ultrasonography and Bishop Score to Predict Successful Induction
Authors: Lubena Achmad, Herman Kristanto, Julian Dewantiningrum
Abstract:
Background: The Bishop score is a standard method used to predict the success of induction. This examination tends to be subjective, with high inter- and intraobserver variability, so it was presumed to have a low predictive value for the outcome of labor induction. Cervical length measurement using transvaginal ultrasound is considered more objective for assessing the cervical length; moreover, this examination is not a complicated procedure and is less invasive than vaginal touch. Objective: To compare transvaginal ultrasound and the Bishop score in predicting successful induction. Methods: This study was a prospective cohort study. One hundred and twenty women with singleton pregnancies undergoing induction of labor at 37-42 weeks who met the inclusion and exclusion criteria were enrolled in this study. Cervical assessment by both transvaginal ultrasound and Bishop score was conducted prior to induction. The success of labor induction was defined as the ability to achieve the active phase ≤ 12 hours after induction. To determine the best cut-off points for cervical length and Bishop score, receiver operating characteristic (ROC) curves were plotted. Logistic regression analysis was used to determine which factors best predicted induction success. Results: This study showed significant differences in terms of age, premature rupture of the membranes, the Bishop score, cervical length, and funneling as predictors of successful induction. Using ROC curves, it was found that the best cut-off point for prediction of successful induction was 25.45 mm for cervical length and 3 for the Bishop score. Logistic regression showed that only premature rupture of the membranes and cervical length ≤ 25.45 mm significantly predicted the success of labor induction. Excluding premature rupture of the membranes as the indication for induction, a cervical length less than 25.3 mm was a better predictor of successful induction.
Conclusion: Compared to the Bishop score, cervical length measured by transvaginal ultrasound was a better predictor of successful induction.
Keywords: Bishop score, cervical length, induction, successful induction, transvaginal sonography
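One common way to derive a cut-off of the kind reported above (25.45 mm) is the Youden index, J = sensitivity + specificity - 1, maximized over candidate thresholds on the ROC curve. The sketch below uses made-up cervical lengths, not the study's data; note that a shorter cervix predicts success, so the positive test is "length ≤ cutoff".

```python
# Illustrative Youden-index cut-off selection (hypothetical data).
# J = sensitivity + specificity - 1, maximized over candidate cut-offs.

def youden_cutoff(values, outcomes):
    """Return the cut-off (test positive if value <= cutoff) maximizing J."""
    pos = sum(outcomes)
    neg = len(outcomes) - pos
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, o in zip(values, outcomes) if v <= cut and o == 1)
        fp = sum(1 for v, o in zip(values, outcomes) if v <= cut and o == 0)
        j = tp / pos + (1 - fp / neg) - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut

# made-up cervical lengths (mm); 1 = successful induction
lengths = [18, 21, 24, 25, 26, 29, 31, 33, 35, 38]
success = [1,  1,  1,  1,  0,  1,  0,  0,  0,  0]
print(youden_cutoff(lengths, success))  # 25
```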
Procedia PDF Downloads 325
660 Analysis of the Strengthening of a Reinforced Concrete Building Structure
Authors: Nassereddine Attari
Abstract:
Each operation to strengthen or repair requires special consideration and the use of methods, tools, and techniques appropriate to the situation and to the specific problems of each of the constructions. The aim of this paper is to study the seismic pathology of a reinforced concrete building and assess its vulnerability using a non-linear pushover analysis, and to develop capacity curves for a medium-capacity building in order to estimate its damaged condition.
Keywords: pushover analysis, earthquake, damage, strengthening
Procedia PDF Downloads 430
659 Probability Sampling in Matched Case-Control Study in Drug Abuse
Authors: Surya R. Niraula, Devendra B. Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell
Abstract:
Background: Although random sampling is generally considered the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls,” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to it.
Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling
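The bootstrap comparison above follows a generic recipe: resample the data with replacement, recompute the statistic of interest on each replicate, and take the standard deviation of the replicates as its standard error. The sketch below uses a sample mean as a stand-in for the study's conditional-logistic beta coefficients; the data are invented.

```python
import random
import statistics

# Generic bootstrap standard-error sketch (illustrative data; a sample
# mean stands in for the regression coefficients of the study).

def bootstrap_se(data, statistic, n_boot=100, seed=42):
    """Standard error of `statistic` from n_boot resamples with replacement."""
    rng = random.Random(seed)
    replicates = [
        statistic([rng.choice(data) for _ in data])
        for _ in range(n_boot)
    ]
    return statistics.stdev(replicates)

data = [1.2, 0.8, 1.5, 2.1, 0.4, 1.9, 1.1, 0.7, 1.6, 1.3]
se = bootstrap_se(data, statistics.mean)
print(round(se, 3))
```

In the study's setting, the same loop would refit the conditional logistic model on each resampled matched set and collect the beta coefficients.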
Procedia PDF Downloads 493
658 Determination of Temperature Dependent Characteristic Material Properties of Commercial Thermoelectric Modules
Authors: Ahmet Koyuncu, Abdullah Berkan Erdogmus, Orkun Dogu, Sinan Uygur
Abstract:
Thermoelectric modules are integrated into electronic components to keep their temperature at specific values in electronic cooling applications. They can be used at different ambient temperatures. The cold side temperatures of thermoelectric modules depend on their hot side temperatures, operating currents, and heat loads. Performance curves of thermoelectric modules are given for at most two different hot surface temperatures in product catalogs. Characteristic properties are required to select appropriate thermoelectric modules in the thermal design phase of projects. Generally, manufacturers do not provide the characteristic material property values of thermoelectric modules to customers, for confidentiality. Commonly applied commercial software packages like ANSYS Icepak, FloEFD, etc., include thermoelectric modules in their libraries; therefore, they can easily be used to predict the effect of thermoelectric usage in thermal design. Some software requires only the performance values at different temperatures. However, others, like Icepak, require three temperature-dependent equations for the material properties: the Seebeck coefficient (α), electrical resistivity (β), and thermal conductivity (γ). Since the number and variety of thermoelectric modules are limited in this software, definitions of the characteristic material properties of thermoelectric modules may be required. In this manuscript, a method for deriving characteristic material properties from the datasheet of thermoelectric modules is presented. Material characteristics were estimated from two different performance curves, both experimentally and numerically. Numerical calculations were accomplished in Icepak by using a thermoelectric module that exists in the Icepak library. A new experimental setup was established to perform the experimental study. Because the results of the numerical and experimental studies are similar, it can be said that the proposed equations are validated.
This approximation can be suggested for analyses that include different types or brands of TEC modules.
Keywords: electrical resistivity, material characteristics, thermal conductivity, thermoelectric coolers, Seebeck coefficient
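The temperature-dependent property equations such software expects can be produced, for example, by passing a quadratic through property values read off a datasheet at a few temperatures. The sketch below fits α(T) through three (T, Seebeck) points; the numbers are illustrative module-level values, not taken from any manufacturer's catalog, and the same routine would serve for β(T) and γ(T).

```python
# Sketch: exact quadratic a*T^2 + b*T + c through three datasheet points
# (Lagrange form expanded to monomial coefficients). Data are illustrative.

def quadratic_through(p0, p1, p2):
    """Return (a, b, c) of the quadratic passing through three (x, y) points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d0 = (x0 - x1) * (x0 - x2)
    d1 = (x1 - x0) * (x1 - x2)
    d2 = (x2 - x0) * (x2 - x1)
    a = y0 / d0 + y1 / d1 + y2 / d2
    b = -(y0 * (x1 + x2) / d0 + y1 * (x0 + x2) / d1 + y2 * (x0 + x1) / d2)
    c = y0 * x1 * x2 / d0 + y1 * x0 * x2 / d1 + y2 * x0 * x1 / d2
    return a, b, c

# illustrative module Seebeck coefficients (V/K) at 300 K, 325 K, 350 K
a, b, c = quadratic_through((300.0, 4.0e-2), (325.0, 4.4e-2), (350.0, 4.7e-2))
T = 325.0
print(a * T * T + b * T + c)  # reproduces the input point at 325 K
```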
Procedia PDF Downloads 179
657 Fragility Analysis of a Soft First-Story Building in Mexico City
Authors: Rene Jimenez, Sonia E. Ruiz, Miguel A. Orellana
Abstract:
On 09/19/2017, a Mw = 7.1 intraslab earthquake occurred in Mexico, causing the collapse of about 40 buildings. Many of these were 5- or 6-story buildings with a soft first story, so it is desirable to perform a structural fragility analysis of typical structures representative of those buildings and to propose a reliable structural solution. Here, a typical 5-story building constituted by regular R/C moment-resisting frames in the first story and confined masonry walls in the upper levels, similar to the structures that collapsed in the 09/19/2017 Mexico earthquake, is analyzed. Three different structural solutions of the 5-story building are considered: S1) the building is designed in accordance with the Mexico City Building Code-2004; S2) the column dimensions of the first story corresponding to S1 are reduced; and S3) viscous dampers are added at the first story of solution S2. A number of incremental dynamic analyses are performed for each structural solution, using a 3D structural model. The hysteretic behavior model of the masonry was calibrated with experiments performed at the Laboratory of Structures at UNAM. Ten seismic ground motions are used to excite the structures; they correspond to ground motions recorded on intermediate soil of Mexico City with a dominant period around 1 s, where the structures are located. The fragility curves of the buildings are obtained for different values of the maximum inter-story drift demand. Results show that solutions S1 and S3 yield similar probabilities of exceedance of a given value of inter-story drift for the same seismic intensity, and that solution S2 presents a higher probability of exceedance for the same seismic intensity and inter-story drift demand.
Therefore, it is concluded that solution S3 (which corresponds to the building with a soft first story and energy dissipation devices) can be a reliable solution from the structural point of view.
Keywords: demand hazard analysis, fragility curves, incremental dynamic analyses, soft first story, structural capacity
Procedia PDF Downloads 178
656 Numerical Response of Coaxial HPGe Detector for Skull and Knee Measurement
Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman
Abstract:
Radiation workers of reprocessing plants have a potential for internal exposure due to actinides and fission products. Radionuclides like americium, lead, polonium, and europium are bone seekers and get accumulated in the skeletal part. As the major skeletal content is in the skull (13%) and knee (22%), measurements of old intakes have to be carried out in the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, which is one of the prerequisites for the quantification of radionuclides, requires anthropomorphic phantoms. Such phantoms are very limited. Hence, in this study, efficiency curves for the twin HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. In the case of skull measurement, the detector is placed over the forehead, and for knee measurement, one detector is placed over each knee. The efficiency values for radionuclides present in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and 5.22E-04 to 7.07E-04 CPS/photon, respectively, for the energy range 17 to 3000 keV. The efficiency curves for the measurement are established, and it is found that initially the efficiency value increases up to 100 keV and then starts decreasing. The skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV. The reason is the closeness of the detector to the skull compared to the knee. However, at 17.74 keV the efficiency of the knee is higher than that of the skull, due to the greater attenuation caused in the skull bones because of their greater thickness. The Minimum Detectable Activity (MDA) for 241Am present in the skull and knee is 9 Bq. 239Pu has an MDA of 950 Bq and 1270 Bq for the knee and skull, respectively, for a counting time of 1800 s.
This paper discusses the simulation method and the results obtained in the study.
Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull
Procedia PDF Downloads 51
655 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ
Authors: Lalita, Niladri Sarkar, Subhasis Ghosh
Abstract:
Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperature, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the indication of a strongly correlated metallic state, known as a “strange metal”, attributed to non-Fermi-liquid (NFL) physics. The proximity of superconductivity to LITR suggests that there may be an underlying common origin. The LITR has been shown to be due to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as “Planckian dissipation”, a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB, and α are the reduced Planck constant, the Boltzmann constant, and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. There are several striking issues which remain to be resolved if we desire to find out, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) The universality of α ~ 1: recently, some doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should be observed in optimally doped and marginally underdoped cuprates. The link between Planckian dissipation and quantum criticality still remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, with x = 0.16 (optimally doped) and x = 0.145 (marginally doped), have been used for this investigation.
It is realized that steady state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by killing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc to occur at the charge transfer gap. We have observed that the suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this transition, from Cu-3d9 (Cu2+) to Cu-3d10 (Cu+), known as the d9 − d10 L transition; photoexcitation makes some Cu ions in the CuO2 planes spinless non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane in the case of Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation. Superconductivity can be destroyed completely by introducing ≈ 2% of Cu1+ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation in underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we could vary Tc dynamically and reversibly, so that the LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal
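The Planckian bound quoted above, 1/τ = αkBT/ℏ, is easy to evaluate numerically; the short sketch below (an illustration, not part of the study) computes the scattering time at α = 1 using CODATA values of the constants.

```python
# Numerical illustration of the Planckian bound: tau = hbar / (alpha * kB * T).
HBAR = 1.054571817e-34  # reduced Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K

def planckian_tau(T_kelvin, alpha=1.0):
    """Planckian scattering time in seconds at temperature T."""
    return HBAR / (alpha * KB * T_kelvin)

print(planckian_tau(100.0))  # ~7.6e-14 s at 100 K, alpha = 1
```

At 100 K the bound is under a tenth of a picosecond, which is why α of order unity is such a stringent constraint on inelastic scattering.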
Procedia PDF Downloads 60
654 Active Part of the Burnishing Tool Effect on the Physico-Geometric Aspect of the Superficial Layer of 100C6 and 16NC6 Steels
Authors: Tarek Litim, Ouahiba Taamallah
Abstract:
Burnishing is a mechanical surface treatment that combines several beneficial effects on the two steel grades studied. The application of ball or tip burnishing favors a better roughness compared to turning. In addition, it allows the consolidation of the surface layers through work hardening phenomena. The optimal effects are closely related to the treatment parameters and the active part of the device. With a 78% improvement in roughness, burnishing can be defined as a finishing operation in the machining range. With a 44% gain in consolidation rate, this treatment is an effective process for material consolidation. These effects are influenced by several factors; the factors V, f, P, r, and i have the most significant effects on both roughness and hardness. Ball or tip burnishing leads to the consolidation of the surface layers of both the 100C6 and 16NC6 steel grades by work hardening. For each steel grade and its mechanical treatment, the rational tensile curve has been drawn; Ludwik's law is used to better plot the work hardening curve, and a material hardening law is established for both grades. For 100C6 steel, the results show a work hardening coefficient and a consolidation rate of 0.513 and 44%, respectively, compared to the surface layers processed by turning. When 16NC6 steel is processed, the work hardening coefficient is about 0.29. Hardness tests characterize well the burnished depth; the layer affected by work hardening can reach up to 0.4 mm. Simulation of the tests is of great importance to provide details at the local scale of the material. Conventional tensile curves provide a satisfactory indication of the toughness of the 100C6 and 16NC6 materials. A simulation of the tensile curves revealed good agreement between the experimental and simulation results for both steels.
Keywords: 100C6 steel, 16NC6 steel, burnishing, work hardening, roughness, hardness
Procedia PDF Downloads 168
653 Alternative Ways to Measure Impacts of Dam Closure to the Structure of Fish Communities of a Neotropical River
Authors: Ana Carolina Lima, Carlos Sérgio Agostinho, Amadeu M. V. M. Soares, Kieran A. Monaghan
Abstract:
Neotropical freshwaters host some of the most biodiverse ecosystems in the world and are among the most threatened by habitat alterations. The high number of species and the lack of basic ecological knowledge provide a major obstacle to understanding the effects of environmental change. We assessed the impact of dam closure on the fish communities of a neotropical river by applying simple descriptions of community organization: Species Abundance Distribution (SAD) and Abundance Biomass Comparison (ABC) curves. Fish data were collected during three distinct time periods (one year before, one year after, and five years after closure) at eight sites located downstream of the dam, in the reservoir and reservoir transition zone, and upstream of the regulated flow. Dam closure was associated with changes in the structural and functional organization of fish communities at all sites. Species richness tended to increase immediately after dam closure, while evenness decreased. Changes in taxonomic structure were accompanied by a change in the distribution of biomass, with the proportionate contribution of smaller individuals significantly increased relative to larger individuals. Five years on, richness had fallen to below pre-closure levels at all sites, while the comparative stability of the transformed habitats was reflected by biomass-abundance distribution patterns that approximated pre-disturbance ratios. Despite initial generality, the respective sites demonstrated distinct ecological responses that were related to the environmental characteristics of their transformed habitats. This simple analysis provides a sensitive and informative assessment of ecological conditions that highlights the impact on ecosystem processes and ecological networks, and has particular value in regions where the lack of detailed ecological knowledge precludes the application of traditional bioassessment methods.
Keywords: ABC curves, SADs, biodiversity, damming, tropical fish
Procedia PDF Downloads 388
652 Temperature and Admixtures Effects on the Maturity of Normal and Super Fine Ground Granulated Blast Furnace Slag Mortars for the Precast Concrete Industry
Authors: Matthew Cruickshank, Chaaruchandra Korde, Roger P. West, John Reddy
Abstract:
Precast concrete element exports are growing in importance in Ireland's concrete industry, and with the increased global focus on reducing carbon emissions, the industry is exploring more sustainable alternatives, such as using ground granulated blast-furnace slag (GGBS) as a partial replacement for Portland cement. It is well established that GGBS, with its low early age strength development, has limited use in precast manufacturing due to the need for early de-moulding, cutting of pre-stressed strands, and lifting. In this dichotomy, the effects of temperature and admixture are explored to try to achieve the required very early age strength. Testing of the strength of mortars is mandated in the European cement standard, so here, with 50% GGBS and Super Fine GGBS, with three admixture conditions (none, conventional accelerator, novel accelerator) and two early age curing temperature conditions (20°C and 35°C), standard mortar strengths are measured at six ages (16 hours, 1, 2, 3, 7, 28 days). The present paper describes the effort towards developing maturity curves to aid in understanding the effect of these accelerating admixtures and of GGBS fineness on slag cement mortars, allowing prediction of their strength with time and temperature. This study is of particular importance to the precast industry, where concrete temperature can be controlled. For the climatic conditions in Ireland, heating of precast beds for long hours will amount to an additional cost and also contribute to the carbon footprint of the products. When transitioned from mortar to concrete, these maturity curves are expected to play a vital role in predicting the strength of GGBS concrete at a very early age, prior to demoulding.
Keywords: accelerating admixture, early age strength, ground granulated blast-furnace slag, GGBS, maturity, precast concrete
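Maturity curves of the kind described above are commonly built on the Nurse-Saul index, M = Σ(T − T0)·Δt over the curing history. The sketch below assumes a datum temperature T0 = 0 °C (an assumption; −10 °C is also common) and uses illustrative curing histories matching the study's two temperatures, not its measured data.

```python
# Nurse-Saul maturity index sketch: M = sum((T - T0) * dt), clamped at the
# datum temperature T0. Values are illustrative, not the study's data.

def nurse_saul_maturity(temps_C, interval_h, datum_C=0.0):
    """Maturity in degC-hours for temperatures sampled every interval_h hours."""
    return sum(max(t - datum_C, 0.0) * interval_h for t in temps_C)

# 16 h of curing at 20 degC versus 16 h at 35 degC (first de-moulding age)
m20 = nurse_saul_maturity([20.0] * 16, 1.0)
m35 = nurse_saul_maturity([35.0] * 16, 1.0)
print(m20, m35)  # 320.0 vs 560.0 degC-h
```

Plotting measured strength against this index, rather than against age, is what lets one curve cover both curing temperatures.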
Procedia PDF Downloads 157
651 Monotone Rational Trigonometric Interpolation
Authors: Uzma Bashir, Jamaludin Md. Ali
Abstract:
This study is concerned with the visualization of monotone data using a piecewise C1 rational trigonometric interpolating scheme. Four positive shape parameters are incorporated in the structure of the rational trigonometric spline. Conditions on two of these parameters are derived to attain the monotonicity of monotone data, and the other two are left free. Figures are used widely to exhibit that the proposed scheme produces graphically smooth monotone curves.
Keywords: trigonometric splines, monotone data, shape preserving, C1 monotone interpolant
Procedia PDF Downloads 271
650 Influence of Stress Relaxation and Hysteresis Effect for Pressure Garment Design
Authors: Chia-Wen Yeh, Ting-Sheng Lin, Chih-Han Chang
Abstract:
Pressure garments have been used to prevent and treat hypertrophic scars following serious burns since the 1970s. The use of pressure garments is believed to hasten the maturation process and decrease the height of scars. A pressure garment is custom made by reducing the circumferential measurement of the patient by 10%~20%, called the Reduction Factor; however, the exact reduction value used depends on the subjective judgment of the therapist and the feeling of the patient throughout a trial and error process. The Laplace law can be applied to calculate the pressure delivered by the garment from the circumferential measurements of the patients and the tension profile of the fabrics. The tension profile currently obtained neglects the stress relaxation and hysteresis effects within most elastic fabrics. The purpose of this study was to investigate the influence of tension attenuation arising from the stress relaxation and hysteresis effects of the fabrics. Samples of pressure garments were obtained from the Sunshine Foundation Organization, a nonprofit organization for burn patients in Taiwan. The wall tension profiles of the pressure garments were measured on a material testing system. Specimens were extended by 10% of the original length and held for 1 hour for the influence of the stress relaxation effect to take place. Then, specimens were extended by 15% of the original length for 10 seconds and reduced to 10% to simulate the donning movement, for the influence of the hysteresis effect to take place. The load history was recorded. The stress relaxation effect is obvious from the load curves: the wall tension decreased by 8.5%~10% after 60 min of holding. The hysteresis effect is also obvious from the load curves: the wall tension increased slightly, then decreased by 1.5%~2.5%, ending lower than the stress relaxation results after 60 min of holding. Wall tension attenuation of the fabric exists due to the stress relaxation and hysteresis effects.
The influence of hysteresis is greater than that of stress relaxation. These effects should be considered in order to design and evaluate the pressure of pressure garments more accurately.
Keywords: hypertrophic scars, hysteresis, pressure garment, stress relaxation
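The Laplace-law step described in the abstract can be sketched in a few lines. This is an illustrative sketch only: the function name, the cylindrical-limb assumption, and the example tension and circumference values are not taken from the study.

```python
import math

def garment_pressure_mmHg(tension_n_per_m: float, limb_circumference_m: float) -> float:
    """Laplace law for a cylindrical limb: P = T / r, with r = C / (2*pi).

    tension_n_per_m: fabric wall tension per unit garment length (N/m)
    limb_circumference_m: circumference of the limb (m)
    Returns the interface pressure in mmHg (1 mmHg = 133.322 Pa).
    """
    radius = limb_circumference_m / (2 * math.pi)
    pressure_pa = tension_n_per_m / radius  # (N/m) / m = Pa
    return pressure_pa / 133.322

# Illustrative numbers, not measurements from the study:
# 200 N/m wall tension on a limb of 0.30 m circumference
p = garment_pressure_mmHg(200.0, 0.30)
```

A tension attenuation of 8.5%~10%, as reported above, would reduce this pressure proportionally, which is why neglecting relaxation and hysteresis overestimates the delivered pressure.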
Procedia PDF Downloads 512
649 Calculation of Fractal Dimension and Its Relation to Some Morphometric Characteristics of Iranian Landforms
Authors: Mitra Saberi, Saeideh Fakhari, Amir Karam, Ali Ahmadabadi
Abstract:
Geomorphology is the scientific study of the form and shape of the Earth's surface. The existence of different types of landforms and their variation is mainly controlled by changes in the shape and position of land and topography. The interest in and application of fractal concepts in geomorphology stems from the fact that many geomorphic landforms have fractal structures, so their formation and transformation can be explained by mathematical relations. The purpose of this study is to identify and analyze the fractal behavior of the landforms of the macro-geomorphologic regions of Iran, as well as to study and analyze topographic and landform characteristics based on fractal relationships. In this study, using the Iranian digital elevation model and derived layers (slope, deposition coefficients and alluvial fans), the fractal dimensions of the curves were calculated through the box counting method. The morphometric characteristics of the landforms and their fractal dimension were then calculated for four criteria (height, slope, profile curvature and planimetric curvature) and three indices (maximum, average, standard deviation) using ArcMap software separately. After investigating their correlation with the fractal dimension, two-way regression analysis was performed and the relationship between the fractal dimension and the morphometric characteristics of the landforms was investigated. The results show that the fractal dimension at the different pixel sizes of 30, 90 and 200 m for the topographic curves of the different landform units of Iran (mountain, hill, plateau and plain) ranges from 1.06 in alluvial fans to 1.17 in the mountains. Generally, for all pixel sizes, the fractal dimension decreases from mountain to plain.
The fractal dimension has the highest correlation coefficient with the slope criterion and the standard deviation index, and the lowest with the profile curvature and the mean index; as the pixels become larger, the correlation coefficient between the indices and the fractal dimension decreases.
Keywords: box counting method, fractal dimension, geomorphology, Iran, landform
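The box counting method named above reduces to counting occupied grid cells at several box sizes and fitting the slope of log N(s) versus log(1/s). A minimal sketch, assuming points are given as integer grid coordinates (e.g. cells of a contour extracted from a DEM); the function name and box sizes are illustrative.

```python
import math

def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting (fractal) dimension of a set of grid points.

    For each box size s, count the boxes that contain at least one point,
    then fit the slope of log N(s) versus log(1/s) by ordinary least squares.
    """
    pts = list(points)
    xs, ys = [], []
    for s in sizes:
        n_boxes = len({(x // s, y // s) for x, y in pts})
        xs.append(math.log(1.0 / s))
        ys.append(math.log(n_boxes))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# Sanity check: a straight line has dimension 1
d = box_counting_dimension((i, i) for i in range(256))
```

Applied to topographic curves, the same fit yields the 1.06–1.17 range reported above.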
Procedia PDF Downloads 83
648 Effect of Different Thermomechanical Cycles on Microstructure of AISI 4140 Steel
Authors: L.L. Costa, A. M. G. Brito, S. Khan, L. Schaeffer
Abstract:
The microstructure resulting from the forging process is studied as a function of variables such as temperature, deformation, austenite grain size and cooling rate. The purpose of this work is to study the thermomechanical behavior of DIN 42CrMo4 (AISI 4140) steel held at temperatures of 900, 1000, 1100 and 1200 °C for austenization times of 22, 66 and 200 minutes and subsequently forged. Some samples were quenched in water after forging in order to study the austenite grain; to investigate the microstructure, the other forged samples were instead cooled naturally in air. The morphologies and properties, such as hardness, of the materials prepared by these two different routes have been compared. In addition to the forging experiments, numerical simulations using the finite element method (FEM), microhardness profiles and metallographic images are presented. Forging force vs. position curves have been compared with metallographic results for each annealing condition. The microstructural phenomena resulting from hot deformation showed that longer austenization times and higher temperatures decrease the forging force. The complete recrystallization phenomena (static, dynamic and metadynamic) were observed at the highest temperature and longest time, i.e., in the samples austenized for 200 minutes at 1200 °C. However, the highest hardness of the quenched samples was obtained at 900 °C for 66 minutes. The phases observed in the naturally cooled samples were exclusively ferrite and pearlite, although the continuous cooling diagram indicates the presence of austenite and bainite.
The morphology of the phases of the naturally cooled samples shows that the phase arrangement and the prior austenitic grain size explain the high hardness of the samples obtained at 900 °C and 1100 °C for austenization times of 22 and 66 minutes, respectively.
Keywords: austenization time, thermomechanical effects, forging process, steel AISI 4140
Procedia PDF Downloads 145
647 Determination of Mechanical Properties of Adhesives via Digital Image Correlation (DIC) Method
Authors: Murat Demir Aydin, Elanur Celebi
Abstract:
Adhesively bonded joints are used as an alternative to traditional joining methods due to the important advantages they provide. The most important consideration in the use of adhesively bonded joints is that these joints meet the safety requirements for their intended use. To ensure this condition is controlled, damage analysis of adhesively bonded joints should be performed by determining the mechanical properties of the adhesives. In the literature, the mechanical properties of adhesives are generally determined by traditional measurement methods. In this study, the Digital Image Correlation (DIC) method, an alternative to traditional measurement methods, has been used to determine the mechanical properties of adhesives. DIC is an optical measurement method used to determine displacement and strain fields appropriately and accurately. In this study, tensile tests were performed on Thick Adherend Shear Test (TAST) samples formed from DP410 liquid structural adhesive and steel adherends, and on bulk tensile specimens formed from DP410 liquid structural adhesive. The displacement and strain values of the samples were determined by the DIC method, and the shear stress-strain curves of the adhesive for the TAST specimens and the tensile stress-strain curves of the bulk adhesive specimens were obtained. Conventional measurement methods (strain gauges, mechanical extensometers, etc.) are not sufficient for determining the strain and displacement values of a very thin adhesive layer such as in TAST samples, so additional approaches such as numerical methods would otherwise be required. The DIC method removes these requirements and easily achieves displacement measurements with sufficient accuracy.
Keywords: structural adhesive, adhesively bonded joints, digital image correlation, thick adherend shear test (TAST)
Procedia PDF Downloads 322
646 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models
Authors: Ozan Kahraman, Hao Feng
Abstract:
Several models, such as the Weibull, modified Gompertz, biphasic linear, and log-logistic models, have been proposed to describe non-linear inactivation kinetics and have been used to fit non-linear inactivation data of several microorganisms inactivated by heat, high pressure processing or pulsed electric fields. By contrast, most ultrasonic inactivation studies have employed first-order kinetic parameters (D-values and z-values) to describe the reduction in microbial survival counts. This study was conducted to analyze E. coli O157:H7 inactivation data using five microbial survival models (first-order, Weibull, modified Gompertz, biphasic linear and log-logistic) for fitting the inactivation curves. The residual sum of squares and the total sum of squares were used as criteria to evaluate the models. The statistical indices of the kinetic models were used to fit inactivation data for E. coli O157:H7 treated by MTS at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and biphasic models fitted the MTS data best, as shown by high R² values. The modified Gompertz, first-order, and log-logistic models did not provide a better fit to the MTS data than the Weibull and biphasic models. The data in this study did not follow first-order kinetics, possibly because cells sensitive to ultrasound were inactivated first, resulting in a fast initial inactivation period, while those resistant to ultrasound were killed more slowly.
The Weibull and biphasic models were found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
Keywords: Weibull, biphasic, MTS, kinetic models, E. coli O157:H7
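The Weibull survival model named above is commonly written in the Mafart form log10(N/N0) = -(t/δ)^p, which linearizes cleanly for fitting. A minimal sketch assuming that parameterization and synthetic data; the function name and parameter values are illustrative, not from the study.

```python
import math

def fit_weibull(times, log10_survival):
    """Fit the Mafart-Weibull model log10(N/N0) = -(t/delta)^p.

    Linearize as log(-log10 S) = p*log t - p*log delta, then solve by
    ordinary least squares. Returns (delta, p). Requires t > 0 and S < 1.
    """
    xs = [math.log(t) for t in times]
    ys = [math.log(-s) for s in log10_survival]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    p = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    delta = math.exp(-(my - p * mx) / p)
    return delta, p

# Synthetic inactivation data generated with delta = 2.0 min, p = 0.6
times = [0.5, 1, 2, 4, 8, 16]
log_s = [-(t / 2.0) ** 0.6 for t in times]
delta, p = fit_weibull(times, log_s)
```

A shape parameter p < 1, as recovered here, produces exactly the concave-up curve expected when a sensitive subpopulation dies quickly and a resistant one survives longer.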
Procedia PDF Downloads 366
645 Serial Position Curves under Compressively Expanding and Contracting Schedules of Presentation
Authors: Priya Varma, Denis John McKeown
Abstract:
Psychological time, unlike physical time, is believed to be ‘compressive’ in the sense that the mental representations of a series of events may be internally arranged with ever-decreasing inter-event spacing (looking back from the most recently encoded event). If this is true, the record within immediate memory of recent events is severely temporally distorted. Although this notion of temporal distortion of the memory record is captured within some theoretical accounts of human forgetting, notably temporal distinctiveness accounts, the way in which the fundamental nature of the distortion underpins memory and forgetting broadly is barely recognised, or at least rarely directly investigated. Our intention here was to manipulate the spacing of items for recall in order to ‘reverse’ this supposed natural compression within the encoding of the items. In Experiment 1, three schedules of presentation (expanding, contracting and fixed irregular temporal spacing) were created using logarithmic spacing of the words for both free and serial recall conditions. Recall of lists of 7 words showed statistically significant benefits of temporal isolation and, more excitingly, the contracting word series (which may be thought of as reversing the natural compression within the mental representation of the word list) showed the best performance. Experiment 2 tested for effects of active verbal rehearsal in the recall task; this reduced but did not remove the benefits of our temporal scheduling manipulation. Finally, a third experiment used the same design but with Chinese characters as memoranda, in a further attempt to subvert possible verbal maintenance of items. One change to the design here was to introduce a probe item following the sequence of items and to record response times to this probe.
Together the outcomes of the experiments broadly support the notion of temporal compression within immediate memory.
Keywords: memory, serial position curves, temporal isolation, temporal schedules
Procedia PDF Downloads 217
644 Determination of Stress-Strain Curve of Duplex Stainless Steel Welds
Authors: Carolina Payares-Asprino
Abstract:
Dual-phase duplex stainless steel, comprised of ferrite and austenite, has shown high strength and corrosion resistance in many aggressive environments. Joining duplex alloys is challenging due to several embrittling precipitates and metallurgical changes during the welding process. The welding parameters strongly influence the quality of a weld joint. Therefore, it is necessary to quantify the weld bead’s integral properties as a function of the welding parameters, especially when part of the weld bead is removed through a machining process for aesthetic reasons or to couple the elements in the in-service structure. The present study uses existing stress-strain models to predict the stress-strain curves of duplex stainless-steel welds under different welding conditions. Having mathematical expressions that predict the shape of the stress-strain curve is advantageous, since it reduces the experimental work involved in tensile testing. In analysis and design, such stress-strain modeling saves time by being integrated into calculation tools such as finite element program codes. The elastic zone and the plastic zone of the curve can be defined by specific parameters, generating expressions that simulate the curve with great precision. Empirical equations that describe stress-strain curves exist, but they refer only to the stainless steel itself, not to the material after the welding process; this work is therefore a significant contribution to the applications of duplex stainless steel welds. For this study, a 3x3 matrix with low, medium, and high levels for each of the welding parameters was applied, giving a total of 27 weld bead plates. Two tensile specimens were manufactured from each welded plate, resulting in 54 tensile specimens for testing.
When evaluating the four models used to predict the stress-strain curve of the welded specimens, only one model (Rasmussen) showed a good correlation in predicting the stress-strain curve.
Keywords: duplex stainless steels, modeling, stress-strain curve, tensile test, welding
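Rasmussen's full-range model builds on the Ramberg-Osgood expression for the portion of the curve up to the 0.2% proof stress, which can be sketched directly. This is an illustrative sketch of that first stage only; the material constants below are typical textbook-style values, not the fitted parameters of the welds in this study.

```python
def ramberg_osgood_strain(stress, E, sigma02, n):
    """Ramberg-Osgood strain (first stage of Rasmussen's full-range model
    for stainless steel), valid for stresses up to the 0.2% proof stress:

        eps = sigma/E + 0.002 * (sigma/sigma02)**n

    E: Young's modulus, sigma02: 0.2% proof stress, n: strain-hardening exponent.
    """
    return stress / E + 0.002 * (stress / sigma02) ** n

# Illustrative duplex-type values (not from this study's welds):
# E = 200 GPa, sigma0.2 = 550 MPa, n = 8; all stresses in MPa.
eps = ramberg_osgood_strain(550.0, 200000.0, 550.0, 8)
```

At the proof stress the expression returns the elastic strain plus exactly 0.002, which is the defining property of the 0.2% offset.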
Procedia PDF Downloads 167
643 Biases in Numerically Invariant Joint Signatures
Authors: Reza Aghayan
Abstract:
This paper illustrates that numerically invariant joint signatures suffer biases in the resulting signatures. Next, we classify the arising biases as Bias Type 1 and Bias Type 2 and show how they can be removed.
Keywords: Euclidean and affine geometries, differential invariant signature curves, numerically invariant joint signatures, numerical analysis, numerical bias, curve analysis
Procedia PDF Downloads 597
642 Parametric Evaluation for the Optimization of Gastric Emptying Protocols Used in Health Care Institutions
Authors: Yakubu Adamu
Abstract:
The aim of this research was to assess the factors contributing to the need for optimisation of the gastric emptying protocols used in nuclear medicine and molecular imaging (SNMMI) procedures. The objective is to determine whether optimisation is possible and to provide supporting evidence for the current imaging protocols of the gastric emptying examination used in nuclear medicine. The research involved image processing, using ImageJ, of selected patient studies with 30 dynamic series each; the clearance half-time and the retention fraction were calculated for the 60 × 1-minute, 5-minute and 10-minute protocols, as well as for other sampling intervals. The gastric emptying clearance half-times of the study IDs were classified into normal, abnormally fast, and abnormally slow categories. In the normal category, representing 50% of the gastric emptying image IDs processed, the clearance half-time was within the range of 49.5 to 86.6 minutes of the mean counts. In the abnormally fast category, representing 30%, the clearance half-time fell between 21 and 43.3 minutes, and the abnormally slow category, representing 20%, had clearance half-times of around 138.6 minutes. The results indicated that the retention fraction values calculated from the 1-, 5-, and 10-minute sampling curves, like the values measured from the sampling curves of the study IDs, showed a normal retention fraction of <60% and decreased exponentially with increasing time, as evidenced by low retention fraction ratios of <10% after 4 hours. Thus, the category assigned to a study does not change, suggesting that these calculated values could feasibly be used instead of acquiring the full set of images. Findings from the study suggest that the current gastric emptying protocol can be optimized by acquiring fewer images.
The study recommended that gastric emptying studies be performed with imaging at a minimum of 0, 1, 2, and 4 hours after meal ingestion.
Keywords: gastric emptying, retention fraction, clearance halftime, optimisation, protocol
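The clearance half-time and retention fraction quantities used throughout the abstract follow from a monoexponential emptying assumption. A minimal sketch under that assumption with synthetic, decay-corrected counts; the function names and the 60-minute example half-time are illustrative, not patient data from the study.

```python
import math

def clearance_half_time(times_min, counts):
    """Estimate the gastric emptying half-time assuming monoexponential
    clearance, counts(t) = C0 * exp(-lam * t), by least squares on ln(counts)."""
    xs = times_min
    ys = [math.log(c) for c in counts]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    lam = -(sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return math.log(2) / lam

def retention_fraction(t_half_min, t_min):
    """Fraction of the meal retained t_min after ingestion."""
    return math.exp(-math.log(2) * t_min / t_half_min)

# Synthetic counts with a 60-minute half-time (within the normal range above)
times = [0, 30, 60, 90, 120]
counts = [1000 * math.exp(-math.log(2) * t / 60) for t in times]
t_half = clearance_half_time(times, counts)
r4h = retention_fraction(t_half, 240)  # retention at 4 hours
```

For a normal half-time the 4-hour retention comes out well under 10%, consistent with the low ratios reported above.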
Procedia PDF Downloads 6
641 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India
Authors: Zafar Beg, Kumar Gaurav
Abstract:
The lost Saraswati is described as a large perennial river which was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed earlier that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River. It is believed that one of the earliest known ancient civilizations, the 'Indus-Saraswati civilization', prospered along the course of the Saraswati River, and the demise of the Indus civilization is considered to be due to the desiccation of the river. Today, in the Sutlej-Yamuna interfluve, we observe an ephemeral river known as the Ghaggar. It is believed that, along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo discharge and paleo width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati carried the combined flow, or some part of it, of the Yamuna, Sutlej and Ghaggar catchments. We first established regime relationships between drainage area and channel width and between catchment area and discharge for 29 different rivers presently flowing on the Himalayan Foreland, from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when plotted against their corresponding catchment areas.
Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we sum the widths and discharges obtained from the individual catchments to estimate the paleo width and paleo discharge of the Saraswati River. Our regime curves provide a first-order estimate of the paleo discharge of the lost Saraswati.
Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River
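A regime curve of the kind described above is a power law fitted in log-log space, after which contributions from several catchments can be summed. A minimal sketch with made-up coefficients; the exponent, prefactor and catchment areas below are illustrative and not the paper's fitted values.

```python
import math

def fit_power_law(areas, values):
    """Fit value = a * area**b by ordinary least squares in log-log space.
    Returns (a, b). This mirrors a regime relationship between catchment
    area and channel width or discharge."""
    xs = [math.log(A) for A in areas]
    ys = [math.log(v) for v in values]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Illustrative regime data (not the paper's): Q = 0.05 * A**0.8
areas = [1e3, 5e3, 1e4, 5e4, 1e5]
discharges = [0.05 * A ** 0.8 for A in areas]
a, b = fit_power_law(areas, discharges)

# Paleo-discharge as the sum of contributions from three tributary catchments
paleo_q = sum(a * A ** b for A in (3.0e4, 1.0e4, 5.0e3))
```

Note that because b < 1 here, the summed discharge of three catchments exceeds what the regime curve would give for their combined area, so the choice of how to aggregate catchments matters for the first-order estimate.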
Procedia PDF Downloads 179
640 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning
Authors: T. Bryan , V. Kepuska, I. Kostnaic
Abstract:
A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to give higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted by identifying audio data segments that are locally coherent with the Gabor or gammatone seed atoms, found by matching pursuit, and taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from the Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors; this display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the highest denoising capability.
Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit
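The matching pursuit step used above to find locally coherent segments can be illustrated with a minimal greedy decomposition. This is a toy sketch assuming a tiny orthonormal dictionary (the standard basis) rather than actual Gabor or gammatone atoms; all names are illustrative.

```python
def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedy matching pursuit: at each step pick the (unit-norm) atom with
    the largest inner product with the residual and subtract its projection.
    Returns a list of (atom_index, coefficient) and the final residual."""
    residual = list(signal)
    decomposition = []
    for _ in range(n_iter):
        best_i, best_c = max(
            ((i, sum(r * a for r, a in zip(residual, atom)))
             for i, atom in enumerate(dictionary)),
            key=lambda ic: abs(ic[1]))
        decomposition.append((best_i, best_c))
        atom = dictionary[best_i]
        residual = [r - best_c * a for r, a in zip(residual, atom)]
    return decomposition, residual

# Tiny orthonormal dictionary (standard basis of R^4) for illustration
dictionary = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
signal = [0.0, 3.0, 0.0, -1.5]
decomp, res = matching_pursuit(signal, dictionary, n_iter=2)
```

In the method above, the atoms selected this way mark the windowed segments whose envelopes are kept as training samples for the SAE.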
Procedia PDF Downloads 253
639 Microstructure of Virgin and Aged Asphalts by Small-Angle X-Ray Scattering
Authors: Dong Tang, Yongli Zhao
Abstract:
The study of the microstructure of asphalt is of great importance for the analysis of its macroscopic properties. However, the peculiarities of the chemical composition of asphalt itself and the limitations of existing direct imaging techniques have presented researchers with many obstacles in studying its microstructure. The advantage of small-angle X-ray scattering (SAXS) is that it allows quantitative determination of the internal structure of opaque materials and is well suited to analyzing the microstructure of materials. Therefore, the SAXS technique was used to study the evolution of microstructures on the nanoscale during asphalt aging, and the reasons for the change in scattering contrast during aging were explained with the help of Fourier transform infrared spectroscopy (FTIR). The SAXS experimental results show that the SAXS curves of asphalt resemble the scattering curves of objects with two-level structures. The Porod curve for asphalt shows that there is no sharp interface between the micelles and the surrounding medium, only a gradual fluctuation of electron density between the two. Beaucage model fits to the SAXS patterns show that the scattering exponent P of the asphaltene clusters, as well as the size of the micelles, gradually increases with the aging of the asphalt. Furthermore, aggregation exists between the micelles of asphalt and becomes more pronounced with increasing aging. During asphalt aging, the electron density difference between the micelles and the surrounding medium gradually increases, leading to an increase in the scattering contrast of the asphalt. Under long-term aging conditions, owing to the gradual transition from maltenes to asphaltenes, the electron density difference between the micelles and the surrounding medium decreases, resulting in a decrease in the scattering contrast in the asphalt SAXS.
Finally, this paper correlates the macroscopic properties of asphalt with the microstructural parameters; the results show that the high-temperature rutting resistance of asphalt is enhanced and the low-temperature cracking resistance decreases due to the aggregation of micelles and the generation of new micelles. These results are useful for understanding the relationship between changes in microstructure and changes in properties during asphalt aging and provide theoretical guidance for the regeneration of aged asphalt.
Keywords: asphalt, Beaucage model, microstructure, SAXS
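The one-level unified Beaucage model referred to above combines a Guinier term and a power-law term in a single expression, which is straightforward to evaluate. A sketch under that standard one-level form; the parameter values below are arbitrary illustrations, not fits to any asphalt data.

```python
import math

def beaucage_one_level(q, G, Rg, B, P):
    """One-level unified Beaucage scattering intensity:

        I(q) = G*exp(-q^2*Rg^2/3) + B * (erf(q*Rg/sqrt(6))**3 / q)**P

    G: Guinier prefactor, Rg: radius of gyration, B: power-law prefactor,
    P: power-law (Porod-like) exponent.
    """
    guinier = G * math.exp(-(q * Rg) ** 2 / 3.0)
    q_star = math.erf(q * Rg / math.sqrt(6)) ** 3 / q
    return guinier + B * q_star ** P

# Arbitrary illustrative parameters (not fitted to real asphalt):
G, Rg, B, P = 100.0, 5.0, 1e-4, 3.5
low_q = beaucage_one_level(1e-4, G, Rg, B, P)  # Guinier limit: I(q) -> G
```

In a fit, Rg tracks the micelle size and P the cluster scattering exponent, which is how the growth of both with aging is quantified above.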
Procedia PDF Downloads 80
638 Effect of Oil Viscosity and Brine Salinity/Viscosity on Water/Oil Relative Permeability and Residual Saturations
Authors: Sami Aboujafar
Abstract:
Oil recovery in petroleum reservoirs is greatly affected by fluid-rock and fluid-fluid interactions. These interactions directly control rock wettability, capillary pressure and relative permeability curves. Laboratory core-flood and centrifuge experiments were conducted on sandstone and carbonate cores to study the effect of low and high brine salinity and viscosity, and of oil viscosity, on residual saturations and relative permeability. Drainage and imbibition relative permeabilities in a two-phase system were measured; refined lab oils with different viscosities, heavy and light, and several brine salinities were used. Sensitivity analyses with different values for the salinity and viscosity of the fluids, oil and water, were performed to investigate the effect of these properties on water/oil relative permeability, residual oil saturation and oil recovery. Experiments were conducted on core material from viscous/heavy and light oil fields. A history-matching core flood simulator was used to study how the relative permeability curves and end-point saturations were affected by different fluid properties, using several correlations. Results were compared with field data and literature data. The results indicate that there is a correlation between the oil viscosity and/or brine salinity and the residual oil saturation and water relative permeability end point. Increasing oil viscosity reduces Krw@Sor and increases Sor. The remaining oil saturation from laboratory measurements might be too high due to experimental procedures, the capillary end effect and early termination of the experiment, especially when using heavy/viscous oil; similarly, Krw@Sor may be too low. The effect of wettability on the observed results is also discussed. A consistent relationship has been drawn between the fluid parameters, water/oil relative permeability and residual saturations, and a descriptor may be derived to define different flow behaviors.
The results of this work will have application to producing fields, and the methodologies developed could have wider application to sandstone and carbonate reservoirs worldwide.
Keywords: history matching core flood simulator, oil recovery, relative permeability, residual saturations
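The correlations used in history-matching core floods are typically of the Brooks-Corey type, in which the end point Krw@Sor and the residual saturation Sor appear explicitly. A minimal sketch of one such correlation; the saturation values and exponents below are illustrative, not measurements from this work.

```python
def corey_krw(sw, swc, sor, krw_max, nw):
    """Brooks-Corey-type water relative permeability:

        krw = krw_max * Swn**nw,  Swn = (Sw - Swc) / (1 - Swc - Sor)

    swc: connate water saturation, sor: residual oil saturation,
    krw_max: water end point (Krw@Sor), nw: Corey water exponent.
    """
    swn = (sw - swc) / (1.0 - swc - sor)
    return krw_max * swn ** nw

# Endpoint check with illustrative values: at Sw = 1 - Sor the curve
# returns exactly the end point krw_max (= Krw@Sor)
k = corey_krw(sw=0.75, swc=0.20, sor=0.25, krw_max=0.30, nw=2.0)
```

Within this parameterization, the reported trend (higher oil viscosity increases Sor and reduces Krw@Sor) simply shifts both the normalization and the end point of the curve.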
Procedia PDF Downloads 338
637 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model
Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis
Abstract:
Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested against all known transit signals, with 100% confirmation. In addition, it has been successfully applied to the Kepler Objects of Interest (KOI) data and identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry
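The core folding idea behind both BLS and fast folding is that only at the true orbital period do the transits stack up in phase. The brute-force CPU sketch below illustrates that idea on a synthetic light curve; it is not the paper's GPU Fast Folding algorithm, and the depth statistic, trial grid, and transit parameters are all illustrative.

```python
def fold_and_depth(times, flux, period, n_bins=50):
    """Fold a light curve at a trial period, bin by phase, and return the
    depth of the deepest phase bin relative to the median bin level."""
    bins = [[] for _ in range(n_bins)]
    for t, f in zip(times, flux):
        phase = (t % period) / period
        bins[min(int(phase * n_bins), n_bins - 1)].append(f)
    means = sorted(sum(b) / len(b) for b in bins if b)
    return means[len(means) // 2] - means[0]  # median minus deepest bin

def best_period(times, flux, trial_periods):
    """Brute-force period search: the trial period maximizing folded depth."""
    return max(trial_periods, key=lambda p: fold_and_depth(times, flux, p))

# Synthetic light curve: 1% dips lasting 2% of each 10-day orbit,
# sampled every 0.01 day over 1000 days
times = [0.01 * i for i in range(100000)]
flux = [0.99 if (t % 10.0) / 10.0 < 0.02 else 1.0 for t in times]
trials = [8.0, 9.0, 9.5, 10.0, 10.5, 11.0, 12.0]
p = best_period(times, flux, trials)
```

At a wrong trial period the dips smear across phase bins and the depth statistic collapses, which is exactly the signal-concentration effect the GPU implementation exploits at scale.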
Procedia PDF Downloads 225
636 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure
Authors: Esra Zengin, Sinan Akkar
Abstract:
Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectrum is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS) or the Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand along with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and the corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectra, where the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimates, the results are compared with those obtained by the CMS-based scaling methodology.
The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
Keywords: ground motion selection, scaling, uncertainty, fragility curve
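The scaling stage described above, minimizing the misfit between a scaled spectrum and a target, has a closed-form solution when the misfit is measured in log spectral ordinates: the optimal factor is the exponential of the mean log ratio. A minimal single-record sketch of that step; the toy spectral values are illustrative and this is not the paper's full set-assembly algorithm.

```python
import math

def optimal_scale_factor(sa_record, sa_target):
    """Scale factor f minimizing sum_i (ln(f*Sa_rec_i) - ln Sa_tgt_i)^2
    over the period range, i.e. ln f = mean(ln Sa_tgt - ln Sa_rec).
    Because a single multiplicative factor shifts all log ordinates equally,
    the record's spectral shape (and hence its dispersion) is preserved."""
    logs = [math.log(t) - math.log(r) for r, t in zip(sa_record, sa_target)]
    return math.exp(sum(logs) / len(logs))

# Toy spectral accelerations at a few periods (g units, illustrative only)
target = [0.80, 0.60, 0.40, 0.25, 0.15]
record = [0.40, 0.30, 0.20, 0.125, 0.075]  # exactly half the target here
f = optimal_scale_factor(record, target)
```

Working in log space is what lets the procedure match the target median while leaving the record-to-record dispersion of the set untouched.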
Procedia PDF Downloads 583