Search results for: analytical exposition


100 The Pore–Scale Darcy–Brinkman–Stokes Model for the Description of Advection–Diffusion–Precipitation Using Level Set Method

Authors: Jiahui You, Kyung Jae Lee

Abstract:

Hydraulic fracturing fluid (HFF) is widely used in shale reservoir production. HFF contains diverse chemical additives, which result in the dissolution and precipitation of minerals through multiple chemical reactions. In this study, a new pore-scale Darcy–Brinkman–Stokes (DBS) model coupled with the Level Set Method (LSM) is developed to address the microscopic phenomena occurring during the iron–HFF interaction, by numerically describing mass transport, chemical reactions, and pore structure evolution. The new model is developed based on OpenFOAM, an open-source platform for computational fluid dynamics. Here, the DBS momentum equation is used to solve for velocity by accounting for the fluid-solid mass transfer; an advection-diffusion equation is used to compute the distribution of injected HFF and iron. The reaction-induced pore evolution is captured by applying the LSM, where the solid-liquid interface is updated by solving the level set distance function and reinitialized to a signed distance function. A smoothed Heaviside function then gives a smoothed solid-liquid interface over a narrow band with a fixed thickness. The stated equations are discretized by the finite volume method, while the re-initialization equation is discretized by the central difference method. The Gauss linear upwind scheme is used to solve the level set distance function, and the Pressure-Implicit with Splitting of Operators (PISO) method is used to solve the momentum equation. The numerical result is compared with the 1-D analytical solution of the fluid-solid interface for reaction-diffusion problems. Sensitivity analysis is conducted with various Damköhler numbers (DaII) and Peclet numbers (Pe). We categorize the Fe (III) precipitation into three patterns as a function of DaII and Pe: symmetrical smoothed growth, unsymmetrical growth, and dendritic growth. Pe and DaII significantly affect the location of precipitation, which is critical in determining the injection parameters of hydraulic fracturing. When DaII<1, the precipitation occurs uniformly on the solid surface in both the upstream and downstream directions. When DaII>1, the precipitation mainly occurs on the solid surface in the upstream direction. When Pe>1, Fe (II) is transported deep into the pores and precipitates inside them. When Pe<1, the precipitation of Fe (III) occurs mainly on the solid surface in the upstream direction and readily forms inside small pore structures. The porosity–permeability relationship is subsequently presented. This pore-scale model allows high confidence in the description of Fe (II) dissolution, transport, and Fe (III) precipitation. The model shows fast convergence and requires a low computational load. The results can provide reliable guidance for injecting HFF in shale reservoirs to avoid clogging and wellbore pollution. Understanding Fe (III) precipitation and Fe (II) release and transport behaviors gives rise to a highly efficient hydraulic fracturing project.
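
The level-set smoothing step described above can be illustrated with a short sketch. The snippet below shows a standard smoothed Heaviside function of the kind commonly used to blend solid and fluid properties over a narrow band around the zero level set; the band half-width and the sample signed-distance field are assumptions for illustration, not values from the study.

```python
import numpy as np

def smoothed_heaviside(phi, eps):
    """Standard level-set smoothing of the solid-liquid indicator.

    phi : signed distance to the interface (negative in solid, positive in fluid)
    eps : half-width of the smoothing band (typically a few grid cells)
    """
    return np.where(phi < -eps, 0.0,
           np.where(phi > eps, 1.0,
                    0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)))

# Example: blend solid and fluid behaviour across the interface band
phi = np.linspace(-3e-6, 3e-6, 61)        # signed distance field, m (assumed)
eps = 1e-6                                # band half-width, m (assumed)
fluid_fraction = smoothed_heaviside(phi, eps)
print(fluid_fraction[:5], fluid_fraction[-5:])
```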

Keywords: reactive transport, shale, kerogen, precipitation

Procedia PDF Downloads 144
99 Hexahydropyrimidine-2,4-Diones: Synthesis and Cytotoxic Activity

Authors: M. Koksal, T. Ozyazici, E. Gurdal, M. Yarım, E. Demirpolat, M. B. Y. Aycan

Abstract:

The discovery of new drugs for cancer chemotherapy is still a major topic because of the severe side effects, selectivity problems, and resistance development potential of existing drugs. In recent years, combined anticancer therapies or multi-acting drugs have been clinically preferred over traditional cytotoxic treatment, with the aim of avoiding resistance and toxic side effects. Multi-target action can be achieved either by combining several drugs with different mechanisms or by using a single chemical compound capable of regulating several targets of a multifactorial disease. In the literature, pyrimidine and piperazine derivatives have been incorporated into the structures of many compounds used as chemotherapeutic agents with wide clinical application. The aim of this study is to combine pyrimidine and piperazine core structures to research and develop novel piperazinylpyrimidine derivatives with selective cytotoxicity toward cancer cells. In this study, a group of novel 6-fluorophenyl-3-[2-(substitutedpiperazinyl)ethyl] hexahydropyrimidine-2,4-dione derivatives was designed to achieve the desired anticancer activity through their pyrimidine- and piperazine-based scaffolds. Target compounds were obtained by the reaction of appropriate piperazine derivatives with 6-(2/4-fluorophenyl)-3-(2-chloroethyl)hexahydropyrimidine-2,4-dione. The synthetic pathway to 6-(2/4-fluorophenyl)-3-(2-chloroethyl)hexahydropyrimidine-2,4-dione started with the Rodionov reaction using an aldehyde, malonic acid, and ammonium acetate in ethanol. The isolated β-fluorophenyl-β-amino acids were treated with 2-chloroethylisocyanate in the presence of an aqueous sodium hydroxide solution at room temperature to yield the sodium salts of the corresponding ureido acids. The ureido acids were precipitated by the addition of a mineral acid. These ureido acids were then refluxed in thionyl chloride to give the 6-(2/4-fluorophenyl)-3-(2-chloroethyl)hexahydropyrimidine-2,4-diones, which were further treated with secondary amines. The structures of the purified compounds were characterized by IR, 1H-NMR, 13C-NMR, mass spectrometry, and elemental analysis. All of the compounds gave satisfactory analytical and spectroscopic data, which were in full accordance with their depicted structures. In the IR spectra of the compounds, the N-H band was seen at 3230-3213 cm⁻¹, C-H at 3100-2820 cm⁻¹, and C=O vibrational peaks were observed at approximately 1725 and 1665 cm⁻¹, in accordance with the literature. In the NMR spectra of the target compounds, the methylene protons of piperazine gave two separate multiplet peaks around 3.5 and 4.5 ppm, representing the successful N-alkylation of the structure. The cytotoxic activity of the synthesized compounds was investigated on human bronchial epithelial (BEAS 2B), lung (A549), colon adenocarcinoma (COLO205) and breast (MCF7) cell lines by means of sulphorhodamine B (SRB) assays in triplicate. IC₅₀ values of the screened derivatives were found to be in the range of 11.8-78 µM. This project was supported by The Scientific and Technological Research Council of Turkey (TUBITAK, Project no: 215S157).

Keywords: cytotoxicity, hexahydropyrimidine, piperazine, sulphorhodamine B assay

Procedia PDF Downloads 130
98 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab

Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco

Abstract:

Pathogenic bacteria represent a threat to healthcare systems and the food industry because their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools for their inherent high selectivity towards bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change of the charge transfer resistance (Rct) as a result of the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10¹ to 10⁴ CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor’s ability to recognize various strains of staphylococci was also successfully demonstrated in the presence of clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species-specificity of the phage sensor. We observed a remarkable change of the Rct only in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor targets only the S. aureus species within the genus Staphylococcus. In addition, the biosensor's specificity with respect to other bacterial species, including gram-positive bacteria like Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and a non-significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus bacterial cells in a complex matrix such as a nasal swab, opening the possibility of its use in a real-case scenario. We diluted different concentrations of S. aureus from 10⁸ to 10⁰ CFU/mL at a ratio of 1:10 in nasal swab matrices collected from healthy donors. Three different sensors were applied to measure the various concentrations of bacteria. Our sensor showed high selectivity for detecting S. aureus in biological matrices compared to time-consuming traditional methods such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of using this biosensor to address the challenges associated with pathogen detection, ongoing research is focused on the assessment of the biosensor’s analytical performance in different biological samples and the discovery of new phage bioreceptors.
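
As a rough illustration of how a linear range and a 3.3·σ/slope detection limit of the kind reported above could be estimated from ΔRct readings, consider the sketch below; the calibration points and blank noise are placeholders, not the authors' data.

```python
import numpy as np

# Hypothetical calibration data: change in charge-transfer resistance (ohm)
# versus log10 of bacterial concentration (CFU/mL). Values are placeholders.
log_conc = np.array([1.0, 2.0, 3.0, 4.0])
delta_rct = np.array([120.0, 245.0, 360.0, 490.0])

slope, intercept = np.polyfit(log_conc, delta_rct, 1)   # linear calibration

# A common LOD estimate: 3.3 * (standard deviation of blank response) / slope
sd_blank = 8.0                                          # placeholder blank noise, ohm
lod_log = 3.3 * sd_blank / slope
print(f"sensitivity = {slope:.1f} ohm/decade, LOD ~ 10^{lod_log:.2f} CFU/mL")
```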

Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus

Procedia PDF Downloads 30
97 Yu Kwang-Chung vs. Yu Kwang-Chung: Untranslatability as the Touchstone of a Poet

Authors: Min-Hua Wu

Abstract:

The untranslatability of an established poet’s tour de force is thoroughly explored by Matthew Arnold (1822-1888). In his On Translating Homer (1861), Arnold lists the four most striking poetic qualities of Homer, namely his rapidity, plainness and directness of style and diction, plainness and directness of ideas, and nobleness. He concludes that such celebrated English translators as Cowper, Pope, Chapman, and Mr. Newman are all doomed, due to their respective failure in rendering the totality of the four Homeric poetic qualities. Why does poetic translation always prove to be such a mission impossible for the translator? According to Arnold, it is because there constantly exists a mist interposed between the translator’s own literary self-obsession and the objective artistic qualities that reside in the work of the original author. Foregrounding such a seemingly empowering yet actually detrimental poetic mist, he explains why the aforementioned translators fail in their attempts to bring the Homeric charm to the British reader. Drawing on Arnold’s analytical study on Homeric translation, the research attempts to bring Yu Kwang-chung the poet vis-à-vis Yu Kwang-chung the translator, with an aim not so much to find any similar mist as revealed by Arnold between his Chinese poetry and English translation as to probe into a latent and veiled literary and lingual mist interposed between Chinese and English, if not between Chinese and English literatures. The major work studied and analyzed for this study is Yu’s own Chinese poetry and his own English translation collected in The Night Watchman: Yu Kwang-chung 1958-2004. The research argues that the following critical elements that characterize Yu’s poetics are to a certain extent 'transformed,' if not 'lost,' in his English translation: a. the Chinese pictographic and ideographic unit terms which so unfailingly characterize the poet’s incredible creativity, allowing him to habitually and conveniently coin concrete textual images or word-scapes almost at his own will; b. the subtle wordplay and punning which appear at a reasonable frequency; c. the parallel contrastive repetitive syntactic structure within a single poetic line; d. the ambiguous and highly associative diction in the adjective and noun categories; e. the literary allusion that harks back to the old times of Chinese literature; f. the alliteration that adds rhythm and smoothness to the lines; g. the rhyming patterns that bring impressive sonority and a lingering echo to the ears of the reader; h. the grandeur-imposing and sublimity-arousing word-scaping which hinges on the employment of verbs; i. the meandering cultural heritage that embraces such elements as Chinese medicine and kung fu; and j. other features of the like. Once we appeal to the Arnoldian tribunal and resort to the strict standards of such a Victorian cultural and literary critic who insists 'to see the object as in itself it really is,' we may serve as a potential judge for the tug of war between Yu Kwang-chung the poet and Yu Kwang-chung the translator, a tug of war that will not merely broaden our understanding of Chinese poetics but also deepen our apprehension of Chinese-English translatology.

Keywords: Yu Kwang-chung, The Night Watchman, poetry translation, Chinese-English translation, translation studies, Matthew Arnold

Procedia PDF Downloads 364
96 Seismic Analysis of Vertical Expansion Hybrid Structure by Response Spectrum Method Concern with Disaster Management and Solving the Problems of Urbanization

Authors: Gautam, Gurcharan Singh, Mandeep Kaur, Yogesh Aggarwal, Sanjeev Naval

Abstract:

The present ground reality of human suffering shows the consequences of wrong decisions taken to shape civilization and of irresponsibility throughout history. A strong positive will and the right sense of responsibility create the right civilizational structure, which affects both itself and the whole world. The present suffering of humanity reflects the failure of past decisions taken to shape a true culture with the right social structure; the unplanned system of Indian civilization and its rapid population growth make it difficult to face all kinds of problems, leaving society to suffer. India still suffers from disasters such as earthquakes, floods, droughts, and tsunamis, and countless disaster-related deaths have occurred from the beginning of humanity up to the present time. In this research paper, our focus is on a disaster-resistant structure that addresses densely populated urban areas through a high vertical-expansion hybrid structure. Our effort is to analyse a reinforced concrete hybrid structure in different seismic zones; these concrete frames were analysed using the response spectrum method to calculate and compare the seismic displacements and drifts. Seismic analysis by this method is generally based on the dynamic analysis of the building. The analysis results show that the reinforced concrete building in seismic Zone V has the maximum peak storey shear, base shear, drift, and node displacement compared with the analytical results for the reinforced concrete building in seismic Zones III and IV. These results indicate the need to follow structural drawings strictly at the construction site to realize a hybrid structure. The case study deals with a 10-storey vertical-expansion hybrid frame structure in different zones, i.e. Zone III, Zone IV and Zone V, having columns of 0.45 m x 0.36 m and beams of 0.6 m x 0.36 m, with a total height of 30 m; to make the structure more stable, bracing techniques such as mega bracing and V-shaped bracing shall be applied. If such structural drawings had been followed by builders and contractors, lives could have been saved during the earthquake disaster at Bhuj (Gujarat State, India) on 26th January 2001, which resulted in more than 19,000 deaths. This kind of disaster-resistant structure has the capability to solve the problems of densely populated city areas through the utilization of area in a vertical-expansion hybrid structure. We request the Government of India to make and implement new plans to save lives from future disasters instead of pursuing unnecessary development plans such as bullet trains.
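
For readers unfamiliar with how zone-dependent seismic demand enters such comparisons, the sketch below computes an equivalent-static base shear in the spirit of IS 1893 (Ah = Z·I·Sa / (2·R·g), Vb = Ah·W) for Zones III-V; the importance factor, response reduction factor, spectral ratio, and seismic weight are assumed values, and this is not the response spectrum analysis performed in the study.

```python
# Illustrative equivalent-static base shear in the spirit of IS 1893 (Part 1):
#   A_h = (Z / 2) * (I / R) * (Sa / g),   V_b = A_h * W
# The zone factors below follow IS 1893; I, R, Sa/g and W are assumed.
zone_factor = {"III": 0.16, "IV": 0.24, "V": 0.36}

I_fac, R_fac, sa_g = 1.0, 5.0, 2.5   # importance, response reduction, spectral ratio (assumed)
W = 12_000.0                         # seismic weight of the 10-storey frame, kN (assumed)

for zone, Z in zone_factor.items():
    Ah = (Z / 2.0) * (I_fac / R_fac) * sa_g   # design horizontal seismic coefficient
    Vb = Ah * W                               # design base shear
    print(f"Zone {zone}: Ah = {Ah:.3f}, base shear Vb = {Vb:.0f} kN")
```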

Keywords: history, irresponsibilities, unplanned social structure, humanity, hybrid structure, response spectrum analysis, drift, node displacement

Procedia PDF Downloads 177
95 The Role of Metaheuristic Approaches in Engineering Problems

Authors: Ferzat Anka

Abstract:

Many types of problems can be solved using traditional analytical methods. However, these methods take a long time and lead to inefficient use of resources. In particular, different approaches may be required for solving the complex and global engineering problems that we frequently encounter in real life. The bigger and more complex a problem, the harder it is to solve. Such problems are called non-deterministic polynomial-time hard (NP-hard) in the literature. The main reasons for recommending metaheuristic algorithms for such problems are their use of simple concepts, simple mathematical equations and structures, non-derivative mechanisms, avoidance of local optima, and fast convergence. They are also flexible, as they can be applied to different problems without very specific modifications. Thanks to these features, they can be easily embedded even in many hardware devices. Accordingly, this approach can also be used in trending application areas such as IoT, big data, and parallel structures. Indeed, metaheuristic approaches are algorithms that return near-optimal results for large-scale optimization problems. This study is focused on a new metaheuristic method that has been merged with a chaotic approach. It is based on chaos theory and helps the underlying algorithm improve population diversity and convergence speed. The approach builds on the Chimp Optimization Algorithm (ChOA), a recently introduced nature-inspired metaheuristic. ChOA identifies four types of chimpanzee group, attacker, barrier, chaser, and driver, and proposes a suitable mathematical model for them based on the various intelligence and sexual motivations of chimpanzees. However, this algorithm is not very successful in terms of convergence rate and escaping local optimum traps when solving high-dimensional problems. Although ChOA and some of its variants use strategies to overcome these problems, they are observed to be insufficient. Therefore, in this study, a newly expanded variant is described. In the algorithm called Ex-ChOA, hybrid models are proposed for the position updates of search agents, and a dynamic switching mechanism is provided for the transition phases. This flexible structure solves the slow convergence problem of ChOA and improves its accuracy on multidimensional problems, aiming at success in solving global, complex, and constrained problems. The main contributions of this study are: 1) it improves the accuracy and solves the slow convergence problem of ChOA; 2) it proposes new hybrid movement strategy models for the position updates of search agents; 3) it provides success in solving global, complex, and constrained problems; 4) it provides a dynamic switching mechanism between phases. The performance of the Ex-ChOA algorithm is analyzed on a total of 8 benchmark functions as well as 2 classical constrained engineering problems. The proposed algorithm is compared with ChOA and several well-known variants (Weighted-ChOA, Enhanced-ChOA). In addition, an Improved Grey Wolf Optimizer (I-GWO) is chosen for comparison since its working model is similar. The obtained results show that the proposed algorithm performs better than or equivalently to the compared algorithms.
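
A minimal sketch of the chaotic ingredient mentioned above is shown below: initializing the search agents with a logistic map instead of uniform random numbers, which is one common way of injecting chaotic diversity into a population. The map choice, bounds, and parameters are assumptions for illustration and do not reproduce the Ex-ChOA update rules.

```python
import numpy as np

def chaotic_population(n_agents, dim, lower, upper, x0=0.7, r=4.0):
    """Initialize search agents with logistic-map chaos instead of uniform random numbers."""
    pop = np.empty((n_agents, dim))
    x = x0
    for i in range(n_agents):
        for j in range(dim):
            x = r * x * (1.0 - x)          # logistic map iterate in (0, 1)
            pop[i, j] = lower + x * (upper - lower)
    return pop

# Example: 30 agents in a 10-dimensional search space bounded by [-100, 100]
pop = chaotic_population(n_agents=30, dim=10, lower=-100.0, upper=100.0)
print(pop.shape)   # (30, 10)
```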

Keywords: optimization, metaheuristic, chimp optimization algorithm, engineering constrained problems

Procedia PDF Downloads 53
94 Solution Thermodynamics, Photophysical and Computational Studies of TACH2OX, a C-3 Symmetric 8-Hydroxyquinoline: Abiotic Siderophore Analogue of Enterobactin

Authors: B. K. Kanungo, Monika Thakur, Minati Baral

Abstract:

8-Hydroxyquinoline (8HQ) is experiencing a renaissance due to its utility as a building block in metallosupramolecular chemistry and the versatile use of its derivatives in various fields of analytical chemistry, materials science, and pharmaceutics. It forms stable complexes with a variety of metal ions. Assembly of more than one such unit to form a polydentate chelator enhances its coordinating ability and the related properties due to the chelate effect, resulting in high stability constants. Keeping the above in view, a nonadentate chelator, N-[3,5-bis(8-hydroxyquinoline-2-amido)cyclohexyl]-8-hydroxyquinoline-2-carboxamide (TACH2OX), containing a central cis,cis-1,3,5-triaminocyclohexane appended to three 8-hydroxyquinoline units at the 2-position through amide linkages, was developed, and its solution thermodynamics, photophysical and Density Functional Theory (DFT) studies were undertaken. The synthesis of TACH2OX was carried out by condensation of cis,cis-1,3,5-triaminocyclohexane (TACH) with 8‐hydroxyquinoline‐2‐carboxylic acid. The brown-colored solid has been fully characterized through melting point, infrared, nuclear magnetic resonance, electrospray ionization mass and electronic spectroscopy. In solution, TACH2OX forms protonated complexes below pH 3.4, which consecutively deprotonate to generate the trinegative ion as the pH rises. Nine protonation constants were obtained for the ligand, ranging between 2.26 and 7.28. The interaction of the chelator with two trivalent metal ions, Fe3+ and Al3+, was studied in aqueous solution at 298 K. The metal-ligand formation constants (ML) obtained by the potentiometric and spectrophotometric methods agree with each other. Protonated and hydrolyzed species were also detected in the system. The in-silico studies of the ligand as well as the complexes, including their protonated and deprotonated species, assessed by the density functional theory technique, gave an accurate correlation with each observed property, such as the protonation constants, stability constants, infrared, NMR, and electronic absorption and emission spectral bands. The nature of the electronic and emission spectral bands, in terms of number and type, was ascertained from a time-dependent density functional theory study and the natural transition orbitals (NTO). The global reactivity indices were used to compare the reactivity of the ligand and the complex molecules. The natural bonding orbital (NBO) analysis could successfully describe the structure and bonding of the metal-ligand complexes, specifying the percentage contribution of atomic orbitals in the creation of molecular orbitals. The high values obtained for the metal-ligand formation constants indicate that the newly synthesized chelator is a very powerful synthetic chelator. The minimum-energy molecular modeling structure of the ligand suggests that TACH2OX, in a tripodal fashion, firmly coordinates to the metal ion as a hexa-coordinated chelate displaying distorted octahedral geometry, binding through the three sets of N, O donor atoms present in each pendant arm of the central tris-cyclohexaneamine tripod.
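
As a sketch of how stepwise protonation constants of the kind reported above translate into a pH-dependent species distribution, the snippet below computes mole fractions of L, LH, LH2, ... from cumulative constants; the log K values used are placeholders, not the measured TACH2OX constants.

```python
import numpy as np

def species_fractions(logK_steps, pH):
    """Mole fractions of L, LH, LH2, ... from stepwise protonation constants.

    logK_steps : stepwise log K values (L + H = LH, LH + H = LH2, ...)
    Uses cumulative constants beta_n = K_1 * K_2 * ... * K_n.
    """
    h = 10.0 ** (-np.asarray(pH, dtype=float))
    log_beta = np.concatenate(([0.0], np.cumsum(logK_steps)))       # beta_0 = 1
    terms = np.array([10.0 ** lb * h ** n for n, lb in enumerate(log_beta)])
    return terms / terms.sum(axis=0)

# Placeholder stepwise constants (nine protonation steps, illustrative only)
logK = [7.3, 7.0, 6.5, 6.0, 5.2, 4.4, 3.5, 2.9, 2.3]
pH = np.linspace(2.0, 10.0, 81)
frac = species_fractions(logK, pH)   # shape (10, 81): fractions of L ... LH9 versus pH
print(frac[:, 0].round(3))           # distribution at pH 2
```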

Keywords: complexes, DFT, formation constant, TACH2OX

Procedia PDF Downloads 117
93 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tectonism-induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings, and other properties is a collateral episode. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard at a site, must come first, with a view to safeguarding people’s welfare, infrastructure, and other properties. The information or output results can be used as a tool that assists in minimizing earthquake risk and can also foster appropriate construction design and the formulation of building codes for a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, GIS and Remote Sensing potentials were utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common features or factors that were assessed and integrated within the GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ), culminating in the earthquake hazard zonation of our study sites. Liquefaction can eventuate in the aftermath of severe ground shaking where the site soil condition, geology, and geomorphology are susceptible. These site conditions, i.e. the wave propagation media, were assessed to identify the potential zones. The precept has been that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface. As it propagates, it passes through certain geological or geomorphological and specific soil features, and these features, according to their strength, stiffness, and moisture content, aggravate or attenuate the strength of wave propagation to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria Evaluation (MCE) with Saaty’s Analytical Hierarchy Process (AHP) was adopted for this study. This is a GIS technique that involves the integration of several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weightage and ranking assigned to each factor are normalized with the AHP technique. The spatial analysis tools, i.e., the raster calculator, reclassify, and overlay analysis in ArcGIS 10 software, were mainly employed in the study. The final outputs of the LPZ and earthquake hazard zones were reclassified into 'Very High', 'High', 'Moderate', 'Low' and 'Very Low' to indicate the levels of hazard within the study region.
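
The AHP weighting step can be sketched as follows: derive weights from the principal eigenvector of a pairwise comparison matrix and check Saaty's consistency ratio. The three-factor comparison matrix below is a hypothetical example, not the matrix used in the study.

```python
import numpy as np

# Hypothetical pairwise comparison of three thematic layers (Saaty 1-9 scale);
# rows/columns: [geology, geomorphology, PGA]
A = np.array([[1.0, 3.0, 0.5],
              [1/3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalized AHP weights of the layers

lam_max = eigvals.real[k]
n = A.shape[0]
CI = (lam_max - n) / (n - 1)              # consistency index
RI = 0.58                                 # Saaty's random index for n = 3
CR = CI / RI                              # judgments acceptable when CR < 0.1
print("weights:", np.round(w, 3), " CR =", round(CR, 3))
```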

Keywords: hazard micro-zonation, liquefaction, multi-criteria evaluation, tectonism

Procedia PDF Downloads 239
92 Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations

Authors: Madan Chandra Maurya, A. R. Dar

Abstract:

Among all natural calamities, earthquakes are the most devastating. Even if the losses due to all other calamities were added together, they would still be far less than the losses due to earthquakes. This means we must be ready to face such situations, which is only possible if we make our structures earthquake resistant. A review of structural damage to braced frame systems after several major earthquakes, including recent ones, has identified both anticipated and unanticipated damage. This damage has prompted many engineers and researchers around the world to consider new approaches to improve the behavior of braced frame systems. Extensive experimental studies over the last forty years on conventional buckling brace components and several braced frame specimens are briefly reviewed, highlighting that the number of studies on full-scale concentric braced frames is still limited; hence this study centres on the plastic behavior of steel braced frame systems. In this study, two different analytical approaches have been used to predict the behavior and strength of an un-braced frame. The first is referred to as incremental elasto-plastic analysis, a plastic approach. This method gives the complete load-deflection history of the structure until collapse. It is based on the plastic hinge concept for fully plastic cross sections in a structure under increasing proportional loading. The incremental elasto-plastic hinge-by-hinge method is used in this study because of its simplicity, to obtain the complete load-deformation history of a two-storey un-braced scaled model. Experiments were then conducted on the two-storey scaled building model with and without a bracing system to obtain the experimental load-deformation curve of the scaled model. The only way forward is to understand and analyze these techniques and adopt them in our structures; the study Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations deals with all of this. The study aims to improve the already practiced traditional systems and to check the behavior and usefulness of a new configuration with respect to the X-braced system as the reference model, i.e., how plastically it differs from the X-braced frame. Laboratory tests involved determining the plastic behavior of these models (with and without braces) in terms of load-deformation curves. Thus, the aim of this study is to improve the lateral displacement resistance capacity by using a new configuration of brace members arranged concentrically, different from the conventional concentric brace. Once the experimental and manual results (using the plastic approach) were compared, the results from both approaches were also compared with a nonlinear static (pushover) analysis using ETABS, i.e., how closely both of the previous results depict the behavior seen in the pushover curve, and up to what limit. Test results show that all three approaches behave in a similar manner up to the yield point and confirm the applicability of the elasto-plastic (hinge-by-hinge) analysis for determining the plastic behavior. Finally, the outcome of the three approaches shows that the newly chosen configuration behaves in between the plane frame (without braces, the reference frame) and the conventional X-braced frame.
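
As a worked illustration of the plastic hinge concept underlying the incremental analysis, the sketch below applies the kinematic (mechanism) method to a fixed-ended beam with a central point load, where hinges at both supports and at midspan give a collapse load of 8Mp/L; the section properties and span are assumed values unrelated to the tested frames.

```python
# Kinematic (mechanism) check for a fixed-ended beam with a central point load W.
# Hinges form at both supports and at midspan; equating external and internal work,
#   W * (theta * L / 2) = Mp * (theta + 2*theta + theta)   =>   Wc = 8 * Mp / L
fy = 250.0e3        # yield stress, kN/m^2 (assumed mild steel, 250 MPa)
Zp = 6.0e-4         # plastic section modulus, m^3 (assumed section)
L = 4.0             # span, m (assumed)

Mp = fy * Zp                  # plastic moment capacity, kN*m
Wc = 8.0 * Mp / L             # collapse load of the mechanism, kN
print(f"Mp = {Mp:.1f} kN*m, collapse load Wc = {Wc:.1f} kN")
```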

Keywords: elasto-plastic analysis, concentric steel braced frame, pushover analysis, ETABS

Procedia PDF Downloads 205
91 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis, and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices, and an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis, and regime dependence.
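
The PCA factor-extraction step can be sketched as below: centre a matrix of index returns, take the leading eigenvectors of the covariance matrix, and keep three uncorrelated factor series for the GARCHX stage. The returns here are simulated placeholders, and the HTSN-GARCHX estimation itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 8
returns = rng.standard_normal((T, N)) * 0.01     # simulated daily returns of 8 world indices

X = returns - returns.mean(axis=0)               # center the return matrix
cov = np.cov(X, rowvar=False)                    # historical cross-asset covariance
eigval, eigvec = np.linalg.eigh(cov)             # eigen-decomposition (ascending order)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

k = 3                                            # keep three exogenous factors
factors = X @ eigvec[:, :k]                      # uncorrelated factor time series
explained = eigval[:k].sum() / eigval.sum()
print(f"variance explained by {k} factors: {explained:.3f}")
```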

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 278
90 Development of a Systematic Design for Evaluating Force-on-Force Security Exercises at Nuclear Power Plants

Authors: Seungsik Yu, Minho Kang

Abstract:

As the threat of terrorism to nuclear facilities has been increasing globally since the attacks of September 11, we are striving to strengthen both the physical protection system and the emergency response system. Since 2015, Korea has implemented physical protection security exercises for nuclear facilities. The exercises should be carried out with full cooperation between the operator and the response forces. Performance testing of the physical protection system should include appropriate exercises, for example force-on-force exercises, to determine whether the response forces can provide an effective and timely response to prevent sabotage. Significant deficiencies and the actions taken should be reported as stipulated by the competent authority. The IAEA (International Atomic Energy Agency) is also preparing force-on-force exercise program documents to support the exercises of member states. Currently, the ROK (Republic of Korea) conducts exercises using a force-on-force exercise evaluation system developed in-house for nuclear power plants, and it is necessary to establish exercise procedures that account for the use of this evaluation system. The purpose of this study is to establish the work procedures of the three major organizations involved in the force-on-force exercises of nuclear power plants in the ROK, which conduct the exercises using the force-on-force exercise evaluation system. The three major organizations are the licensee, KINAC (Korea Institute of Nuclear Nonproliferation and Control), and the NSSC (Nuclear Safety and Security Commission). Their major activities are as follows. First, the licensee establishes and conducts an exercise plan and, when recommendations are derived from the results of the exercise, prepares and carries out a force-on-force result report including a plan for implementing the recommendations. Other detailed tasks include consultation with surrounding units for the adversary, interviews with exercise participants, support for document evaluation, and self-training to improve familiarity with the MILES (Multiple Integrated Laser Engagement System). Second, KINAC establishes a force-on-force exercise plan review report and reviews the force-on-force exercise plan report established by the licensee. KINAC evaluates the force-on-force exercise using the exercise evaluation system and prepares a training evaluation report. Other detailed tasks include MILES training, adversary consultation, management of the exercise evaluation system, and analysis of the exercise evaluation results. Finally, the NSSC decides whether or not to approve the force-on-force exercise and issues correction requests to the nuclear facility based on the exercise results. The most important part of the ROK's force-on-force exercise system is the analysis performed through the exercise evaluation system by KINAC after the exercise. The analytical method proceeds by collecting data from the exercise evaluation system and then analyzing the collected data. The exercise application process of the exercise evaluation system, introduced in the ROK in 2016, will be concretely set up, and a system will be established to provide objective and consistent conclusions between exercise sessions. Based on the conclusions drawn, the ultimate goal is to complement the licensee's physical protection system so that the licensee can respond effectively and in a timely manner to sabotage or the unauthorized removal of nuclear materials.

Keywords: Force-on-Force exercise, nuclear power plant, physical protection, sabotage, unauthorized removal

Procedia PDF Downloads 117
89 Compromising Quality of Life in Low-Income Settlements: The Case of Ashrayan Prakalpa, Khulna

Authors: Salma Akter, Md. Kamal Uddin

Abstract:

This study aims to demonstrate how a top-down shelter policy and its resultant dwelling environment lead to ‘everyday compromise’ by the grassroots, according to subjective (satisfaction) and objective (physical design elements and physical environmental elements) indicators, which are measured across three levels of the settlement: macro (community), meso (neighborhood or shelter/built environment) and micro (family). Ashrayan Prakalpa is a resettlement/housing project of the Government of Bangladesh for providing shelter and human resources development activities, such as education, microcredit, and training programmes, to landless, homeless and rootless people. Despite the integrated nature of the shelter policies (comprising poverty alleviation, employment opportunity, secured tenure, and livelihood training), the ‘quality of life’ at the different levels of the settlements becomes questionable. As the dwellers of the shelter units (formally termed ‘barracks’ rather than shelter or housing) remain on the receiving end of the government’s resettlement policies, they often engage in spatial-physical and socio-economic negotiation and assume curious forms of spatial practice, which often contradict policy planning. Thus, policy-based shelter forces dwellers to persistently compromise with their provided built environments, both overtly and covertly. Compromising with the prescribed designed spaces and facilities across living places articulates their negotiation with the quality of allocated space, built form and infrastructure, which in turn manifests as a lower quality of life. The top-down shelter project Dakshin Chandani Mahal Ashrayan Prakalpa at Dighalia Upazila, the study area located on the eastern fringe of Khulna, Bangladesh, is still in progress to resettle internally displaced and homeless people. In terms of methodology, this research is primarily exploratory and adopts a case study method, and an analytical framework for evaluating the quality of life is developed through a deductive approach. Secondary data have been obtained from housing policy analysis and a review of the relevant literature, while key informant interviews, focus group discussions, necessary drawings and photographs, and participant observation across the dwelling, neighborhood, and community levels have been administered as the primary data collection methodology. The findings reveal that various shortages, inadequacies, and the negligence of policymakers force residents to compromise with the allocated designed space, physical infrastructure, and economic opportunities at the dwelling, neighborhood and, mostly, community levels. Thus, the outcome of this study can be beneficial for a global-level understanding of how ‘quality of life’ is compromised under top-down shelter policy. Locally, for instance in the context of Bangladesh, it can help policymakers and the concerned authorities to formulate shelter policies and take initiatives to improve the well-being of the marginalized.

Keywords: Ashrayan Prakalpa, compromise, displaced people, quality of life

Procedia PDF Downloads 124
88 W-WING: Aeroelastic Demonstrator for Experimental Investigation into Whirl Flutter

Authors: Jiri Cecrdle

Abstract:

This paper describes the concept of the W-WING whirl flutter aeroelastic demonstrator. Whirl flutter is the specific case of flutter that accounts for the additional dynamic and aerodynamic influences of the engine rotating parts. The instability is driven by motion-induced unsteady aerodynamic propeller forces and moments acting in the propeller plane. Whirl flutter instability is a serious problem that may cause unstable vibration of a propeller mounting, leading to the failure of an engine installation or an entire wing. The complicated physical principle of whirl flutter requires experimental validation of the analytically gained results. The W-WING aeroelastic demonstrator has been designed and developed at the Czech Aerospace Research Centre (VZLU), Prague, Czechia. The demonstrator represents the wing and engine of a twin turboprop commuter aircraft. Contrary to most past demonstrators, it includes a powered motor and a thrusting propeller. It allows changes of the main structural parameters influencing the whirl flutter stability characteristics. The propeller blades are adjustable at standstill. The demonstrator is instrumented with strain gauges, accelerometers, a revolution-counting impulse sensor, an airflow velocity sensor, and a thrust measurement unit. Measurement is supported by an in-house program providing data storage and real-time depiction in the time domain as well as pre-processing into power spectral densities. The engine is linked with a servo-drive unit, which enables maintaining the propeller revolutions (constant or at a controlled ramp rate) and monitoring of the instantaneous revolutions and power. Furthermore, the program manages the aerodynamic excitation of the demonstrator by aileron flapping (constant, sweep, impulse). Finally, it provides a safety guard to prevent any structural failure of the demonstrator hardware. In addition, the LMS TestLab system is used for the measurement of the structural response and for data assessment by means of FFT- and OMA-based methods. The demonstrator is intended for experimental investigations in the VZLU 3 m-diameter low-speed wind tunnel. The measurement variant of the model is defined by the structural parameters: pitch and yaw attachment stiffness, pitch and yaw hinge stations, balance weight station, propeller type (duralumin or steel blades), and finally the angle of attack of the propeller blade 75% section. The excitation is provided either by airflow turbulence or by aerodynamic excitation through aileron flapping using a harmonic frequency sweep. The experimental results are planned to be utilized for the validation of analytical methods and software tools in the frame of the development of a new complex multi-blade twin-rotor propulsion system for a new-generation regional aircraft. Experimental campaigns will include measurements of aerodynamic derivatives and measurements of stability boundaries for various configurations of the demonstrator.
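
The PSD pre-processing mentioned above can be sketched with Welch's method; the sampling rate and the simulated accelerometer record below are assumptions, not demonstrator data.

```python
import numpy as np
from scipy.signal import welch

fs = 2048.0                                     # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Simulated accelerometer record: a 14 Hz structural mode plus broadband noise
acc = np.sin(2 * np.pi * 14.0 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)

f, Pxx = welch(acc, fs=fs, nperseg=4096)        # averaged power spectral density
peak_hz = f[np.argmax(Pxx)]
print(f"dominant response frequency ~ {peak_hz:.1f} Hz")
```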

Keywords: aeroelasticity, flutter, whirl flutter, W-WING demonstrator

Procedia PDF Downloads 56
87 Development and Validation of a Quantitative Measure of Engagement in the Analysing Aspect of Dialogical Inquiry

Authors: Marcus Goh Tian Xi, Alicia Chua Si Wen, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee

Abstract:

The Map of Dialogical Inquiry provides a conceptual look at the underlying nature of future-oriented skills. According to the Map, learning is learner-oriented, with conversational time shifted from teachers to learners, who play a strong role in deciding what and how they learn. For example, in courses operating on the principles of Dialogical Inquiry, learners were able to leave the classroom with a deeper understanding of the topic, broader exposure to differing perspectives, and stronger critical thinking capabilities, compared to traditional approaches to teaching. Despite its contributions to learning, the Map is grounded in a qualitative approach both in its development and in its application for providing feedback to learners and educators. Studies hinge on open-ended responses by Map users, which can be time-consuming and resource-intensive. The present research is motivated by this gap in practicality and aims to develop and validate a quantitative measure of the Map. In addition, a quantifiable measure may also strengthen applicability by making learning experiences trackable and comparable. The Map outlines eight learning aspects that learners should holistically engage. This research focuses on the Analysing aspect of learning. According to the Map, Analysing has four key components: liking or engaging in logic, using interpretative lenses, seeking patterns, and critiquing and deconstructing. Existing scales of constructs (e.g., critical thinking, rationality) related to these components were identified so that the current scale could adapt items from them. Specifically, items were phrased beginning with an “I”, followed by an action phrase, to fulfil the purpose of assessing learners' engagement with Analysing either in general or in classroom contexts. Paralleling standard scale development procedure, the 26-item Analysing scale was administered to 330 participants alongside existing scales with varying levels of association to Analysing, to establish construct validity. Subsequently, the scale was refined and its dimensionality, reliability, and validity were determined. Confirmatory factor analysis (CFA) revealed whether scale items loaded onto the four factors corresponding to the components of Analysing. To refine the scale, items were systematically removed via an iterative procedure, according to their factor loadings and the results of likelihood ratio tests at each step. Eight items were removed this way. The Analysing scale is better conceptualised as unidimensional, rather than comprising the four components identified by the Map, for three reasons: 1) the covariance matrix of the model specified for the CFA was not positive definite, 2) correlations among the four factors were high, and 3) exploratory factor analyses did not yield an easily interpretable factor structure of Analysing. Regarding validity, since the Analysing scale had higher correlations with conceptually similar scales than with conceptually distinct scales, with minor exceptions, construct validity was largely established. Overall, the satisfactory reliability and validity of the scale suggest that the current procedure can result in a valid and easy-to-use measure for each aspect of the Map.
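
The reliability check referred to above can be illustrated with a short Cronbach's alpha computation on an item-response matrix; the simulated responses and the retained item count are placeholders rather than the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
latent = rng.normal(size=(330, 1))                        # one underlying trait per respondent
responses = latent + 0.8 * rng.normal(size=(330, 18))     # 18 retained items (placeholder)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```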

Keywords: analytical thinking, dialogical inquiry, education, lifelong learning, pedagogy, scale development

Procedia PDF Downloads 67
86 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. Among them, GAs attract our attention because they do not require a considerable amount of data in advance in dealing with complex problems and can make a global solution search possible as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of Glass Fiber-reinforced Polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. The finite element model is used to study free vibrations of laminated composite plates for fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only first mode shapes in a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
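
A minimal sketch of the GA-based identification idea is given below: a population of candidate stiffness ratios evolves to minimize the mismatch between measured and predicted natural frequencies. The frequency model is a simple stand-in for the finite element eigen-solution used in the study, and all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
measured = np.array([42.1, 118.7, 230.4])            # "measured" frequencies, Hz (placeholder)

def predicted_freqs(stiffness_scale):
    # Stand-in for the FE eigen-solution: frequencies scale with sqrt(stiffness)
    base = np.array([45.0, 126.0, 245.0])
    return base * np.sqrt(stiffness_scale)

def fitness(theta):                                   # theta: remaining stiffness ratio in (0, 1]
    return -np.sum((predicted_freqs(theta) - measured) ** 2)

pop = rng.uniform(0.2, 1.0, size=40)                  # initial population of candidates
for _ in range(100):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)][-20:]           # selection: keep the fitter half
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.02, 20)  # mutation
    pop = np.clip(np.concatenate([parents, children]), 0.05, 1.0)

best = pop[np.argmax([fitness(t) for t in pop])]
print(f"identified stiffness ratio ~ {best:.3f}")
```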

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 239
85 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

Authors: Pedro Llanos, Diego García

Abstract:

This paper is addressed to expanding our understanding of the effects of hypoxia training on our bodies to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots that allow them to recognize their early hypoxia signs and symptoms, and Scientist Astronaut Candidates (SACs) who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science Technology Engineering Mathematics (STEM) backgrounds conducted a hypobaric training session to an altitude up to 22,000 ft (FL220) or 6,705 meters, where heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with the use of a chest strap sensor pre and post HH exposure. A pulse oximeter registered levels of saturation of oxygen (SpO2), number and duration of desaturations during the HH chamber flight. Hypoxia symptoms as described by the SACs during the HH training session were also registered. This data allowed to generate a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, which consists of a sixth-order polynomial fit during exposure, and a fifth or fourth-order polynomial fit during recovery. Data analysis showed that HR and BR showed no significant differences between pre and post HH exposure in most of the SACs, while Tc measures showed slight but consistent decrement changes. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds) and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real time collection of HH symptoms presented temperature somatosensory perceptions (SP) for 65% of individuals, and task-focus issues for 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery of HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure as expected and obtained in previous studies. Our data showed consistent results between predicted versus observed SpO2 curves during HH suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered providing evidenced SP and task focusing as the earliest and most common indicators. Finally, an assessment of HH signs of symptoms in a group of heterogeneous, non-pilot individuals showed similar results to previous studies in homogeneous populations of pilots.
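
The polynomial modelling of the SpO2 curves can be sketched as follows; the data below are synthetic placeholders, with a sixth-order fit for the exposure phase and a fourth-order fit for recovery as described above.

```python
import numpy as np

# Placeholder SpO2 record during the exposure phase (percent versus minutes)
t_exp = np.linspace(0, 12, 25)
spo2_exp = 98 - 1.1 * t_exp + 0.03 * t_exp**2 \
           + np.random.default_rng(3).normal(0, 0.4, t_exp.size)

coeff_exp = np.polyfit(t_exp, spo2_exp, deg=6)        # sixth-order fit during exposure
model_exp = np.poly1d(coeff_exp)

# Placeholder recovery record after donning the O2 mask
t_rec = np.linspace(0, 2, 15)
spo2_rec = 85 + 12 * (1 - np.exp(-3 * t_rec)) \
           + np.random.default_rng(4).normal(0, 0.4, t_rec.size)
coeff_rec = np.polyfit(t_rec, spo2_rec, deg=4)        # fourth-order fit during recovery

print("predicted SpO2 at t = 6 min of exposure:", round(model_exp(6.0), 1))
```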

Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin

Procedia PDF Downloads 97
84 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques

Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev

Abstract:

Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance indicators and the demand on the medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes and lithotripters), and physicians characterized the resource provision of the medical institutions for the developed models. The data source for the research was the open database of the statistical service Eurostat. The choice of the source is due to the fact that the database contains complete and open information necessary for research tasks in the field of public health. In addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study provides information on 28 European countries for the period from 2007 to 2016. For all countries included in the study with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and interpretability of the models was made through cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries, and to construct separate regression models for them. Therefore, the original time series were used as the objects of clustering. The k-medoids algorithm was used, and the sampled objects served as the centers of the obtained clusters, since determining a centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values obtained from the developed models have a relatively low level of error and can be used to make decisions on the provision of hospitals with medical personnel. The research displays strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential, which allows health services to be significantly improved. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
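
The cluster-count selection and forecast-error metric mentioned above can be sketched as follows; KMeans is used here as a stand-in for k-medoids (which requires a third-party package), and the country time series are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)
series = rng.normal(size=(28, 10))   # 28 countries x 10 yearly observations (placeholder)

# Pick the number of clusters by maximizing the silhouette coefficient
best_k, best_s = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(series)
    s = silhouette_score(series, labels)
    if s > best_s:
        best_k, best_s = k, s
print(f"chosen k = {best_k}, silhouette = {best_s:.2f}")

def mape(actual, forecast):
    """Mean absolute percentage error of a forecast, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))
```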

Keywords: data analysis, demand modeling, healthcare, medical facilities

Procedia PDF Downloads 114
83 Spin Rate Decaying Law of Projectile with Hemispherical Head in Exterior Trajectory

Authors: Quan Wen, Tianxiao Chang, Shaolu Shi, Yushi Wang, Guangyu Wang

Abstract:

As part of the working environment of the fuze, the spin rate decay law of a projectile in exterior trajectory is of great value in the design of rotation-count fixed-distance fuzes. It is also significant for devices used in simulation tests of the fuze exterior ballistic environment, for the flight stability and dispersion accuracy of gun projectiles, and for the opening and scattering design of submunitions and illuminating cartridges. In addition, the self-destruction mechanism of the fuze in small-caliber projectiles often works by utilizing the attenuation of centrifugal force. In the theory of projectile aerodynamics and fuze design, there are many formulas describing the change of projectile angular velocity in exterior ballistics, such as the Roggla formula, the exponential function formula, and the power function formula. However, these formulas are mostly semi-empirical owing to the poor test conditions and insufficient test data available at the time, and they can hardly meet the design requirements of modern fuzes because they are not accurate enough and have a narrow range of application. In order to provide more accurate ballistic environment parameters for the design of a fuze for a hemispherical-head projectile, the projectile's spin rate decay law in exterior trajectory under the effect of air resistance was studied. In the analysis, the projectile shape was simplified into a hemispherical head, a cylindrical part, a rotating band, and an anti-truncated conical tail. The main assumptions are as follows: a) the shape and mass are symmetrical about the longitudinal axis; b) there is a smooth transition between the hemispherical head and the cylindrical part; c) the air flow on the outer surface is treated as flat-plate flow over an area equal to the expanded outer surface of the projectile, with a turbulent boundary layer; d) the polar damping moment attributed to the wrench hole and rifling marks on the projectile is not considered; e) the rifling grooves on the rotating band are uniform, smooth, and regular. The contributions of the four parts to the aerodynamic moment acting on the projectile rotation were obtained from aerodynamic theory. The surface friction stress of the projectile, the polar damping moment formed by the head, and the surface friction moments formed by the cylindrical part, the rotating band, and the anti-truncated conical tail were obtained by mathematical derivation. On this basis, the mathematical model of spin rate attenuation was established. Over the whole trajectory at the maximum range angle (38°), the absolute error between the polar damping torque coefficient obtained by simulation and that calculated with the mathematical model established in this paper is not more than 7%, which verifies the credibility of the model. The mathematical model can be described as a first-order nonlinear differential equation that has no analytical solution; the solution can only be obtained numerically by coupling the model with the projectile mass motion equations of exterior ballistics.
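The paper's own damping-moment expressions are not reproduced here; as a hedged illustration of how such a first-order nonlinear spin-decay equation can be integrated numerically, the sketch below assumes a generic damping law of the form dp/dt = -K·p^n/Ix with made-up coefficients.

```python
# Schematic example (not the authors' model): numerically integrating a
# first-order nonlinear spin-decay equation of the generic form
#   dp/dt = -K * p**n / Ix,
# where p is the spin rate, Ix the polar moment of inertia and K lumps the
# friction/damping moments. All coefficients below are placeholders chosen
# only to make the script run.
import numpy as np
from scipy.integrate import solve_ivp

Ix = 1.5e-4          # polar moment of inertia, kg*m^2 (hypothetical)
K = 2.0e-9           # lumped damping coefficient (hypothetical)
n = 1.8              # nonlinearity exponent (hypothetical)

def spin_decay(t, p):
    return -K * p**n / Ix

p0 = 2.0 * np.pi * 300.0          # initial spin rate, rad/s (300 rev/s)
sol = solve_ivp(spin_decay, t_span=(0.0, 30.0), y0=[p0], max_step=0.05)

print(f"Spin rate after 30 s of flight: {sol.y[0, -1] / (2 * np.pi):.1f} rev/s")
```

In the actual study, this equation would be integrated together with the exterior-ballistics mass motion equations, since the damping moments depend on the projectile velocity along the trajectory.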

Keywords: ammunition engineering, fuze technology, spin rate, numerical simulation

Procedia PDF Downloads 111
82 Thulium Laser Design and Experimental Verification for NIR and MIR Nonlinear Applications in Specialty Optical Fibers

Authors: Matej Komanec, Tomas Nemecek, Dmytro Suslov, Petr Chvojka, Stanislav Zvanovec

Abstract:

Nonlinear phenomena in the near- and mid-infrared region are attracting scientific attention mainly due to the possibilities of supercontinuum generation and its subsequent use in ultra-wideband applications such as absorption spectroscopy or optical coherence tomography. Thulium-based fiber lasers provide access to high-power ultrashort pump pulses in the vicinity of 2000 nm, which can be easily exploited for various nonlinear applications. The paper presents a simulation and experimental study of a pulsed thulium laser for near-infrared (NIR) and mid-infrared (MIR) nonlinear applications in specialty optical fibers. In the first part of the paper, the thulium laser is discussed. The thulium laser is based on a gain-switched seed laser and a series of amplification stages for obtaining output peak powers on the order of kilowatts for pulses shorter than 200 ps at full-width at half-maximum. The pulsed thulium laser is first studied in simulation software, focusing on seed-laser properties. Afterward, a thulium-based pre-amplification stage is discussed, with a focus on low-noise signal amplification, high signal gain, and the elimination of pulse distortions during pulse propagation in the gain medium. Following the pre-amplification stage, a second gain stage incorporating a shorter thulium fiber with an increased rare-earth dopant ratio is evaluated. Finally, a power-booster stage is analyzed, in which peak powers of kilowatts should be achieved. The results of the analytical study are further validated by the experimental campaign. The simulation model is further corrected based on real components: parameters such as actual insertion losses, cross-talk, polarization dependencies, etc. are included. The second part of the paper evaluates the utilization of nonlinear phenomena and their specific features in the vicinity of 2000 nm, compared to e.g. 1550 nm, and presents supercontinuum modelling based on the pulsed output of the thulium laser. The supercontinuum generation simulation provides reasonably accurate results once the fiber dispersion profile is precisely defined and the fiber nonlinearity is known; furthermore, the input pulse shape and peak power must be known, which is assured thanks to the experimental measurement of the studied pulsed thulium laser. The supercontinuum simulation model is put in relation to designed and characterized specialty optical fibers, which are discussed in the third part of the paper. The focus is placed on silica and mainly on non-silica fibers (fluoride, chalcogenide, lead-silicate) in their conventional, microstructured or tapered variants. Parameters such as the dispersion profile and nonlinearity of the exploited fibers were characterized either with an accurate model developed in COMSOL software or by direct experimental measurement to achieve even higher precision. The paper then combines all three studied topics and presents a possible application of such a pulsed thulium laser system working with specialty optical fibers.
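Supercontinuum models of this kind are commonly built on split-step integration of a generalized nonlinear Schrödinger equation. The sketch below is a strongly reduced stand-in, keeping only second-order dispersion and Kerr nonlinearity (no Raman or self-steepening terms), with hypothetical fiber and pulse parameters rather than those of the fibers studied in the paper.

```python
# Highly simplified split-step Fourier sketch of pulse propagation with
# second-order dispersion and Kerr nonlinearity only. Parameters are
# illustrative, not those of the thulium laser or fibers in the paper.
import numpy as np

N = 2**12                      # number of temporal grid points
T_window = 20e-12              # time window, s
dt = T_window / N
t = (np.arange(N) - N // 2) * dt
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)

beta2 = -20e-27                # group-velocity dispersion, s^2/m (hypothetical)
gamma = 2e-3                   # nonlinear coefficient, 1/(W*m) (hypothetical)
P0, T0 = 1e3, 150e-12 / 1.763  # ~1 kW peak, ~150 ps FWHM sech pulse
A = np.sqrt(P0) / np.cosh(t / T0)

L, steps = 10.0, 2000          # fiber length (m) and number of split steps
dz = L / steps
linear_half = np.exp(0.5j * beta2 / 2 * omega**2 * dz)   # half-step dispersion operator

for _ in range(steps):
    A = np.fft.ifft(linear_half * np.fft.fft(A))          # dispersion (half step)
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)         # Kerr nonlinearity (full step)
    A = np.fft.ifft(linear_half * np.fft.fft(A))           # dispersion (half step)

spectrum = np.abs(np.fft.fftshift(np.fft.fft(A)))**2       # broadened output spectrum
print(f"Output spectral peak (a.u.): {spectrum.max():.3e}")
```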

Keywords: nonlinear phenomena, specialty optical fibers, supercontinuum generation, thulium laser

Procedia PDF Downloads 294
81 Development of a Bead Based Fully Automated Mutiplex Tool to Simultaneously Diagnose FIV, FeLV and FIP/FCoV

Authors: Andreas Latz, Daniela Heinz, Fatima Hashemi, Melek Baygül

Abstract:

Introduction: Feline leukemia virus (FeLV), feline immunodeficiency virus (FIV), and feline coronavirus (FCoV) cause serious infectious diseases affecting cats worldwide. Transmission of these viruses occurs primarily through close contact with infected cats (via saliva, nasal secretions, faeces, etc.). FeLV, FIV, and FCoV infections can occur in combination and are expressed in similar clinical symptoms. Diagnosis can therefore be challenging: symptoms are variable and often non-specific. Sick cats show very similar clinical signs: apathy, anorexia, fever, immunodeficiency syndrome, anemia, etc. The sample volume that can be collected from small companion animals for diagnostic purposes is also limited. In addition, multiplex diagnosis of diseases can contribute to an easier, cheaper, and faster workflow in the lab as well as to better differential diagnosis. For this reason, we wanted to develop a new diagnostic tool that utilizes less sample volume, fewer reagents, and fewer consumables than multiple singleplex ELISA assays. Methods: The Multiplier from Dynex Technologies (USA) was used as the platform to develop a multiplex diagnostic tool for the detection of antibodies against FIV and FCoV/FIP and antigens for FeLV. The Dynex® Multiplier® is a fully automated chemiluminescence immunoassay analyzer that significantly simplifies laboratory workflow. The Multiplier®'s ease of use reduces pre-analytical steps by combining the power of efficiently multiplexing multiple assays with the simplicity of automated microplate processing. Plastic beads were coated with antigens for FIV and FCoV/FIP, as well as antibodies for FeLV. Feline blood samples are incubated with the beads, and read-out of results is performed via chemiluminescence. Results: Bead coating was optimized for each individual antigen or capture antibody and then combined in the multiplex diagnostic tool. HRP-antibody conjugates for FIV and FCoV antibodies, as well as detection antibodies for the FeLV antigen, were adjusted and mixed. Three individual prototype batches of the assay were produced. For each disease, we analyzed 50 well-defined positive and negative samples. The results show excellent diagnostic performance of the simultaneous detection of antibodies or antigens against these feline diseases in a fully automated system. A 100% concordance with singleplex methods like ELISA or IFA was observed. Intra- and inter-assay tests showed a high precision of the test, with CV values below 10% for each individual bead. Accelerated stability testing indicates a shelf life of at least 1 year. Conclusion: The new tool can be used for multiplex diagnostics of the most important feline infectious diseases. Only a very small sample volume is required. Full automation results in a very convenient and fast method for diagnosing animal diseases. With its large specimen capacity to process over 576 samples per 8-hour shift and provide up to 3,456 results, very high laboratory productivity and reagent savings can be achieved.

Keywords: Multiplex, FIV, FeLV, FCoV, FIP

Procedia PDF Downloads 75
80 Knowledge and Attitude Towards Strabismus Among Adult Residents in Woreta Town, Northwest Ethiopia: A Community-Based Study

Authors: Henok Biruk Alemayehu, Kalkidan Berhane Tsegaye, Fozia Seid Ali, Nebiyat Feleke Adimassu, Getasew Alemu Mersha

Abstract:

Background: Strabismus is a visual disorder in which the eyes are misaligned and point in different directions. Untreated strabismus can lead to amblyopia, loss of binocular vision, and social stigma due to its appearance. Since knowledge is assumed to be pertinent for early screening and prevention of strabismus, the main objective of this study was to assess knowledge and attitudes toward strabismus in Woreta town, Northwest Ethiopia. Providing data in this area is important for planning health policies. Methods: A community-based cross-sectional study was done in Woreta town from April to May 2020. The sample size was determined using a single population proportion formula, taking a 50% proportion of good knowledge, a 95% confidence level, a 5% margin of error, and a 10% non-response rate. Accordingly, the final computed sample size was 424. All four kebeles were included in the study. There were 42,595 people in total, with 39,684 adults and 9,229 households. A sampling fraction 'k' was obtained by dividing the number of households by the calculated sample size of 424. Systematic random sampling with proportional allocation was used to select the participating households, with a sampling interval (k) of 21, i.e., every 21st household was approached and included in the study. One individual was selected randomly from each household with more than one adult, using the lottery method, to obtain the final sample size. The data were collected through face-to-face interviews with a pretested, semi-structured questionnaire, which was translated from English to Amharic and back to English to maintain consistency. Data were entered using Epi Data version 3.1, then processed and analyzed via SPSS version 20. Descriptive and analytical statistics were employed to summarize the data. A p-value of less than 0.05 was used to declare statistical significance. Result: A total of 401 individuals aged over 18 years participated, with a response rate of 94.5%. Of those who responded, 56.6% were male. Of all the participants, 36.9% were illiterate. The proportion of people with poor knowledge of strabismus was 45.1%. It was shown that 53.9% of the respondents had a favorable attitude. Older age, higher educational level, having a history of eye examination, and having a family history of strabismus were significantly associated with good knowledge of strabismus. A higher educational level, older age, and having heard about strabismus were significantly associated with a favorable attitude toward strabismus. Conclusion and recommendation: The proportions of good knowledge and favorable attitude towards strabismus were lower than previously reported in Gondar City, Northwest Ethiopia. There is a need to provide health education and promotion campaigns on strabismus to the community: what strabismus is, its possible treatments, and the need to bring children to the eye care center for early diagnosis and treatment. The study advocates for prospective research endeavors to employ qualitative study designs and additionally suggests exploring studies that investigate cause-effect relationships.
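As a check of the arithmetic described above, a minimal sketch of the sample-size and sampling-interval calculation is given below (standard single population proportion formula; rounding up at each step is an assumption made here to reproduce the reported 424).

```python
# Worked example of the single population proportion sample-size calculation
# (p = 0.5, 95% confidence, 5% margin of error, 10% non-response), followed by
# the systematic-sampling interval for 9,229 households.
import math

z = 1.96          # z-score for a 95% confidence level
p = 0.5           # assumed proportion of good knowledge
d = 0.05          # margin of error

n0 = (z**2) * p * (1 - p) / d**2          # = 384.16
n = math.ceil(math.ceil(n0) * 1.10)       # 385 * 1.10 = 423.5 -> 424 with rounding up

households = 9229
k = households // n                        # systematic sampling interval -> 21

print(f"Base sample size: {n0:.2f}, final sample size: {n}, interval k = {k}")
```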

Keywords: strabismus, knowledge, attitude, Woreta

Procedia PDF Downloads 34
79 Financing the Welfare State in the United States: The Recent American Economic and Ideological Challenges

Authors: Rafat Fazeli, Reza Fazeli

Abstract:

This paper focuses on the study of the welfare state and social wage in the leading liberal economy of the United States. The welfare state acquired broad acceptance as a major socioeconomic achievement of liberal democracy in the Western industrialized countries during the postwar boom period. The modern and modified vision of capitalist democracy offered, on the one hand, the possibility of a high growth rate and, on the other hand, the possibility of the continued progression of a comprehensive system of social support for a wider population. The economic crises of the 1970s provided the ground for a great shift in economic policy and ideology in several Western countries, most notably the United States and the United Kingdom (and to a lesser extent Canada under Prime Minister Brian Mulroney). In the 1980s, the free-market-oriented reforms undertaken under Reagan and Thatcher greatly affected the economic outlook not only of the United States and the United Kingdom, but of the whole Western world. The movement behind this shift in policy is often called neo-conservatism. The neoconservatives blamed the transfer programs for the decline in economic performance during the 1970s and argued that cuts in spending were required to go back to the golden age of full employment. The agenda of both the Reagan and Thatcher administrations was rolling back the welfare state, and their budgets included a wide range of cuts to social programs. The question is how successful Reagan's and Thatcher's efforts to achieve retrenchment were. The paper involves an empirical study concerning the distributive role of the welfare state in the two countries. Other studies have often concentrated on the redistributive effect of fiscal policy on different income brackets. This study examines the net benefit/burden position of the working population with respect to state expenditures and taxes in the postwar period. This measurement will enable us to find out whether the working population has received a net gain (or net social wage). This study will discuss how the expansion of social expenditures and the trend of the 'net social wage' can be linked to distinct forms of economic and social organization. This study provides an empirical foundation for analyzing the growing significance of the 'social wage', or the collectivization of consumption, and the share of social or collective consumption in the total consumption of the working population in recent decades. The paper addresses three other major questions. The first question is whether the expansion of social expenditures has posed any drag on capital accumulation and economic growth. The findings of this study provide an analytical foundation to evaluate the neoconservative claim that the welfare state is itself the source of economic stagnation that leads to the crisis of the welfare state. The second question is whether the increasing ideological challenges from the right and the competitive pressures of globalization have led to retrenchment of the American welfare state in recent decades. The third question is how social policies have performed in the presence of rising inequalities in recent decades.

Keywords: the welfare state, social wage, The United States, limits to growth

Procedia PDF Downloads 187
78 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic

Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink

Abstract:

Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge's vibrations as realistically as possible. On the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow for applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which should take into account the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations. However, numerous studies show that when the current standard specifications are applied, the calculation results for the bridge accelerations are in many cases still too high compared to the measured bridge accelerations, while in other cases, they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with a ballasted track was developed to address this issue. In this contribution, several different approaches to determine the additional damping of the supporting structure considering the vehicle-bridge interaction when using the MLM are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two additional recently published alternative formulations derived from analytical approaches. For a bridge catalogue of 65 existing bridges in Austria in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow an assessment of the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results can reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than the normative specifications.
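For orientation, the sketch below shows how the moving load model described above can be reduced to a single-mode modal equation for a simply supported bridge, with any additional damping simply entering through the modal damping ratio. The span, stiffness, axle loads and train speed are hypothetical values, not the Austrian bridge catalogue or the trains used in the study.

```python
# Sketch of the moving load model (MLM) for a simply supported bridge, reduced
# to its first bending mode. A train of constant axle loads crosses at speed v;
# "additional damping" enters through an increased damping ratio zeta.
import numpy as np
from scipy.integrate import solve_ivp

L = 25.0                 # span, m
EI = 8.0e10              # bending stiffness, N*m^2
mu = 15.0e3              # mass per unit length, kg/m
zeta = 0.015             # modal damping ratio (base value + additional damping)
omega1 = (np.pi / L) ** 2 * np.sqrt(EI / mu)   # first natural circular frequency
m_modal = mu * L / 2.0                         # first-mode modal mass

P = 170e3                                      # axle load, N (hypothetical)
axle_offsets = np.arange(0, 10) * 25.0         # ten axles spaced 25 m apart
v = 250 / 3.6                                  # train speed, m/s

def modal_force(t):
    x = v * t - axle_offsets                   # axle positions along the span
    on_bridge = (x > 0) & (x < L)
    return np.sum(P * np.sin(np.pi * x[on_bridge] / L))

def rhs(t, y):
    q, qdot = y
    qddot = (modal_force(t) - 2 * zeta * omega1 * m_modal * qdot
             - omega1**2 * m_modal * q) / m_modal
    return [qdot, qddot]

t_end = (axle_offsets[-1] + L) / v + 2.0       # until the last axle leaves, plus free decay
sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=1e-3)

# Mid-span acceleration is the design-relevant quantity compared in the study.
acc = np.gradient(sol.y[1], sol.t)
print(f"Peak mid-span acceleration: {np.max(np.abs(acc)):.2f} m/s^2")
```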

Keywords: Additional Damping Method, Bridge Dynamics, High-Speed Railway Traffic, Vehicle-Bridge-Interaction

Procedia PDF Downloads 142
77 Interpretation of Time Series Groundwater Monitoring Data Using Analytical Impulse Response Function Method to Understand Groundwater Processes Along the Murray River Floodplain at Gunbower Forest, Victoria, Australia

Authors: Mark Hocking

Abstract:

There is concern about the potential impact environmental flooding may have on groundwater levels and salinity processes in the Murray-Darling Basin. A study was undertaken to determine whether environmental flooding of the Gunbower Forest, which is in Victoria, Australia, has an impact on groundwater level and salinity. To assess the impact, Impulse Response Functions (IRFs) are applied to time series groundwater monitoring well data in the area surrounding Gunbower Forest. It is found that rainfall is the primary driver of seasonal water table fluctuation, and the Murray River water level is a secondary contributor to the water table fluctuations. The dominant process that influenced the long-term water table level and salinity conditions is associated with pressure changes in the deep regional aquifer. The study demonstrates that groundwater level fluctuations in the vicinity of Gunbower Forest do not correlate with flooding (natural or managed). Groundwater recharge is calculated by applying the bore hydrograph method to the rainfall-attributed forcing function fluctuations. Data collected from thirty-three bores between 1990 and 2020 were processed to determine a 30-year average groundwater recharge rate. A 5% specific yield of the unconfined aquifer is assumed based on previously published data. It is found that the rainfall-attributed mean annual groundwater recharge varied between 2 mm/year and 189 mm/year, with a median of 33.6 mm/year. Surface water recharge is also calculated by analysing the surface-water-attributed forcing function fluctuations and is found to be as high as 37 mm/year, with most of the high values in the vicinity of rivers or agricultural land. There is a long-term declining trend in the regional aquifer, with most water table bores showing an average fall of 20 cm/year, independent of rainfall, over the past 30 years. It is found that the groundwater level beneath the Gunbower Forest is dominated by groundwater evapotranspiration. Evapotranspiration lowers the water table by as much as 0.5 m within the forest, thereby causing a relative groundwater level depression under the Gunbower Forest. Historical data show that groundwater salinity in the area varies and has an electrical conductivity of up to 45,000 µS/cm (comparable to seawater). High groundwater salinity occurs both within and outside the Gunbower Forest as well as adjacent to the Murray River. Available groundwater salinity data suggest trends are generally stable; however, data quality and collection frequency could be improved. This study shows that at the majority of locations analyzed, groundwater recharge occurred due to both rainfall and water loss from the Murray River. It is found that deep groundwater pressures determine the base groundwater level, and fluctuations of the deeper aquifer pressures determine the environmental interaction at the water surface. Local groundwater processes, such as high evapotranspiration rates in Gunbower Forest, have the capacity to lower the water table locally. The rise or fall of the regional aquifer water level has the greatest influence on the groundwater salinity in and around Gunbower Forest.
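A simplified sketch of the two analysis steps described above is given below: convolving rainfall with an exponential impulse response function to isolate the rainfall-attributed head fluctuation, and estimating recharge with the bore-hydrograph method using the 5% specific yield assumed in the study. The rainfall series and IRF parameters are synthetic placeholders, not the Gunbower data.

```python
# Simplified sketch: (1) model the rainfall-attributed head fluctuation as the
# convolution of daily rainfall with an exponential impulse response function,
# and (2) estimate recharge with the bore-hydrograph method (Sy = 0.05).
import numpy as np

rng = np.random.default_rng(1)
days = 365
rainfall = rng.gamma(shape=0.3, scale=8.0, size=days)        # mm/day, synthetic

# Exponential IRF: head response per mm of rain, decaying with a ~60-day
# memory (hypothetical gain and time constant).
tau, gain = 60.0, 0.002
lag = np.arange(0, 5 * int(tau))
irf = gain * np.exp(-lag / tau) / tau

head_rainfall = np.convolve(rainfall, irf)[:days]            # rainfall-attributed head, m

# Bore-hydrograph recharge: specific yield times the sum of head rises.
specific_yield = 0.05
rises = np.diff(head_rainfall)
recharge_m = specific_yield * rises[rises > 0].sum()
print(f"Rainfall-attributed recharge: {recharge_m * 1000:.1f} mm/year")
```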

Keywords: groundwater data interpretation, groundwater monitoring, hydrogeology, impulse response function

Procedia PDF Downloads 30
76 Eco-City Planning and Urban Design in Lagos, Nigeria: Recent Innovations, Trends, Concerns, Challenges, and Solutions

Authors: Dahunsi Michael Oluseyi

Abstract:

This paper aims to extensively examine eco-city planning and urban design in Lagos, Nigeria. It will delve into the city's developments, challenges, and potential solutions to offer insights for sustainable urban growth within the rapidly expanding urban landscape. The research will scrutinize recent innovations, emerging trends, and practical remedies to promote ecological sustainability within an urban framework. It will encompass an in-depth review of current literature, case studies, and qualitative analyses, thereby augmenting the depth and breadth of the research. The objectives are to assess the current eco-city planning initiatives and urban design trends in Lagos, Nigeria, considering the city's unique characteristics and challenges; to identify and analyze the challenges encountered during the implementation of eco-friendly urban developments in Lagos; to explore and evaluate the innovative and practical solutions implemented to promote sustainability within the city; and to provide comprehensive insights and actionable recommendations for policymakers, urban planners, and other stakeholders involved in sustainable urban development in Lagos. The rapid urbanization of Lagos has brought forth a myriad of challenges, including a burgeoning population, inadequate infrastructure, waste management issues, and environmental pollution. Eco-city planning has emerged as a promising approach to addressing these obstacles, striving to create urban spaces that are more habitable, resource-efficient, and environmentally friendly. This research holds substantial importance in exploring the application of eco-city planning principles within a megacity like Lagos. Analyzing recent innovations, trends, concerns, challenges, and solutions provides invaluable insights for policymakers, urban planners, and stakeholders dedicated to fostering sustainable urban development. The methodologies employed in this research are structured to embrace a multifaceted approach, aiming to facilitate a comprehensive understanding of the complexities inherent in eco-city planning and urban design in Lagos, Nigeria. This methodological framework is designed to encompass diverse strategies and analytical tools to effectively capture the multidimensional aspects of sustainable urban development. It involves an in-depth analysis of academic publications, governmental reports, and urban planning documents to highlight global eco-city planning trends and gather Lagos-specific insights; a detailed exploration of eco-friendly initiatives and projects in Lagos to evaluate successes, challenges, and strategies for addressing environmental concerns; and the engagement of key stakeholders, including urban planners, policymakers, environmental experts, and residents, to collect firsthand perspectives, concerns, and insights. In addition, a thorough analysis will be carried out on data collected from literature reviews, case studies, interviews, and surveys to extract prevalent patterns, challenges, and innovative solutions from diverse sources. This study aims to contribute to the discourse on sustainable urban development by offering a comprehensive analysis of eco-city planning in Lagos and providing practical recommendations for a more sustainable urban future.

Keywords: eco-friendly, innovation, sustainability, stakeholders

Procedia PDF Downloads 34
75 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. Type IV pressure vessels are currently the most popular and widely developed technology for on-board storage, owing to their high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing the overall material usage, yet requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which type IV pressure vessels are manufactured, their design and optimization are a nuanced subject. The manifold of stacking sequence and fiber orientation variation possibilities has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time-consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process of different tank designs regarding various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is automatically generated using an Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation on the dome region is the most crucial step of the modeling, which is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to reduce the computational complexity and calculation time. Finally, the results are evaluated and compared regarding the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of the tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and indicate the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analyses. Subsequently, machine learning, for example, can be used to obtain the optimum directly from this data pool without running further simulations.
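By way of illustration, the sketch below reproduces the kind of analytical dome calculation mentioned above: the geodesic winding angle from Clairaut's relation and a commonly used approximation of the layer thickness build-up towards the polar opening. It is not the authors' Abaqus-Python script, and the geometry values are invented.

```python
# Sketch of the analytical dome calculations that feed the automated model
# generation: geodesic winding angle from Clairaut's relation and an
# approximate layer thickness distribution. Geometry values are hypothetical.
import numpy as np

r_cyl = 0.175          # cylinder radius, m (hypothetical)
r_polar = 0.040        # polar opening radius, m (hypothetical)
t_cyl = 0.0006         # single-layer thickness on the cylinder, m (hypothetical)

# Radial stations along the dome from the cylinder towards the polar opening.
r = np.linspace(r_cyl, r_polar * 1.02, 50)

# Clairaut's relation for a geodesic path: r * sin(alpha) = r_polar.
alpha = np.arcsin(np.clip(r_polar / r, -1.0, 1.0))

# Fibre-volume conservation gives an approximate thickness distribution,
# t(r) ~ t_cyl * (r_cyl * cos(alpha_cyl)) / (r * cos(alpha)), which grows
# (and eventually diverges) towards the polar opening.
alpha_cyl = np.arcsin(r_polar / r_cyl)
thickness = t_cyl * (r_cyl * np.cos(alpha_cyl)) / (r * np.cos(alpha))

for ri, ai, ti in zip(r[::12], np.degrees(alpha[::12]), thickness[::12] * 1000):
    print(f"r = {ri:.3f} m  winding angle = {ai:5.1f} deg  layer thickness = {ti:.2f} mm")
```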

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 96
74 Assessment of Potential Chemical Exposure to Betamethasone Valerate and Clobetasol Propionate in Pharmaceutical Manufacturing Laboratories

Authors: Nadeen Felemban, Hamsa Banjer, Rabaah Jaafari

Abstract:

One of the most common hazards in the pharmaceutical industry is the chemical hazard, which can cause harm or lead to occupational diseases/illnesses due to chronic exposure to hazardous substances. Therefore, a chemical agent management system is required, including hazard identification, risk assessment, controls for specific hazards and inspections, to keep the workplace healthy and safe. However, routine management monitoring is also required to verify the effectiveness of the control measures. Moreover, Betamethasone Valerate and Clobetasol Propionate are among the APIs (Active Pharmaceutical Ingredients) with a highly hazardous classification, Occupational Hazard Category 4 (OHC 4), which requires full containment (ECA-D) during handling to avoid chemical exposure. According to the Safety Data Sheets, these chemicals are reproductive toxicants (H360D), which may affect female workers' health, cause fatal damage to an unborn child, or impair fertility. In this study, a qualitative chemical risk assessment (qCRA) was conducted to assess the chemical exposure during handling of Betamethasone Valerate and Clobetasol Propionate in pharmaceutical laboratories. The outcomes of the qCRA identified a risk of potential chemical exposure (risk rating 8, amber risk). Therefore, immediate actions were taken to ensure interim controls (according to the hierarchy of controls) are in place and in use to minimize the risk of chemical exposure. No open handling should be performed outside the Steroid Glove Box Isolator (SGB), and the required Personal Protective Equipment (PPE) must be used. The PPE includes coveralls, nitrile gloves, safety shoes and powered air-purifying respirators (PAPR). Furthermore, a quantitative assessment (personal air sampling) was conducted to verify the effectiveness of the engineering controls (SGB isolator) and to confirm whether there is chemical exposure, as indicated earlier by the qCRA. Three personal air samples were collected using an air sampling pump and filter (IOM2 filters, 25 mm glass fiber media). The collected samples were analyzed by HPLC in the BV lab, and the measured concentrations were reported in ug/m3 with reference to the 8-hour Occupational Exposure Limits (8-hr OELs, 8-hr TWA) for each analyte. The analytical results are expressed as 8-hr TWAs (8-hour time-weighted averages) and analyzed using Bayesian statistics (IHDataAnalyst). The results of the Bayesian likelihood graph indicate category 0, which means exposures are de minimis, trivial, or non-existent, and employees have little to no exposure. These results also indicate that the three samples are representative, with very low variation (SD = 0.0014). In conclusion, the engineering controls were effective in protecting the operators from such exposure. However, routine chemical monitoring is required every 3 years unless there is a change in the process or type of chemicals. Also, frequent management monitoring (daily, weekly, and monthly) is required to ensure the control measures are in place and in use. Furthermore, a Similar Exposure Group (SEG) was identified in this activity and included in the annual health surveillance for health monitoring.
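As a worked illustration of the 8-hour TWA comparison described above (the concentration, sampling duration and OEL are placeholders, not the study's measured values):

```python
# Worked example of an 8-hour TWA calculation for a personal air sample
# compared against an Occupational Exposure Limit (OEL). Values are
# hypothetical placeholders.
sampled_concentration = 0.04   # ug/m3 measured over the sampling period (hypothetical)
sampled_minutes = 180          # duration of the personal sample (hypothetical)
oel_8hr = 0.10                 # 8-hour OEL for the analyte, ug/m3 (hypothetical)

# Assume zero exposure outside the sampled task (only valid if the task is the
# sole source of exposure during the shift).
twa_8hr = sampled_concentration * sampled_minutes / 480.0

exposure_ratio = twa_8hr / oel_8hr
print(f"8-hr TWA = {twa_8hr:.3f} ug/m3 ({exposure_ratio:.0%} of the OEL)")
```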

Keywords: occupational health and safety, risk assessment, chemical exposure, hierarchy of control, reproductive

Procedia PDF Downloads 153
73 Brittle Fracture Tests on Steel Bridge Bearings: Application of the Potential Drop Method

Authors: Natalie Hoyer

Abstract:

Usually, steel structures are designed for the upper region of the steel toughness-temperature curve. To address the reduced toughness properties in the temperature transition range, additional safety assessments based on fracture mechanics are necessary. These assessments enable the appropriate selection of steel materials to prevent brittle fracture. In this context, recommendations were established in 2011 to regulate the appropriate selection of steel grades for bridge bearing components. However, these recommendations are no longer fully aligned with more recent insights: designing bridge bearings and their components in accordance with DIN EN 1337 and the relevant sections of DIN EN 1993 has led to an increasing trend of using large plate thicknesses, especially for long-span bridges. However, these plate thicknesses surpass the application limits specified in the national appendix of DIN EN 1993-2. Furthermore, compliance with the regulations outlined in DIN EN 1993-1-10 regarding material toughness and through-thickness properties requires some further modifications. Therefore, these standards cannot be directly applied to the material selection for bearings without additional information. In addition, recent findings indicate that certain bridge bearing components are subjected to high fatigue loads, necessitating consideration in structural design, material selection, and calculations. To address this issue, the German Center for Rail Traffic Research initiated a research project aimed at developing a proposal to enhance the existing standards. This proposal seeks to establish guidelines for the selection of steel materials for bridge bearings to prevent brittle fracture, particularly for thick plates and components exposed to specific fatigue loads. The results derived from theoretical analyses, including finite element simulations and analytical calculations, are verified through large-scale component testing. During these large-scale tests, in which a brittle failure is deliberately induced in a bearing component, an artificially generated defect is introduced into the specimen at the predetermined hotspot. Subsequently, a dynamic load is applied until crack initiation occurs, replicating the realistic condition of a sharp notch resembling a fatigue crack. To stop the action of the dynamic load in time, it is important to precisely determine the point at which the crack transitions from stable to unstable crack growth. To achieve this, the potential drop measurement method is employed. The proposed paper discusses the choice of measurement method (alternating current potential drop (ACPD) or direct current potential drop (DCPD)), presents results from correlations with the created FE models, and proposes a possible new approach for introducing beach marks into the fracture surface within the framework of potential drop measurement.
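For DCPD measurements, a common way to convert the potential ratio into a crack size is Johnson's closed-form relationship for a centre-cracked plate; the sketch below assumes that idealized geometry purely for illustration. The actual bearing components would require a geometry-specific, e.g. FE-calibrated, relation, in line with the correlation against FE models mentioned above, and all dimensions and voltage ratios are placeholders.

```python
# Sketch of the Johnson relationship often used to convert DC potential drop
# readings into crack length for a centre-cracked plate. Not the geometry of
# the tested bearing components; dimensions and ratios are hypothetical.
import numpy as np

W = 0.050     # half-width of the plate, m (hypothetical)
y = 0.005     # half-distance between the potential probes, m (hypothetical)
a0 = 0.002    # initial (reference) half crack length, m (hypothetical)

def crack_length_johnson(V_ratio, a0, y, W):
    """Half crack length a for a measured potential ratio V/V0 (Johnson, DCPD)."""
    c = np.cosh(np.pi * y / (2 * W))
    k0 = np.arccosh(c / np.cos(np.pi * a0 / (2 * W)))
    return (2 * W / np.pi) * np.arccos(c / np.cosh(V_ratio * k0))

for v_ratio in (1.0, 1.2, 1.5, 2.0):
    a = crack_length_johnson(v_ratio, a0, y, W)
    print(f"V/V0 = {v_ratio:.1f} -> half crack length a = {a * 1000:.2f} mm")
```

In a test of the kind described above, the measured potential ratio would be tracked in real time, and the load would be stopped once the inferred crack size approaches the predicted transition to unstable growth.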

Keywords: beach marking, bridge bearing design, brittle fracture, design for fatigue, potential drop

Procedia PDF Downloads 1
72 Sustainability in Higher Education: A Case of Transition Management from a Private University in Turkey (Ongoing Study)

Authors: Ayse Collins

Abstract:

The Agenda 2030 puts Higher Education Institutions (HEIs) in a position where they are expected to emphasize ways to promote sustainability. However, it is still unclear: a) how sustainability is understood, and b) which actions have been taken in both discourse and practice by HEIs regarding the three pillars of sustainability: society, environment, and economy. There are models of sustainable universities developed by different authors from different countries; for example, the Global Reporting Initiative (GRI) methodology offers a variety of indicators to diagnose performance. However, these models were never developed for universities in particular. Any model, in this sense, cannot be completed adequately without defining the appropriate tools to measure, analyze and control the performance of initiatives. There is a need to conduct research in different universities from different countries to understand where we stand in terms of sustainable higher education. Therefore, this study aims at exploring the actions taken by a university in Ankara, Turkey, since Agenda 2030 should consider localizing its objectives and targets according to a certain geography. This university recently announced 2021-2022 as its "Sustainability Year." This research is therefore a multi-methodology longitudinal study that uses the theoretical framework of organization and transition management (TM). It is designed to examine the activities as being strategic, tactical, operational, and reflexive in nature and covers six main aspects: academic community, administrative staff, operations and services, teaching, research, and extension. The preliminary research addresses the role of top university governance, the perceptions of stakeholders (students, instructors, administrative and support staff) regarding sustainability, and the level of achievement at the mid-term evaluation and the final, end-of-year evaluation. TM theory is a multi-scale, multi-actor, process-oriented approach that provides an analytical framework to explore and promote change in social systems. The stages and respective data collection methods in this research are as follows. Pre-development stage: a) semi-structured interviews with university governance, b) an open-ended survey with faculty, students, and administrative staff, c) semi-structured interviews with support staff, and d) analysis of current secondary data on sustainability. Take-off stage: a) semi-structured interviews with university governance, faculty, students, administrative and support staff, and b) analysis of secondary data. Breakthrough/stabilization stage: a) a survey with all stakeholders at the university, and b) secondary data analysis using selected indicators for the university's first sustainability report. The findings from the pre-development stage highlight how stakeholders, coming from different faculties and disciplines with different identities and characteristics, face the sustainability challenge differently. Though similar sustainable development goals (social, environmental, and economic) are set in the institution, there are differences across disciplines and among different stakeholders, which need to be considered to reach the optimum goal. It is believed that the results will help change HEIs' organizational culture to embed sustainability values in their strategic planning and academic and managerial work, by devoting enough time and resources to coping with sustainability successfully.

Keywords: higher education, sustainability, sustainability auditing, transition management

Procedia PDF Downloads 90
71 Sustainable Strategies for Managing Rural Tourism in Abyaneh Village, Isfahan

Authors: Hoda Manafian, Stephen Holland

Abstract:

Problem statement: Rural areas in Iran are among the most popular tourism destinations. Abyaneh Village is one of them, with a long history (more than 1,500 years); it is a national heritage site and has been on the UNESCO World Heritage tentative list since 2007. There is a considerable foundation of religious-cultural heritage as well as agricultural history and activities. However, this heritage site suffers from mass tourism beyond its social and physical carrying capacity, since the annual number of tourists exceeds 500,000. Meanwhile, there are four adjacent villages around Abyaneh which could benefit from tourism, and local managers could at the same time redistribute Abyaneh's tourist flows to those other villages, especially in the high season. The other villages have some cultural and natural tourism attractions as well. Goal: The main goal of this study is to identify a feasible development strategy according to the current strengths, weaknesses, opportunities and threats of rural tourism in this area (Abyaneh Village and the four adjacent villages). This development strategy can lead to sustainable management of these destinations. Method: To this end, we used SWOT analysis as a well-established tool for conducting a situational analysis to define a sustainable development strategy. The procedure included the following steps: 1) Extracting the variables of the SWOT chart based on interviews with tourism experts (n=13) and local elites (n=17) and the personal observations of the researcher. 2) Ranking the extracted variables from 1 to 5 by 13 tourism experts in the Isfahan Cultural Heritage, Handicrafts and Tourism Organization (ICHTO). 3) Assigning weights to the ranked variables using Expert Choice software and the Analytical Hierarchy Process (AHP) method. 4) Defining the Total Weighted Score (TWS) for each part of the SWOT chart. 5) Identifying the strategic position according to the TWS. 6) Selecting the best development strategy based on the defined position using the Strategic Position and Action Evaluation (SPACE) matrix. 7) Assessing the Probability of Strategic Success (PSS) for the preferred strategy using relevant formulas. 8) Defining two feasible alternatives for sustainable development. Results and recommendations: Cultural heritage attractions were the first-ranked variable in the strengths chart, while the lack of sufficient amenities for one-day tourists (catering, restrooms, parking, and accommodation) was the first-ranked weakness. The strategic position was in the ST (Strength-Threat) quadrant, which is a maxi-mini position. According to this position, we suggest a 'Competitive Strategy' as the development strategy, which means relying on strengths in order to neutralize threats. The Probability of Strategic Success assessment result of 0.6 shows that this strategy could be successful. The preferred approach for the competitive strategy could be rebranding the tourism market in this area. Rebranding the market can be achieved through two main alternatives based on the current strengths and threats: 1) defining a 'Heritage Corridor' from the first adjacent village to Abyaneh as the final destination, and 2) focusing on 'educational tourism' rather than mass tourism, as well as green tourism, by developing agritourism in that corridor.
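As an illustration of steps 3) and 4) above, the sketch below derives AHP priority weights from a pairwise comparison matrix via the principal eigenvector and combines them with 1-5 ratings into a Total Weighted Score. The comparison matrix and ratings are invented for illustration; the study used Expert Choice software with the experts' actual judgments.

```python
# Illustrative sketch: AHP priority weights from a pairwise comparison matrix
# (principal eigenvector method) and a Total Weighted Score as sum(weight * rating).
import numpy as np

# Pairwise comparison of three strength variables (Saaty's 1-9 scale, made up).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()          # normalized priority weights

ratings = np.array([5, 4, 3])                  # expert rankings on the 1-5 scale
tws = float(np.dot(weights, ratings))          # Total Weighted Score for this chart

print("Weights:", np.round(weights, 3), " TWS:", round(tws, 2))
```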

Keywords: Abyaneh village, rural tourism, SWOT analysis, sustainable strategies

Procedia PDF Downloads 355