Search results for: strahler method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11531

7871 Thermosonic Devulcanization of Waste Ground Rubber Tires by Quaternary Ammonium-Based Ternary Deep Eutectic Solvents and the Effect of α-Hydrogen

Authors: Ricky Saputra, Rashmi Walvekar, Mohammad Khalid

Abstract:

Landfills, water contamination, and toxic gas emission are a few of the environmental impacts caused by the increasing number of waste rubber tires (WRT). In spite of this concerning issue, only minimal efforts are taken to reclaim or recycle these wastes, as their products are generally not profitable for companies. Unlike the typical reclamation process, devulcanization is a method to selectively cleave sulfidic bonds within vulcanizates and avoid the polymeric scissions that compromise the elastomer’s mechanical and tensile properties. The process also produces devulcanizates that are re-processable, similar to virgin rubber. Often, a devulcanizing agent is needed. In the current study, novel and sustainable ammonium chloride-based ternary deep eutectic solvents (TDES), with different numbers of α-hydrogens, were utilised to devulcanize ground rubber tire (GRT) in an effort to apply green chemistry to this issue. 40-mesh GRT was soaked for 1 day with different TDESs, sonicated at 37-80 kHz for 60-120 min, and heated at 100-140 °C for 30-90 min. Devulcanizates were then filtered, dried, and evaluated based on the percentage of devulcanization by means of the Flory-Rehner calculation and the swelling index. The results show that an increasing number of α-Hs increases the degree of devulcanization; the value achieved was around eighty percent, thirty percent higher than the typical industrial autoclave method. The resulting bondages of the devulcanizates were also analysed by Fourier transform infrared spectrometry (FTIR), Horikx fitting, and thermogravimetric analysis (TGA). The former two confirm that only sulfidic scissions were experienced by the GRT through the treatment, while the latter proves the absence or negligibility of carbon-chain scission.
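
As a reading aid, the sketch below shows how the degree of devulcanization can be estimated from swelling measurements via the Flory-Rehner equation. It is a minimal illustration only: the solvent parameters and swelling values are assumed placeholders, not the study's data.

```python
import math

def crosslink_density(v_r, chi, v_s):
    """Flory-Rehner estimate of crosslink density (mol/cm^3) from the rubber
    volume fraction v_r in the swollen gel, the polymer-solvent interaction
    parameter chi, and the solvent molar volume v_s (cm^3/mol)."""
    return -(math.log(1.0 - v_r) + v_r + chi * v_r**2) / (
        v_s * (v_r**(1.0 / 3.0) - v_r / 2.0)
    )

def devulcanization_percent(nu_treated, nu_untreated):
    """Degree of devulcanization relative to the untreated GRT."""
    return (1.0 - nu_treated / nu_untreated) * 100.0

# Illustrative numbers only (toluene-like solvent, hypothetical swelling results).
V_S, CHI = 106.3, 0.39
nu_grt = crosslink_density(v_r=0.25, chi=CHI, v_s=V_S)   # untreated GRT
nu_des = crosslink_density(v_r=0.08, chi=CHI, v_s=V_S)   # TDES-treated sample
print(f"devulcanization ~ {devulcanization_percent(nu_des, nu_grt):.1f} %")
```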

Keywords: ammonium, sustainable, deep eutectic solvent, α-hydrogen, waste rubber tire

Procedia PDF Downloads 127
7870 Measurement of CES Production Functions Considering Energy as an Input

Authors: Donglan Zha, Jiansong Si

Abstract:

Because of its flexibility, the CES production function attracts much interest in economic growth and programming models, as well as in macroeconomic and micro-macro models. This paper focuses on the development of, and estimation methods for, CES production functions that consider energy as an input. We leave for future research the relaxation of the assumption of constant returns to scale, the introduction of potential input factors, and the generalization of the optimal nested form of multi-factor production functions.
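
For reference, a common three-factor CES specification with capital K, labour L, and energy E under constant returns to scale is written below; this is an illustrative member of the class of functions discussed, not necessarily the exact nesting estimated in the paper.

\[
Y = A\left[\delta_K K^{-\rho} + \delta_L L^{-\rho} + \delta_E E^{-\rho}\right]^{-1/\rho},
\qquad \delta_K + \delta_L + \delta_E = 1,
\qquad \sigma = \frac{1}{1+\rho},
\]

where \(\sigma\) is the (common) elasticity of substitution between inputs.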

Keywords: bias of technical change, CES production function, elasticity of substitution, energy input

Procedia PDF Downloads 282
7869 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In the paper, a (hierarchical) approach to the analysis of thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on the statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity, which, in this paper, is extended to include thermal effects. After exposition of the general principles, the approach is applied in the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows composites with interphases to be analyzed using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill’s energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated considering spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the radius of the nanoparticle (for a fixed volume fraction of nanoparticles) for the different interphase models is compared with and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 185
7868 Passenger Preferences on Airline Check-In Methods: Traditional Counter Check-In Versus Common-Use Self-Service Kiosk

Authors: Cruz Queen Allysa Rose, Bautista Joymeeh Anne, Lantoria Kaye, Barretto Katya Louise

Abstract:

The study presents the preferences of passengers regarding the quality of service provided by the two airline check-in methods currently present in airports: traditional counter check-in and common-use self-service kiosks. A previous study has shown that airlines perceive self-service kiosks alone as sufficient to ensure adequate service and customer satisfaction, whereas agents and passengers stated that kiosks alone are not enough and that human interaction is essential. With reference to these former studies, which established opposing ideas about which airline check-in method is more favorable to employ, the purpose of this study is to present a recommendation that somewhat fills in the gap between the conflicting ideas by comparing the perceived quality of service through the RATER model. Furthermore, this study discusses the major competencies present in each method, supported by two theories: the FIRO Theory of Needs, upholding the importance of inclusion, control, and affection, and Queueing Theory, which points out the discipline of passengers and the length of the queue line as important factors affecting service quality. The findings of the study were based on data gathered by the researchers from selected Thomasian third-year and fourth-year college students enrolled in the first semester of the academic year 2014-2015 who had already experienced both airline check-in methods, selected through stratified probability sampling. The statistical treatments applied to interpret the data were the mean, frequency, standard deviation, t-test, logistic regression, and chi-square test. The study revealed that passengers reported greater satisfaction with common-use self-service kiosks than with the traditional counter check-in.

Keywords: traditional counter check-in, common-use self-service kiosks, airline check-in methods

Procedia PDF Downloads 407
7867 E-Learning Platform for School Kids

Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.

Abstract:

E-learning is a crucial component of intelligent education, and even in the midst of a pandemic, it is becoming increasingly important in the educational system. Several e-learning programs are accessible to students. Here, we decided to create an e-learning framework for children. We have found a few issues that teachers are having with their online classes. When there are numerous students in an online classroom, how does a teacher recognize a student's focus on academics and below-the-surface behaviors? Some kids are not paying attention in class, and others are napping; the teacher is unable to keep track of each and every student. A key challenge in e-learning is online examinations, because students can cheat easily during online exams; hence, exam proctoring is needed. Here we propose an automated online exam cheating detection method using a web camera. The purpose of this project is also to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The game will be accessible via a web browser, and the imagery in the game is drawn in a cartoonish style. This will help students learn math through games. Everything in this day and age is moving towards automation; however, automatic answer evaluation is only available for MCQ-based questions. As a result, the checker has a difficult time evaluating theory solutions. The current system requires more manpower and takes a long time to evaluate responses; it is also possible for two identical responses to be marked differently and receive two different grades. As a result, this application employs machine learning techniques to provide an automatic evaluation of subjective responses based on keywords provided to the computer that are matched against the student input, resulting in a fair distribution of marks. In addition, it will save time and manpower. We used deep learning, machine learning, image processing, and natural language technologies to develop these research components.
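
The sketch below is a minimal illustration of the kind of keyword-based scoring of subjective answers described above. The keyword list, weights, and scoring rule are hypothetical; a production system would add the NLP preprocessing (stemming, synonym handling) implied in the abstract.

```python
import re

def score_answer(answer: str, keywords: dict[str, float], full_marks: float = 10.0) -> float:
    """Award a share of full_marks proportional to the weighted fraction of
    expected keywords found in the student's answer (hypothetical scheme)."""
    tokens = set(re.findall(r"[a-z]+", answer.lower()))
    total = sum(keywords.values())
    hit = sum(w for kw, w in keywords.items() if kw in tokens)
    return round(full_marks * hit / total, 2)

# Hypothetical model-answer keywords for a question on fractions.
expected = {"numerator": 1.0, "denominator": 1.0, "equivalent": 0.5, "simplify": 0.5}
print(score_answer("You simplify by dividing numerator and denominator.", expected))
```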

Keywords: math, education games, e-learning platform, artificial intelligence

Procedia PDF Downloads 156
7866 The Role of Financial and Non-Financial Institutions in Promoting Entrepreneurship in Micro, Small and Medium Enterprises

Authors: Lemuel David

Abstract:

The importance of the Micro, Small, and Medium Enterprises sector is well recognized for its legitimate contribution to the macroeconomic objectives of the Republic of Liberia, such as generation of employment, output, exports, and enhancing entrepreneurship. Right now, medium and small enterprises account for about 99 percent of the industrial units in the country, contributing 60 percent of the manufacturing sector output and approximately one-third of the nation’s exports. The role of various financial institutions, like ECO bank, and non-financial institutions, like Bearch Limited, in supporting and promoting the growth of micro, small, and medium enterprises is unique. A small enterprise or entrepreneur gets many types of assistance from different institutions for varied purposes in the course of the entrepreneurial journey. This paper focuses on the factors related to financial and non-financial institutional support to entrepreneurs for the growth of medium and small enterprises in the Republic of Liberia. The significance of this paper is to inform policy and institutional support for medium and small enterprises by presenting the views of entrepreneurs about financial and non-financial support systems in the Republic of Liberia. This study was carried out through a survey method with the use of questionnaires. The population for this study consisted of all medium and small enterprises registered during the years 2004-2014 in the Republic of Liberia. The sampling method employed for this study was a simple random technique, and a sample size of 400 was determined. Data for the study were collected using a standard questionnaire consisting of two parts: the first part consisted of questions on the profile of the respondents, while the second part covered (1) financial promotional factors and (2) non-financial promotional factors. The results of the study are based on the financial and non-financial supporting activities provided by institutions to medium and small enterprises. After investigation, it has been found that there is no difference in the support given by financial institutions and non-financial institutions. Entrepreneurs perceived collateral-free schemes and physical infrastructure support as the factors contributing most to the entry and growth of medium and small enterprises.

Keywords: micro, small, and medium enterprises, financial institutions, entrepreneurship

Procedia PDF Downloads 98
7865 Assessment of the Impact of Regular Pilates Exercises on Static Balance in Healthy Adult Women: Preliminary Report

Authors: Anna Słupik, Krzysztof Jaworski, Anna Mosiołek, Dariusz Białoszewski

Abstract:

Background: Maintaining correct body balance is essential in the prevention of falls in the elderly, which is especially important for women because of postmenopausal osteoporosis and the serious consequences of falls. One exercise method that is very popular among adults and that may positively affect body balance is the Pilates method. The aim of the study was to evaluate the effect of regular Pilates exercises on the ability to maintain body balance under static conditions in healthy adult women. Material and methods: The study group consisted of 20 healthy women attending Pilates twice a week for at least 1 year. The control group consisted of 20 healthy, physically inactive women. Only women in the age range from 35 to 50 years without pain in the musculoskeletal system or other pain were qualified for the groups. Body balance was assessed using the MatScan VersaTek platform with the Sway Analysis Module based on Matscan Clinical 6.7 software. Balance was evaluated under the following conditions: standing on both feet with eyes open, standing on both feet with eyes closed, and one-leg standing (separately on the right and left foot) with eyes open. Each test lasted 30 seconds. The following parameters were calculated: estimated size of the 95% confidence ellipse, the distance covered by the Center of Gravity (COG), the size of the maximum shift in the sagittal and frontal planes, and the load distribution between the left and right foot, as well as between the rear- and forefoot. Results: It was found that there is a significant difference between the groups in favor of the study group in the size of the confidence ellipse and the maximum shifts of the COG in the sagittal plane during standing on both feet, both with eyes open and closed (p < 0.05). While standing on one leg (both right and left) with eyes open, there was a significant difference in favor of the study group in terms of the size of the confidence ellipse and the size of the maximum shifts in the sagittal and frontal planes (p < 0.05). There were no differences in the distribution of load between the right and left foot (standing on both feet), nor between the fore- and rearfoot (in standing on both feet or on one leg). Conclusions: 1. Static balance in women exercising regularly by the Pilates method is better than in inactive women, which may in the future prevent falls and their consequences. 2. The observed differences in maintaining balance in the frontal plane in one-leg standing may indicate a positive impact of Pilates exercises on the ability to maintain global balance with a reduced support surface. 3. The Pilates method can be used as a form of preventive therapy for all people who are expected to have problems with body balance in the future, for example in chronic neurological disorders or vestibular problems. 4. The results have shown that further prospective randomized research on a larger and more representative group is needed.
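
For readers unfamiliar with the reported sway parameters, the sketch below shows one common way to compute the 95% confidence ellipse area and the COG path length from a center-of-pressure time series. The random data stand in for the platform output, and the vendor's exact algorithm may differ.

```python
import numpy as np

def sway_metrics(cop_xy: np.ndarray, chi2_95: float = 5.991):
    """cop_xy: (n, 2) array of COP/COG positions in cm.
    Returns the 95% confidence ellipse area (cm^2), the path length (cm),
    and the maximum range in the frontal (x) and sagittal (y) directions."""
    cov = np.cov(cop_xy, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)                   # principal variances
    ellipse_area = np.pi * chi2_95 * np.sqrt(np.prod(eigvals))
    path_length = np.sum(np.linalg.norm(np.diff(cop_xy, axis=0), axis=1))
    ranges = cop_xy.max(axis=0) - cop_xy.min(axis=0)
    return ellipse_area, path_length, ranges

rng = np.random.default_rng(0)                          # synthetic 30 s trial at 100 Hz
cop = np.cumsum(rng.normal(scale=0.02, size=(3000, 2)), axis=0)
area, path, (rng_ml, rng_ap) = sway_metrics(cop)
print(f"ellipse {area:.2f} cm^2, path {path:.1f} cm, ML {rng_ml:.2f} cm, AP {rng_ap:.2f} cm")
```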

Keywords: balance exercises, body balance, pilates, pressure distribution, women

Procedia PDF Downloads 318
7864 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow

Authors: Musa Akdere, Gunnar Seide, Thomas Gries

Abstract:

Carbon fibres are fibrous materials with a carbon atom content of more than 90%. They combine excellent mechanical properties with a very low density; thus carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During the production of carbon fibre, the precursor has to be stabilized thermally to withstand the high temperatures of up to 1500 °C which occur during carbonization. Even though carbon fibre has been used since the late 1970s in aerospace applications, there is still no general method available to find the optimal production parameters, and the trial-and-error approach is most often the only resort. To gain much better insight into the process, the chemical reactions during stabilization have to be analyzed in particular. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre undergoing all zones of stabilization. The fibre bundle is modeled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered to be the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modeled as a heat source. Differential scanning calorimetry measurements have been performed to estimate the heat of the reactions. To shorten the required simulation time, the number of fibres is decreased by similitude theory. Experiments were conducted to validate the simulation results of the fibre temperature during stabilization; these validation experiments were conducted on a pilot-scale stabilization oven. To measure the fibre bundle temperature, a new measuring method was developed. The comparison of the results shows that the developed simulation model gives good approximations of the temperature profile of the fibre bundle during the stabilization process.
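
As a minimal sketch of the two coupled mechanisms named above, the lumped (0D) balance below pits an Arrhenius-type exothermic heat source against convective exchange with the oven air. All material constants and the reaction parameters are illustrative assumptions, not the study's calibrated DSC values, and the real model resolves individual fibres and the air layer rather than a lumped bundle element.

```python
import numpy as np

# Assumed PAN-fibre-like properties (illustrative values only).
rho, cp = 1180.0, 1500.0               # density (kg/m^3), heat capacity (J/kg K)
h, r_eff = 50.0, 1.0e-3                # convection coeff. (W/m^2 K), effective bundle radius (m)
T_air = 230.0 + 273.15                 # oven air temperature (K)
A_r, E_a, dH = 1.0e9, 1.1e5, 2.5e6     # Arrhenius prefactor (1/s), activation energy (J/mol), reaction heat (J/kg)
R, area_per_vol = 8.314, 2.0 / r_eff   # gas constant, surface-to-volume ratio of a cylinder

dt, t_end = 0.05, 300.0
T, alpha, T_peak = 293.15, 0.0, 293.15  # temperature (K), extent of reaction (0..1)
for _ in range(int(t_end / dt)):
    rate = A_r * np.exp(-E_a / (R * T)) * (1.0 - alpha)
    d_alpha = min(rate * dt, 1.0 - alpha)                    # energy-consistent increment
    T += dH * d_alpha / cp                                   # exothermic self-heating
    T += -h * area_per_vol * (T - T_air) / (rho * cp) * dt   # convective exchange with air
    alpha += d_alpha
    T_peak = max(T_peak, T)

print(f"extent of reaction {alpha:.2f}, peak temperature {T_peak - 273.15:.0f} °C")
```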

Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface

Procedia PDF Downloads 273
7863 Assessment of Sperm Aneuploidy Using Advanced Sperm FISH Technique in Infertile Patients

Authors: Archana S., Usha Rani G., Anand Balakrishnan, Sanjana R., Solomon F., Vijayalakshmi J.

Abstract:

Background: There is evidence that male factors contribute to the infertility of up to 50% of couples who are evaluated and treated for infertility using advanced assisted reproductive technologies. Genetic abnormalities, including sperm chromosome aneuploidy as well as structural aberrations, are one of the major causes of male infertility. Recent advances in technology expedite the evaluation of sperm aneuploidy. The purpose of the study was to determine the prevalence of sperm aneuploidy in infertile males and the degree of association between DNA fragmentation and sperm aneuploidy. Methods: In this study, 75 infertile men were included, and they were divided into four abnormal groups (oligospermia, teratospermia, asthenospermia, and oligoasthenoteratospermia (OAT)). Men with children who were normozoospermic served as the control group. The fluorescence in situ hybridization (FISH) method was used to test for sperm aneuploidy, and the sperm chromatin dispersion assay (SCDA) was used to measure sperm DNA fragmentation. Spearman's correlation coefficient was used to evaluate the relationship between sperm aneuploidy and sperm DNA fragmentation along with age. P < 0.05 was regarded as significant. Results: The ages of the 75 participants varied from 28 to 48 years (35.5±5.1). The percentage of spermatozoa bearing X and Y was determined to be statistically significant (p-value < 0.05) and was found to be 48.92% and 51.18% for CEP X X 1 – nucish (CEP XX 1) [100] and CEP Y X 1 – nucish (CEP Y X 1) [100], respectively. When compared to the rate of DNA fragmentation, it was discovered that infertile males had a greater frequency of sperm aneuploidy. The asthenospermia and OAT groups were significantly correlated in sex chromosomal aneuploidy (p<0.05). Conclusion: Sperm FISH and SCDA assay results showed an increased sperm aneuploidy frequency and DNA fragmentation index in infertile men compared with fertile men. A significant relationship was observed between sperm aneuploidy and DNA fragmentation in OAT patients. When evaluating male-factor and idiopathic infertility, the sperm FISH screening method can be used as a valuable diagnostic tool.

Keywords: male infertility, DFI (DNA fragmentation index), SCD (sperm chromatin dispersion), ART (artificial reproductive technology), trisomy, aneuploidy, FISH (fluorescence in-situ hybridization), OAT (oligoasthenoteratospermia)

Procedia PDF Downloads 54
7862 The Investigation of Oil Price Shocks by Using a Dynamic Stochastic General Equilibrium: The Case of Iran

Authors: Bahram Fathi, Karim Alizadeh, Azam Mohammadbagheri

Abstract:

The aim of this paper is to investigate the role of oil price shocks in explaining business cycles in Iran using a dynamic stochastic general equilibrium (DSGE) approach. The model incorporates both productivity and oil revenue shocks. The results indicate that productivity shocks are relatively more important to business cycles than oil shocks. The model with two shocks produces different values for volatility, but these values have the same ranking as that of the actual data for most variables. In addition, the ratios of standard deviations to that of output obtained from the model with two shocks are close to those of the actual data. The model with only a productivity shock produces the figures most similar in volatility magnitude to the actual data. Next, we use impulse response functions (IRF) to evaluate the capability of the model. The IRF shows no effect of an oil shock on the capital stock or on labor hours, which is a feature of the model: when the log-linearized system of equations is solved numerically, investment and labor hours are not found to be functions of the oil shock. This research recommends using different techniques to assess the model's robustness. One method is to make all decision variables functions of the oil shock by inducing stationarity in the model differently; another is to impose a bond adjustment cost. This study intends to fill that gap. To achieve this objective, we first derive a DSGE model that allows for world oil price and productivity shocks. Second, we calibrate the model to the Iranian economy. Next, we compare the moments from the theoretical model, with both single and multiple shocks, with those obtained from the actual data to see the extent to which business cycles in Iran can be explained by the total oil revenue shock. Then, we use an impulse response function to evaluate the role of world oil price shocks. Finally, we present the implications of the findings and interpretations in accordance with economic theory.
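
For concreteness, the two exogenous shocks in such a model are typically specified as AR(1) processes, as written below; this is an illustrative specification of the shock structure, not a reproduction of the paper's exact calibration.

\[
\ln A_t = (1-\rho_A)\ln \bar{A} + \rho_A \ln A_{t-1} + \varepsilon_{A,t},\qquad
\ln O_t = (1-\rho_O)\ln \bar{O} + \rho_O \ln O_{t-1} + \varepsilon_{O,t},
\]

with \(\varepsilon_{A,t}\sim N(0,\sigma_A^2)\) and \(\varepsilon_{O,t}\sim N(0,\sigma_O^2)\), where \(A_t\) denotes total factor productivity and \(O_t\) denotes oil revenue.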

Keywords: oil price, shocks, dynamic stochastic general equilibrium, Iran

Procedia PDF Downloads 438
7861 Numerical Solution of a Portfolio Selection Semi-Infinite Problem

Authors: Alina Fedossova, Jose Jorge Sierra Molina

Abstract:

Semi-infinite programming (SIP) problems are part of non-classical optimization: the number of variables is finite, while the number of constraints is infinite. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors; the risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, we could invest M euros in N shares for a specified period. Let yi > 0 be the return at the end of the period on each unit of money invested in stock i (i = 1, ..., N). The goal is to determine the amount xi to be invested in stock i, i = 1, ..., N, so as to maximize the end-of-period value yᵀx, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio means the best portfolio in terms of the risk-return trade-off, i.e., the portfolio that meets the investor's goals and risk appetite. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve a semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique in all iterations for the search of the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution, and others, is based on the optimality criteria of quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
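
In compact form, the robust version of this portfolio problem can be stated as the semi-infinite program below; this is an illustrative formulation consistent with the description above, not necessarily the authors' exact statement.

\[
\max_{x \ge 0,\; t \in \mathbb{R}} \; t
\quad \text{subject to} \quad
y^{\top}x \ge t \;\; \text{for all } y \in Y,
\qquad \sum_{i=1}^{N} x_i = M,
\]

where the set \(Y\) of admissible return vectors is infinite, so a finite number of decision variables \((x, t)\) is subject to infinitely many constraints.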

Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution

Procedia PDF Downloads 309
7860 Modeling the International Economic Relations Development: The Prospects for Regional and Global Economic Integration

Authors: M. G. Shilina

Abstract:

The phenomenon of interstate economic interaction is complex. 'Economic integration', as one of its types, can be explored through the prism of international law, theories of the world economy, politics, and international relations. The most objective study of the phenomenon requires a comprehensive multifactorial approach. In the new geopolitical realities, the problems of coexistence and possible interconnection of various mechanisms of interstate economic interaction are actively discussed. Currently, the states of the Eurasian continent support the movement toward economic integration. At the same time, the existing fragmentation of international economic law in Eurasia is seen as an important problem. The Eurasian space is characterized by various types of interstate relations: international agreements (multilateral and bilateral) and a large number of cooperation formats (from discussion platforms to organizations aimed at deep integration). For their harmonization, it is necessary to have a clear vision of the options for the phased regulation of international economic relations. Under conditions of rapid development of international economic relations, modeling (including prognostic modeling) can be optimally used as the main scientific method for representing the phenomenon. On the basis of this method, it is possible to form a vision of the current situation and the best options for further action. In order to determine the most objective version of integration development, a combination of several approaches was used. The normative legal approach (the descriptive method of legal modeling) was taken as the basis for the analysis. The set of legal methods was supplemented by the prognostic methods of international relations science. The key elements of the model are the international economic organizations and associations of states existing in the Eurasian space (the Eurasian Economic Union (EAEU), the European Union (EU), the Shanghai Cooperation Organization (SCO), the Chinese project 'One Belt, One Road' (OBOR), the Commonwealth of Independent States (CIS), BRICS, etc.). A general term for the elements of the model is proposed: the interstate interaction mechanisms (IIM). The aim of building a model of current and future Eurasian economic integration is to show optimal options for the joint economic development of the states and IIMs. The long-term goal of this development is a new economic and political space, the so-called 'Great Eurasian Community'. The process of achieving this long-term goal consists of successive steps. Modeling the integration architecture and dividing the interaction into stages led us to the following conclusion: the SCO is able to transform Eurasia into a single economic space. Gradual implementation of the complex phased model, in which the SCO+ plays a key role, will allow effective economic integration to be built for all its participants and an economically strong community to be created. The model can have practical value for politicians, lawyers, economists, and other participants involved in the economic integration process. A clear, systematic structure can serve as a basis for further governmental action.

Keywords: economic integration, The Eurasian Economic Union, The European Union, The Shanghai Cooperation Organization, The Silk Road Economic Belt

Procedia PDF Downloads 150
7859 Determination of Crustal Structure and Moho Depth within the Jammu and Kashmir Region, Northwest Himalaya through Receiver Function

Authors: Shiv Jyoti Pandey, Shveta Puri, G. M. Bhat, Neha Raina

Abstract:

The Jammu and Kashmir (J&K) region of the Northwest Himalaya, which falls within Seismic Zones IV and V, has a long history of earthquake activity. To determine the crustal structure beneath this region, we utilized the teleseismic receiver function method. This paper presents the results of analyses of teleseismic earthquake waves recorded by 10 seismic observatories installed in the vicinity of major thrusts and faults. Teleseismic waves at epicentral distances between 30° and 90°, with moment magnitudes greater than or equal to 5.5, which contain a large amount of information about the crust and upper mantle structure directly beneath a receiver, have been used. The receiver function (RF) technique has been widely applied to investigate crustal structures using P-to-S converted (Ps) phases from velocity discontinuities. The arrival times of the Ps, PpPs and PpSs+PsPs converted and reverberated phases from the Moho can be combined to constrain the mean crustal thickness and Vp/Vs ratio. Over 500 receiver functions from 10 broadband stations located in the Jammu & Kashmir region of the Northwest Himalaya were analyzed. With the help of the H-K stacking method, we determined the crustal thickness (H) and average crustal Vp/Vs ratio (K) in this region. We also used the neighbourhood algorithm technique to verify our results. The receiver function results for these stations show that the crustal thickness under Jammu & Kashmir ranges from 45.0 to 53.6 km, with an average value of 50.01 km. The Vp/Vs ratio varies from 1.63 to 1.99, with an average value of 1.784, which corresponds to an average Poisson's ratio of 0.266 with a range from 0.198 to 0.331. High Poisson's ratios under some stations may be related to partial melting in the crust near the uppermost mantle. The crustal structure model developed from this study can be used to refine the velocity model used for precise epicenter location in the region, thereby improving understanding of the current seismicity in the region.
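
The sketch below illustrates the H-K grid search underlying the stacking step described above. The phase weights, average crustal Vp, and the synthetic receiver function are placeholders chosen for illustration; the actual processing of the broadband data will differ in detail.

```python
import numpy as np

def hk_stack(rf, dt, p, vp=6.2, weights=(0.6, 0.3, 0.1)):
    """Grid search over crustal thickness H (km) and Vp/Vs ratio K using the
    predicted Ps, PpPs and PpSs+PsPs arrival times for ray parameter p (s/km)."""
    w1, w2, w3 = weights
    qp = np.sqrt(1.0 / vp**2 - p**2)
    amp = lambda t: rf[min(int(round(t / dt)), len(rf) - 1)]
    best = (-np.inf, None, None)
    for K in np.arange(1.60, 2.00, 0.005):
        qs = np.sqrt((K / vp)**2 - p**2)
        for H in np.arange(30.0, 70.0, 0.1):
            s = (w1 * amp(H * (qs - qp))          # Ps
                 + w2 * amp(H * (qs + qp))        # PpPs
                 - w3 * amp(2.0 * H * qs))        # PpSs + PsPs
            if s > best[0]:
                best = (s, H, K)
    return best[1], best[2]

# Synthetic receiver function with spikes placed for H = 50 km, K = 1.78.
dt, p, vp = 0.1, 0.06, 6.2
qp, qs = np.sqrt(1 / vp**2 - p**2), np.sqrt((1.78 / vp)**2 - p**2)
rf = np.zeros(1200)
for t, a in [(50 * (qs - qp), 1.0), (50 * (qs + qp), 0.5), (100 * qs, -0.3)]:
    rf[int(round(t / dt))] += a
print(hk_stack(rf, dt, p))   # should recover roughly H = 50 km, K = 1.78
```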

Keywords: H-K stacking, Poisson’s ratios, receiver function, teleseismic

Procedia PDF Downloads 248
7858 The Influence of the Normative Gender Binary in Diversity Management: A Multi-Method Study on Gender Diversity of Diversity Management

Authors: Robin C. Ladwig

Abstract:

Diversity Management, as a substantial element of Human Resource Management, aims to secure the economic benefit that presumably comes with a diverse workforce. Consequently, diversity managers focus on the protection of employees and on securing equality measures to assure organisational gender diversity. Gender diversity, as one aspect of Diversity Management, seems to adhere to gender binarism and cis-normativity. Workplaces are gendered spaces which echo the binary gender-normativity presented in Diversity Management, sold under the label of gender diversity. While the expectation of Diversity Management implies the inclusion of a multiplicity of marginalised groups, such as trans and gender diverse people, in current literature and practice the reality is curated by gender binarism and cis-normativity. The qualitative multi-method research showed a lack of knowledge about trans and gender diverse matters within the professions of Diversity Management and Human Resources. The semi-structured interviews with trans and gender diverse individuals from various backgrounds and occupations in Australia exposed missing consideration of trans and gender diverse experiences in the inclusivity and gender equity of various workplaces. Even if practitioners consider trans and gender diverse matters under gender diversity, the practical execution is limited to gender binary structures and cis-normative actions, as the photo-elicitation questionnaire with diversity managers, human resource officers, and personnel management demonstrates. Diversity Management should approach a broader source of informed practice by extending its business focus to the knowledge of humanities studies. Humanities studies could include diversity, queer, or gender studies to increase the inclusivity of marginalised groups such as trans and gender diverse employees and people. Furthermore, the definition of gender diversity should be extended beyond the gender binary and cis-normative experience. People may lose trust in Diversity Management as a supportive ally of marginalised employees if the understanding of inclusivity is limited to a gender binary and cis-normative value system that misrepresents the richness of gender diversity.

Keywords: cis-normativity, diversity management, gender binarism, trans and gender diversity

Procedia PDF Downloads 202
7857 Fault Prognostic and Prediction Based on the Importance Degree of Test Point

Authors: Junfeng Yan, Wenkui Hou

Abstract:

Prognostics and Health Management (PHM) is a technology used to monitor equipment status and predict impending faults. It is used to predict potential faults, provide fault information, and track trends of system degradation by capturing characteristic signals, so how to detect characteristic signals is very important. The selection of test points plays a very important role in detecting characteristic signals. Traditionally, a dependency model is used to select the test points containing the most detection information. However, for large, complicated systems, the dependency model is sometimes not easy to build, and the greater trouble is how to calculate the matrix. Based on this premise, the paper provides a highly effective method to select test points without a dependency model. The signal flow model is a diagnosis model based on failure modes, which focuses on the system's failure modes and the dependency relationship between the test points and faults. In the signal flow model, fault information can flow from the beginning to the end. According to the signal flow model, we can find the location and structure information of every test point and module. We break the signal flow model up into serial and parallel parts to obtain the final relationship function between the system's testability or prediction metrics and the test points. Further, through partial derivative operations, we can obtain every test point's importance degree in determining the testability metrics, such as the undetected rate, false alarm rate, and untrusted rate. This contributes to installing the test points according to the real requirements and also provides a solid foundation for Prognostics and Health Management. The practical engineering application shows that the method is very efficient.
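
As an illustration of the last step, the importance degree of test point i with respect to a testability metric can be expressed as a partial derivative of that metric with respect to the variable describing the test point. The notation below is chosen for exposition only and may differ from the paper's own symbols.

\[
I_i^{(U)} = \frac{\partial\, r_U}{\partial\, x_i},\qquad
I_i^{(F)} = \frac{\partial\, r_F}{\partial\, x_i},\qquad
I_i^{(T)} = \frac{\partial\, r_T}{\partial\, x_i},
\]

where \(r_U\), \(r_F\), and \(r_T\) denote the undetected rate, false alarm rate, and untrusted rate obtained from the serial-parallel decomposition of the signal flow model, and \(x_i\) characterizes test point \(i\).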

Keywords: false alarm rate, importance degree, signal flow model, undetected rate, untrusted rate

Procedia PDF Downloads 377
7856 Linguistic Analysis of Holy Scriptures: A Comparative Study of Islamic Jurisprudence and the Western Hermeneutical Tradition

Authors: Sana Ammad

Abstract:

The traditions of linguistic analysis in Islam and Christianity have developed independently of each other, in line with the social developments specific to their historical contexts. However, an increasing number of Muslim academics educated in the West have recently tried to apply the Western tradition of linguistic interpretation to the Qur'anic text while completely disregarding the Islamic linguistic tradition used and developed by traditional scholars over the centuries. The aim of the paper is to outline the linguistic tools and methods used by traditional Islamic scholars for the purpose of interpreting the Holy Qur'an and to shed light on how they contribute towards a better understanding of the text compared to their Western counterparts. This paper carries out a descriptive-comparative study of the linguistic tools developed and perfected by traditional scholars in Islam for the purpose of textual analysis of the Qur'an, as they have been described in the authentic works of Usul Al Fiqh (jurisprudence), and the principles of textual analysis employed by the Western hermeneutical tradition for the study of the Bible. First, it briefly outlines the independent historical development of the two traditions, emphasizing the final normative shape that they have taken. Then it draws a comparison of the two traditions, highlighting the similarities and the differences existing between them. In the end, the paper demonstrates the level of academic excellence achieved by the traditional linguistic scholars in their efforts to develop appropriate tools of textual interpretation and how these tools are more suitable for interpreting the Qur'an compared to the Western principles. Since the aim of interpreters in both traditions is to attain an objective understanding of the Scriptures, the emphasis of the paper is to highlight how well the Islamic method of linguistic interpretation contributes to an objective understanding of the Qur'anic text. The paper concludes with the following findings: the Western hermeneutical tradition of linguistic analysis developed within the Western historical context, whereas the Islamic method of linguistic analysis is much more highly developed and complex and better serves the purpose of an objective understanding of the holy text.

Keywords: Islamic jurisprudence, linguistic analysis, textual interpretation, western hermeneutics

Procedia PDF Downloads 330
7855 Effect of Print Orientation on the Mechanical Properties of Multi Jet Fusion Additively Manufactured Polyamide-12

Authors: Tyler Palma, Praveen Damasus, Michael Munther, Mehrdad Mohsenizadeh, Keivan Davami

Abstract:

The advancement of additive manufacturing, in both research and commercial realms, is highly dependent upon continuing innovation and creativity in materials and designs. Additive manufacturing shows great promise towards revolutionizing various industries, due largely to the fact that design data can be used to create complex products and components, on demand and from raw materials, for the end user at the point of use. However, it is critical that the material properties of additively made parts for engineering purposes be fully understood. As Multi Jet Fusion (MJF) is a relatively new additive manufacturing method, the response of the properties of MJF-produced parts to different printing parameters has not been well studied. In this work, testing of the mechanical and tribological properties of MJF-printed Polyamide 12 parts was performed to determine whether printing orientation in this method results in significantly different part performance. Material properties were studied at the macro- and nanoscales. Tensile tests, in combination with tribology tests including steady-state wear, were performed. Results showed a significant difference in the resultant part characteristics based on whether the parts were printed in a vertical or horizontal orientation. The tensile performance of vertically and horizontally printed samples varied, both in ultimate strength and in strain. Tribology tests showed that printing orientation has notable effects on the resulting mechanical and wear properties of the tested surfaces, due largely to layer orientation and the presence of unfused powder grain inclusions. This research advances the understanding of how print orientation affects the mechanical properties of additively manufactured structures, and also how print orientation can be exploited in future engineering design.

Keywords: additive manufacturing, indentation, nano mechanical characterization, print orientation

Procedia PDF Downloads 137
7854 Synthesis of 5-Substituted 1H-Tetrazoles in Deep Eutectic Solvent

Authors: Swapnil A. Padvi, Dipak S. Dalal

Abstract:

The chemistry of tetrazoles has grown tremendously in the past few years because tetrazoles are an important and useful class of heterocyclic compounds with widespread applications in medicinal chemistry, such as anticancer, antimicrobial, analgesic, antibacterial, antifungal, antihypertensive, and anti-allergic drugs. Furthermore, tetrazoles have applications in materials science as explosives, rocket propellants, and in information recording systems. In addition, they have a wide range of applications in coordination chemistry as ligands. Deep eutectic solvents (DES) have emerged over the current decade as a novel class of green reaction media and have been applied in various fields of science because of their unique physical and chemical properties, similar to those of ionic liquids, such as low vapor pressure, non-volatility, high thermal stability, and recyclability. In addition, the components of DESs are cheaply available, of low toxicity, and biodegradable, which makes them particularly attractive for effective large-scale applications in industrial production. Herein we report that the [2+3] cycloaddition reaction of organic nitriles with sodium azide affords the corresponding 5-substituted 1H-tetrazoles in six different types of choline chloride-based deep eutectic solvents under mild reaction conditions. Choline chloride:ZnCl2 (1:2) showed the best results for the synthesis of 5-substituted 1H-tetrazoles. This method avoids disadvantages such as the use of toxic metals and expensive reagents, drastic reaction conditions, and the presence of dangerous hydrazoic acid. The environment-friendly approach, short reaction times, good to excellent yields, safe process, and simple workup make this method an attractive and useful contribution to present green organic synthesis of 5-substituted 1H-tetrazoles. All synthesized compounds were characterized by IR, 1H NMR, 13C NMR, and mass spectroscopy. The DES can be recovered and reused three times with very little loss in activity.

Keywords: click chemistry, choline chloride, green chemistry, deep eutectic solvent, tetrazoles

Procedia PDF Downloads 231
7853 Proof of Concept of Video Laryngoscopy Intubation: Potential Utility in the Pre-Hospital Environment by Emergency Medical Technicians

Authors: A. Al Hajeri, M. E. Minton, B. Haskins, F. H. Cummins

Abstract:

Pre-hospital endotracheal intubation is fraught with difficulties; one solution offered has been video laryngoscopy (VL), which permits better visualization of the glottis than the standard method of direct laryngoscopy (DL). This method has resulted in a higher first-attempt success rate and fewer failed intubations. However, VL has mainly been evaluated by experienced providers (experienced anesthetists), and as such the utility of this device for those who infrequently intubate has not been thoroughly assessed. We sought to evaluate this equipment to determine whether, in the hands of novice providers, it could prove an effective airway management adjunct. DL and two VL methods (C-Mac with distal screen and C-Mac with attached screen) were evaluated by simulating practice on a Laerdal airway management trainer manikin. Twenty Emergency Medical Technicians (basics) were recruited as novice practitioners. This group was used to eliminate bias, as these clinicians had no pre-hospital experience of intubation (although they did have basic airway skills). The following areas were assessed: time taken to intubate, number of attempts required to successfully intubate, and ease of use of the equipment. VL (attached screen) took novice clinicians longer on average to intubate successfully, had a lower success rate, and received a higher rating of difficulty compared to DL. However, VL (with distal screen) and DL were comparable in intubation times, success rate, gastric inflation rate, and rating of difficulty by the user. This study highlights that the routine use of VL by inexperienced clinicians would be of no added benefit over DL. Further studies are required to determine whether Emergency Medical Technicians (Paramedics) would benefit from this airway adjunct, and to ascertain whether, after initial mastery of VL (with a distal screen), lower intubation times and difficulty ratings may be achievable.

Keywords: direct laryngoscopy, endotracheal intubation, pre-hospital, video laryngoscopy

Procedia PDF Downloads 410
7852 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and have an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal component analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key for correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses. In such a context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process. As a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when inputted into the KNN classification algorithm. Kassidas, MacGregor and Taylor's method (named KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
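
The sketch below shows the dynamic programming core shared by the DTW variants compared in the study; the KMT and other variants add band constraints, variable weighting, and iterative reference updates that are not reproduced here, and the synthetic batches merely stand in for the conching data.

```python
import numpy as np

def dtw_align(query: np.ndarray, reference: np.ndarray):
    """Classic DTW between two (possibly different-length) multivariate batch
    trajectories of shape (time, n_variables). Returns the alignment cost and
    the warping path as (query_index, reference_index) pairs."""
    n, m = len(query), len(reference)
    dist = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[n, m], path[::-1]

# Two synthetic "conching" batches of unequal duration (4 monitored variables each).
rng = np.random.default_rng(1)
ref = rng.normal(size=(100, 4)).cumsum(axis=0)
new = ref[::2] + rng.normal(scale=0.1, size=(50, 4))   # shorter, noisier batch
cost, path = dtw_align(new, ref)
print(f"alignment cost {cost:.1f}, path length {len(path)}")
```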

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 167
7851 Development of a Robust Protein Classifier to Predict EMT Status of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC) Tumors

Authors: Zhenlin Ju, Christopher P. Vellano, Rehan Akbani, Yiling Lu, Gordon B. Mills

Abstract:

The epithelial–mesenchymal transition (EMT) is a process by which epithelial cells acquire mesenchymal characteristics, such as profound disruption of cell-cell junctions, loss of apical-basolateral polarity, and extensive reorganization of the actin cytoskeleton to induce cell motility and invasion. A hallmark of EMT is its capacity to promote metastasis, which is due in part to activation of several transcription factors and subsequent downregulation of E-cadherin. Unfortunately, current approaches have yet to uncover robust protein marker sets that can classify tumors as possessing strong EMT signatures. In this study, we utilize reverse phase protein array (RPPA) data and consensus clustering methods to successfully classify a subset of cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) tumors into an EMT protein signaling group (EMT group). The overall survival (OS) of patients in the EMT group is significantly worse than that of patients in the other Hormone and PI3K/AKT signaling groups. In addition to a shrinkage and selection method for linear regression (LASSO), we applied training/test set and Monte Carlo resampling approaches to identify a set of protein markers that predicts the EMT status of CESC tumors. We fit a logistic model to these protein markers and developed a classifier, which was fixed in the training set and validated in the testing set. The classifier robustly predicted the EMT status of the testing set with an area under the curve (AUC) of 0.975 by receiver operating characteristic (ROC) analysis. This method not only identifies a core set of proteins underlying an EMT signature in cervical cancer patients, but also provides a tool to examine protein predictors that drive molecular subtypes in other diseases.
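
The sketch below illustrates the general classifier-building workflow described above: an L1-penalized (LASSO-type) logistic regression selects protein markers and is validated by ROC analysis on a held-out set. The data are simulated stand-ins for the TCGA CESC RPPA matrix, and the penalty strength is an arbitrary assumption rather than a tuned value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Simulated stand-in for an RPPA matrix: 200 tumors x 150 proteins,
# with 10 proteins truly associated with EMT status.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 150))
beta = np.zeros(150)
beta[:10] = 1.5
y = (X @ beta + rng.normal(size=200) > 0).astype(int)      # 1 = EMT group

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# The L1 penalty performs the shrinkage/selection step; C controls its strength.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
selected = np.flatnonzero(clf.coef_)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"{selected.size} proteins selected, test AUC = {auc:.3f}")
```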

Keywords: consensus clustering, TCGA CESC, Silhouette, Monte Carlo LASSO

Procedia PDF Downloads 468
7850 Shape Management Method of Large Structure Based on Octree Space Partitioning

Authors: Gichun Cha, Changgil Lee, Seunghee Park

Abstract:

The objective of the study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is lacking because it is a newly attempted technology. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, or maintenance of civil infrastructure. TLS produces a huge amount of point cloud data, and registration, extraction, and visualization of the data require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because the scan data are reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University. The scanned structure is a steel-frame bridge. The TLS used was a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed with a resolution of 2 millimeters. This study presents octree space partitioning for handling the point clouds. The basis is thereby created for the shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291).
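
The sketch below illustrates the octree idea described above: each cubic cell is recursively split into eight sub-voxels until a resolution limit is reached, and the point cloud is condensed to one representative point per occupied leaf. Real TLS pipelines, including the 2 mm model in the study, add attribute handling and far more efficient storage than this plain recursive version.

```python
import numpy as np

def octree_downsample(points: np.ndarray, origin, size, min_size):
    """Recursively subdivide a cubic cell into 8 sub-voxels until the cell edge
    reaches min_size, then keep one representative (centroid) point per leaf."""
    if len(points) == 0:
        return []
    if size <= min_size or len(points) == 1:
        return [points.mean(axis=0)]                    # leaf: condense to centroid
    half, out = size / 2.0, []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                child_origin = origin + half * np.array([dx, dy, dz])
                mask = np.all((points >= child_origin) & (points < child_origin + half), axis=1)
                out += octree_downsample(points[mask], child_origin, half, min_size)
    return out

rng = np.random.default_rng(0)
cloud = rng.random((50_000, 3))                          # synthetic 1 m^3 scan
kept = octree_downsample(cloud, origin=np.zeros(3), size=1.0, min_size=0.05)
print(f"{len(cloud)} points condensed to {len(kept)} ({1 - len(kept)/len(cloud):.0%} reduction)")
```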

Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning

Procedia PDF Downloads 297
7849 Consistent Testing for an Implication of Supermodular Dominance with an Application to Verifying the Effect of Geographic Knowledge Spillover

Authors: Chung Danbi, Linton Oliver, Whang Yoon-Jae

Abstract:

Supermodularity, or complementarity, is a popular concept in economics that can characterize many objective functions such as utility, social welfare, and production functions. Further, supermodular dominance captures a preference for greater interdependence among inputs of those functions, and it can be applied to examine which input set would produce higher expected utility, social welfare, or production. Therefore, we propose and justify a consistent test for a useful implication of supermodular dominance. We also conduct Monte Carlo simulations to explore the finite-sample performance of our test, with critical values obtained from the recentered bootstrap method, with and without selective recentering, and from the subsampling method. Under various parameter settings, we confirmed that our test has reasonably good size and power performance. Finally, we apply our test to compare geographic and distant knowledge spillovers in terms of their effects on social welfare, using the National Bureau of Economic Research (NBER) patent data. We expect localized citing to supermodularly dominate distant citing if the geographic knowledge spillover engenders greater social welfare than the distant knowledge spillover. Taking subgroups based on firm and patent characteristics, we found that there are industry-wise and patent subclass-wise differences in the pattern of supermodular dominance between localized and distant citing. We also compare the results from analyzing different time periods to see if the development of Internet and communication technology has changed the pattern of the dominance. In addition, to appropriately deal with the sparse nature of the data, we apply high-dimensional methods to efficiently select relevant data.
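
For readers unfamiliar with the terms, the standard textbook definitions are stated below for reference; the paper's own test concerns an implication of the second property.

\[
f(x \vee y) + f(x \wedge y) \;\ge\; f(x) + f(y)
\quad \text{for all } x, y,
\]

where \(x \vee y\) and \(x \wedge y\) denote the componentwise maximum and minimum; a function satisfying this is supermodular. A random vector \(X\) supermodularly dominates \(Y\) if \(\mathbb{E}[f(X)] \ge \mathbb{E}[f(Y)]\) for every supermodular \(f\) for which the expectations exist.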

Keywords: supermodularity, supermodular dominance, stochastic dominance, Monte Carlo simulation, bootstrap, subsampling

Procedia PDF Downloads 129
7848 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with Elliptical Hole

Authors: Basavaraj R. Endigeri, S. G. Sarganachari

Abstract:

Steel is widely used in machine parts, structural equipment, and many other applications. In many steel structural elements, holes of different shapes and orientations are made with a view to satisfying the design requirements. The presence of holes in steel elements creates stress concentrations, which eventually reduce the mechanical strength of the structure. Therefore, it is of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. From the literature survey, it is known that to date there is no analytical solution for reducing the stress concentration by providing auxiliary holes at definite locations and radii in a steel plate. A numerical method can be used to determine the optimum locations and radii of the auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analyzed, and the effect of stress concentration is represented graphically. The introduction of auxiliary holes at optimum locations and radii, and its effect on the stress concentration, is also represented graphically. The finite element analysis package ANSYS 11.0 is used to analyse the steel plate, and the analysis is carried out using a PLANE42 element. Further, the ANSYS optimization model is used to determine the locations and radii of the auxiliary holes that give the optimum reduction of the stress concentration. All the results for different central hole diameter to plate width ratios are presented graphically. The results of this study are in the form of graphs for determining the locations and diameters of optimal auxiliary holes, including the graph of the stress concentration factor versus the central hole diameter to plate width ratio. The finite element results of the study indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of the central hole.
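
For context, the classical Inglis result for an elliptical hole in an infinite plate under remote uniaxial tension gives the unmitigated stress concentration factor that the auxiliary holes are intended to reduce; the finite plate width in the study modifies this baseline value.

\[
K_t = \frac{\sigma_{\max}}{\sigma} = 1 + \frac{2a}{b},
\]

where \(2a\) and \(2b\) are the hole axes perpendicular and parallel to the applied load; for a circular hole (\(a = b\)) this reduces to \(K_t = 3\).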

Keywords: finite element method, optimization, stress concentration factor, auxiliary holes

Procedia PDF Downloads 453
7847 Finite Element-Based Stability Analysis of Roadside Settlement Slopes from Barpak to Yamagaun through Laprak Village of Gorkha, an Epicentral Location after the 7.8 Mw 2015 Barpak, Gorkha, Nepal Earthquake

Authors: N. P. Bhandary, R. C. Tiwari, R. Yatabe

Abstract:

The research employs the finite element method to evaluate the stability of roadside settlement slopes from Barpak to Yamagaun through Laprak village of Gorkha, Nepal, after the 7.8 Mw 2015 Barpak, Gorkha, Nepal earthquake. It covers three major villages of Gorkha, i.e., Barpak, Laprak and Yamagaun, that were devastated by the 2015 Gorkha earthquake. The road-head distances from Barpak to Laprak and from Laprak to Yamagaun are about 14 km and 29 km, respectively. The epicenters of the main shock of magnitude 7.8 and the aftershock of magnitude 6.6 were about 7 km and 11 km (south-east) from Barpak village, respectively, closer to Laprak and Yamagaun. It is also believed that the epicenter of the main shock was not in Barpak village, as reported until now, but somewhere nearer to Yamagaun village; the destruction experienced during the earthquake in Yamagaun was much greater than in Barpak. In this context, we have carried out a detailed study of the stability of the Yamagaun settlement slope as a case study, where ground fissures, ground settlement, multiple cracks and toe failures are most severe. The stability issues of the existing settlements and of the proposed road alignment on the Yamagaun village slope, which is surrounded by many newly activated landslides, are addressed. Given the importance of this issue, a field survey was carried out to understand the behaviour of the ground fissures and the multiple failure characteristics of the slopes. The results suggest that the Yamagaun slope along Profiles 2-2, 3-3 and 4-4 is not safe enough for infrastructure development even under normal soil-slope conditions for material models 2, 3 and 4, whereas the slope appears quite safe at Profile 1-1 for all four material models. The results also indicate that the first three profiles are marginally safe for material models 2, 3 and 4, respectively, while Profile 4-4 is not safe enough for any of the four material models. Thus, Profile 4-4 needs special care to make the slope stable.
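
As a back-of-the-envelope complement to the finite element analysis (not the authors' model), the sketch below evaluates the standard infinite-slope, limit-equilibrium factor of safety; all soil parameters and slope geometry are assumed example values, not data from the Barpak-Laprak-Yamagaun site.

```python
import math

def infinite_slope_fs(c_eff, phi_eff_deg, gamma, depth, beta_deg, pore_pressure=0.0):
    """FS = (c' + (gamma*z*cos^2(beta) - u)*tan(phi')) / (gamma*z*sin(beta)*cos(beta))"""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2 - pore_pressure   # kPa
    resisting = c_eff + normal_stress * math.tan(phi)                     # shear strength
    driving = gamma * depth * math.sin(beta) * math.cos(beta)             # shear stress
    return resisting / driving

# example: 5 m deep failure surface on a 30 degree slope, c' = 10 kPa, phi' = 28 deg,
# unit weight 19 kN/m^3 (all values hypothetical)
fs = infinite_slope_fs(c_eff=10.0, phi_eff_deg=28.0, gamma=19.0, depth=5.0, beta_deg=30.0)
print(f"static factor of safety ~ {fs:.2f}")   # FS < 1.0 would indicate instability
```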

Keywords: earthquake, finite element method, landslide, stability

Procedia PDF Downloads 348
7846 Quality Control of Distinct Cements by IR Spectroscopy: First Insights into Perspectives and Opportunities

Authors: Tobias Bader, Joerg Rickert

Abstract:

One key factor in achieving net-zero emissions along the cement and concrete value chain in Europe by 2050 is the use of distinct constituents to produce improved and advanced cements. These cements will contain, e.g., calcined clays and recycled concrete fines, which are chemically similar as well as X-ray amorphous and therefore difficult to distinguish. Because of the more complex cement composition, this places higher requirements on the accuracy and reproducibility of the analytical methods used for quality control. With the methods currently provided for in the European standards, ensuring reliable analyses of the composition of these cements will be a challenge. In an ongoing research project, infrared (IR) spectroscopy in combination with mathematical tools (chemometrics) is being evaluated as an additional, fast analytical method with low preparation effort for the characterization of silicate-based cement constituents. The resulting comprehensive database should facilitate determination of the composition of new cements. First results confirmed the applicability of near-infrared (NIR) spectroscopy for the characterization of traditional silicate-based cement constituents (e.g. clinker, granulated blast furnace slag) and modern X-ray amorphous constituents (e.g. calcined clay, recycled concrete fines), as well as of different sulfate species (e.g. gypsum, hemihydrate, anhydrite). A multivariate calibration model based on numerous calibration mixtures is in preparation. The final analytical concept to be developed will form the basis for establishing IR spectroscopy as a rapid analytical method for characterizing material flows of known and unknown inorganic substances according to their material properties, both online and offline. The underlying project was funded by the Federal Institute for Research on Building, Urban Affairs and Spatial Development on behalf of the Federal Ministry of Housing, Urban Development and Building with funds from the ‘Zukunft Bau’ research programme.
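
A minimal sketch of the kind of multivariate calibration described above is given below, using partial least squares (PLS) regression on synthetic NIR-like spectra; the pure-component spectra, mixture design and model settings are invented for illustration and are not data or results from the project.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_mixtures, n_wavenumbers = 60, 300

# synthetic "pure component" spectra (stand-ins for e.g. clinker, slag, calcined clay)
pure = np.abs(rng.normal(size=(3, n_wavenumbers)))
fractions = rng.dirichlet(np.ones(3), size=n_mixtures)            # mixture compositions
spectra = fractions @ pure + 0.01 * rng.normal(size=(n_mixtures, n_wavenumbers))

# calibrate: predict the mass fraction of one constituent from the spectrum
pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, spectra, fractions[:, 0], cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))
```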

Keywords: cement, infrared spectroscopy, quality control, X-ray amorphous

Procedia PDF Downloads 40
7845 Assessment of the Groundwater Agricultural Pollution Risk: Case of the Semi-Arid Region (Batna-East Algeria)

Authors: Dib Imane, Chettah Wahid, Khedidja Abdelhamid

Abstract:

The plain of Gadaïne - Ain Yaghout, located in the wilaya of Batna (Eastern Algeria), is subject to intensive human activities, particularly agricultural practices accompanied by increasing use of chemical fertilizers and manure. These activities lead to a degradation of the quality of water resources. In order to protect the quality of groundwater in this plain and to formulate effective strategies to mitigate or avoid any contamination of the groundwater, a risk assessment based on the European approach of COST Action 620 was applied to the Mio-Plio-Quaternary aquifer of the plain. The risk assessment requires the identification of existing hazards and of their potential impact on groundwater, using a system of rating and weighting. It also requires the integration of the hydrogeological factors that influence the movement of contaminants, by means of intrinsic groundwater vulnerability maps produced according to the modified DRASTIC method. The overall hazard level on the plain ranges from very low to high. Farms containing stables, houses not connected to the public sewer system, and in some cases manure piles were assigned a weighting factor expressing the highest degree of harmfulness, which created a medium to high hazard index. Large areas of agricultural land and grazing land are characterized by low and very low hazard, respectively. The risks present at the study site therefore range from medium to very high intensity; these classes represent 3%, 49%, and 0.2% of the surface of the plain, respectively. Cultivated land and farms present high and very high levels of risk, respectively. In addition, with the exception of the salt mine, which presents a very high level of risk, the gas stations, cemeteries and railway line represent a high level of risk.
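
For reference, the intrinsic vulnerability mapping mentioned above rests on an additive rating-times-weight index; the sketch below computes the standard DRASTIC index for one hypothetical grid cell (the study uses a modified DRASTIC, so its ratings and weights may differ).

```python
# Standard DRASTIC weights; ratings (1-10) below are hypothetical values for one cell.
STANDARD_WEIGHTS = {
    "D": 5,  # Depth to water
    "R": 4,  # net Recharge
    "A": 3,  # Aquifer media
    "S": 2,  # Soil media
    "T": 1,  # Topography (slope)
    "I": 5,  # Impact of the vadose zone
    "C": 3,  # hydraulic Conductivity
}

def drastic_index(ratings: dict, weights: dict = STANDARD_WEIGHTS) -> int:
    """DRASTIC index = sum over the seven parameters of rating * weight."""
    return sum(ratings[p] * weights[p] for p in weights)

cell_ratings = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 9, "I": 8, "C": 4}
print("DRASTIC index:", drastic_index(cell_ratings))   # higher = more vulnerable
```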

Keywords: semi-arid, quality of water resources, risk assessment, vulnerability, contaminants

Procedia PDF Downloads 49
7844 Monitoring the Railways by Means of C-OTDR Technology

Authors: Andrey V. Timofeev

Abstract:

This paper presents the development results of a method for seismoacoustic activity monitoring based on the vibration-sensitive properties of optical fibers. Analysis of changes in the parameters of the Rayleigh backscattered radiation, which occur due to microscopic seismoacoustic impacts on the optical fiber, makes it possible to determine the positions of seismoacoustic emission sources and to identify their types. This approach has been applied successfully to the comprehensive monitoring of railways.
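
As a purely conceptual illustration of the detection idea (not the actual C-OTDR signal processing chain), the sketch below flags a simulated disturbance by comparing the short-time energy of each fiber channel against a quiet baseline; all signals, channel counts and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_samples = 200, 2000            # channels ~ positions along the fiber

traces = rng.normal(0.0, 1.0, size=(n_channels, n_samples))               # background noise
traces[120, 1000:1200] += 4.0 * np.sin(np.linspace(0, 40 * np.pi, 200))   # impact near channel 120

baseline = traces[:, :500].var(axis=1)       # per-channel energy in a quiet period
recent = traces[:, 500:].var(axis=1)         # per-channel energy afterwards
ratio = recent / baseline

threshold = 1.5                              # assumed detection threshold
detected = np.flatnonzero(ratio > threshold)
print("channels flagged as active:", detected)   # expect a flag near channel 120
```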

Keywords: C-OTDR systems, monitoring of railways, Rayleigh backscattering, seismoacoustic activity

Procedia PDF Downloads 395
7843 Comparison of Cyclone Design Methods for Removal of Fine Particles from Plasma Generated Syngas

Authors: Mareli Hattingh, I. Jaco Van der Walt, Frans B. Waanders

Abstract:

A waste-to-energy plasma system was designed by Necsa for commercial use to generate electricity from unsorted municipal waste. Fly ash particles must be removed from the syngas stream at operating temperatures of 1000 °C and recycled back into the reactor for complete combustion. A 2D2D high-efficiency cyclone separator was chosen for this purpose. During this study, two cyclone design methods were explored: the Classic Empirical Method (smaller cyclone) and the Flow Characteristics Method (larger cyclone). These designs were optimized with regard to efficiency, so as to remove at least 90% of the fly ash particles of average size 10 μm by 50 μm. Wood was used as the feed source at a concentration of 20 g/m³ of syngas. The two designs were then compared at room temperature, using Perspex test units and three feed gases of different densities, namely nitrogen, helium and air. System conditions were imitated by adapting the gas feed velocity and particle load for each gas. Helium, the least dense of the three gases, simulated higher temperatures, whereas air, the densest gas, simulated lower temperatures. The average cyclone efficiencies ranged between 94.96% and 98.37%, reaching up to 99.89% in individual runs; the lowest efficiency attained was 94.00%. Furthermore, the design of the smaller cyclone proved to be more robust, while the larger cyclone demonstrated a stronger correlation between its separation efficiency and the feed temperature and can be assumed to achieve slightly higher efficiencies at elevated temperatures. Both design methods nevertheless led to good designs: at room temperature, the difference in efficiency between the two cyclones was almost negligible. At higher temperatures, however, these general tendencies are expected to be amplified, so that the difference between the two design methods will become more obvious. Although the design specifications were met by both designs, the smaller cyclone is recommended as the default particle separator for the plasma system due to its robust nature.
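
For context, a common first-pass way to size a cyclone and estimate its fractional efficiency is the classical Lapple correlation; the sketch below applies it with assumed example geometry and gas properties and is not a reproduction of either of the two design methods compared in the study.

```python
import math

def lapple_cut_diameter(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g):
    """Cut diameter d50 = sqrt(9*mu*W / (2*pi*Ne*Vi*(rho_p - rho_g))), in metres."""
    return math.sqrt(9.0 * mu * inlet_width /
                     (2.0 * math.pi * n_turns * v_inlet * (rho_p - rho_g)))

def lapple_efficiency(d_particle, d50):
    """Fractional collection efficiency for particles of diameter d_particle."""
    return 1.0 / (1.0 + (d50 / d_particle) ** 2)

# assumed example conditions (roughly ambient air, 15 m/s inlet velocity)
mu = 1.8e-5                   # gas viscosity, Pa*s
inlet_width = 0.05            # cyclone inlet width, m
n_turns = 6                   # effective number of gas turns
v_inlet = 15.0                # inlet velocity, m/s
rho_p, rho_g = 2000.0, 1.2    # particle and gas density, kg/m^3

d50 = lapple_cut_diameter(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g)
print(f"cut diameter ~ {d50 * 1e6:.2f} um")
print(f"efficiency for 10 um particles ~ {lapple_efficiency(10e-6, d50):.2%}")
```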

Keywords: cyclone, design, plasma, renewable energy, solid separation, waste processing

Procedia PDF Downloads 214
7842 Waste Derived from Refinery and Petrochemical Plants Activities: Processing of Oil Sludge through Thermal Desorption

Authors: Anna Bohers, Emília Hroncová, Juraj Ladomerský

Abstract:

Oil sludge, whose main characteristic is high acidity, is a waste product generated by the operation of refinery and petrochemical plants. A former refinery and petrochemical plant, Petrochema Dubová, is also present in Slovakia. Its activity was to process crude oil through sulfonation and adsorption technology for the production of lubricating and special oils, synthetic detergents and special white oils for cosmetic and medical purposes. Seventy years ago, when this historical acid-sludge burden was created, production took precedence over environmental awareness. That is why, as in many countries, a historical environmental burden is still present in Slovakia: 229 211 m³ of oil sludge in the middle of the Nízke Tatry National Park. None of the treatment methods tried, biological or non-biological, proved suitable for processing or recovery, owing to several factors, i.e., strong aggressiveness and difficult handling because of the sludgy and liquid state of the material. Incineration was also tested as a potential solution, but it did not prove suitable, as the concentration of SO2 in the combustion gases was too high and could not be reduced below the acceptable value of 2000 mg·mn⁻³. For that reason, the operation of the incineration plant was terminated, and the acid-sludge landfills remain to this day. The objective of this paper is to present a new possibility for processing and valorizing this acid sludge waste. The oil sludge was processed by an effective separation technique, thermal desorption, which splits the sludgy material into a matrix (soil, sediments) and organic contaminants. To boost the efficiency of acid-sludge processing through thermal desorption, the work also presents the possible application of an original technology, the Method of Blowing Decomposition, for recovering organic matter as technological lubricating oil.

Keywords: hazardous waste, oil sludge, remediation, thermal desorption

Procedia PDF Downloads 200