Search results for: Frequency tuning range
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10094

8534 Effects of Particle Size Distribution on Mechanical Strength and Physical Properties in Engineered Quartz Stone

Authors: Esra Arici, Duygu Olmez, Murat Ozkan, Nurcan Topcu, Furkan Capraz, Gokhan Deniz, Arman Altinyay

Abstract:

Engineered quartz stone is a composite material comprising approximately 90 wt.% fine quartz aggregate, spanning a variety of particle size ranges, and 10 wt.% unsaturated polyester resin (UPR). The objective of this study is to investigate the influence of particle size distribution on the mechanical strength and physical properties of engineered stone slabs. For this purpose, granular quartz with two particle size ranges, 63-200 µm and 100-300 µm, was used individually and in mixes of varying ratios. The void volume of each granular packing was measured in order to define the amount of filler (quartz powder finer than 38 µm) and UPR required to fill the inter-particle spaces. Test slabs were prepared by vibration-compression under vacuum. The study reports that both the impact strength and the flexural strength of the samples increased as the proportion of the 63-200 µm fraction increased. On the other hand, water absorption rate, apparent density and abrasion resistance were not affected by the particle size distribution, owing to vacuum compaction. It was found that increasing the proportion of the 63-200 µm fraction produced higher packing porosity, which in turn increased the amount of binder paste needed. It was also observed that homogeneity in the slabs improved with the 63-200 µm particle size range.

Keywords: engineered quartz stone, fine quartz aggregate, granular packing, mechanical strength, particle size distribution, physical properties.

Procedia PDF Downloads 137
8533 Contribution of Home Gardens to Rural Household Income in Raymond Mhlaba Local Municipality, Eastern Cape Province, South Africa

Authors: K. Alaka, A. Obi

Abstract:

Home gardens have proved to be significant to rural inhabitants by providing a wide range of useful products such as fruits, vegetables and medicine. There is a need for quantitative information on their benefits and contributions to rural households. The main objective of this study is to investigate the contribution of home gardens to the income of rural households in Raymond Mhlaba Local Municipality (formerly Nkonkobe Local Municipality) of Eastern Cape Province, South Africa. A stratified random sampling method was applied to choose a sample of 160 households: 80 households engaging in home gardening and 80 non-participating households in the study area. Data analysis employed descriptive statistics, frequency tables and a one-sample t-test to quantify the actual contributions. The overall results show that social grants make the largest contribution to total household income for both categories, while income generated from home gardens is the second-largest share; this indicates that the majority of rural households in the study area rely on social grants as their main source of income. However, since most households are net food buyers, it is essential to have policies formulated with the understanding that household food security is not only a function of the food that farming households produce for their own consumption, but more so a function of total household income. The results provide sufficient evidence that home gardens contribute significantly to rural household income.
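
As a rough illustration of the one-sample t-test named above, the sketch below tests whether a hypothetical set of home-garden income values differs from zero; the sample size, income figures and null value are illustrative assumptions, not data from the study.

```python
# Minimal sketch of a one-sample t-test on hypothetical home-garden income data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical monthly income (local currency) from home gardens for 80 households.
garden_income = rng.normal(loc=450.0, scale=150.0, size=80).clip(min=0)

# H0: the mean home-garden income is zero (no contribution to household income).
t_stat, p_value = stats.ttest_1samp(garden_income, popmean=0.0)
print(f"mean = {garden_income.mean():.1f}, t = {t_stat:.2f}, p = {p_value:.4g}")
```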

Keywords: food security, home gardening, household, income

Procedia PDF Downloads 217
8532 PM Air Quality of Windsor Regional Scale Transport’s Impact and Climate Change

Authors: Moustafa Osman Mohammed

Abstract:

This paper maps an air quality model onto the engineering of industrial systems that are ultimately utilized across an extensive range of energy systems, distributed resources, and end-user technologies. The model determines the contribution of long-range transport patterns as an area source, which can either be traced with a 48-hour backward trajectory model or described remotely from background measurement data for those days. The trajectory model is run under stable conditions and fairly constant atmospheric pressure for most of the year. Air parcel trajectories are necessary for estimating the long-range transport of pollutants and other chemical species, and they provide a better understanding of airflow patterns. Since a large amount of meteorological data and a great number of calculations are required to derive a trajectory, it is very useful to apply the HYSPLIT model to locate the areas and boundaries that influence air quality at the regional scale around Windsor. Two-day backward trajectories are computed for measurement days with concentrations below and above the benchmark that delimits the areas influencing air quality. The benchmark level is taken as 30 µg/m3, the moderate level for the Ontario region. The air quality model thereby incorporates a midpoint concept between biotic and abiotic components to broaden the scope of impact quantification. The resulting considerations of environmental obligation ultimately suggest either recommendations or decisions on what legislation should achieve as mitigation measures for air emission impacts.

Keywords: air quality, management systems, environmental impact assessment, industrial ecology, climate change

Procedia PDF Downloads 238
8531 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump

Authors: Ravi Verma

Abstract:

Cryosorption pumps are the best solution for achieving clean, vibration-free ultra-high vacuum. Furthermore, the operation of a cryosorption pump is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel which is cooled with the help of a cryogen or cryocooler, (b) an adsorbent which is used to adsorb the gas molecules, and (c) an epoxy which holds the adsorbent and the panel together, thereby aiding heat transfer from the adsorbent to the panel. The performance of a cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we have made an attempt to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive is measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model to find the thermal conductivity of the composite, in which the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy. The developed model considers the complete spatial random distribution of filler particles, described by a binomial distribution. The results obtained by the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K. Due to the non-empirical nature of the proposed model, it will be useful for predicting other properties of composite materials involving a filler in a base matrix. The present studies will aid in the understanding of low-temperature heat transfer, which in turn will be useful for the development of high-performance cryosorption pumps.
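
The sketch below is not the authors' analytical model; it is only a minimal illustration, under assumed property values, of how a binomial spatial distribution of filler can enter an effective-conductivity estimate. Each cell of the matrix receives a filler count drawn from a binomial law, and the local fractions are combined through the elementary series and parallel (Wiener) mixing bounds.

```python
# Illustrative sketch only: binomially distributed filler + Wiener mixing bounds.
# Conductivity values and cell counts are assumptions, not data from the study.
import numpy as np

rng = np.random.default_rng(1)
k_epoxy, k_graphene = 0.2, 2000.0        # W/(m.K), assumed nominal values
n_cells, n_sites, p_fill = 10_000, 50, 0.04

# Number of filler-occupied sites per cell follows Binomial(n_sites, p_fill).
filler_sites = rng.binomial(n_sites, p_fill, size=n_cells)
phi = filler_sites / n_sites             # local filler volume fraction per cell

# Wiener (series/parallel) bounds on the effective conductivity of each cell.
k_parallel = phi * k_graphene + (1 - phi) * k_epoxy           # upper bound
k_series = 1.0 / (phi / k_graphene + (1 - phi) / k_epoxy)     # lower bound

print(f"mean filler fraction = {phi.mean():.3f}")
print(f"parallel-bound mean k = {k_parallel.mean():.2f} W/(m.K)")
print(f"series-bound mean k   = {k_series.mean():.3f} W/(m.K)")
```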

Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity

Procedia PDF Downloads 84
8530 Compact 3-D Co-Planar Waveguide Fed Dual-Port Ultrawideband-Multiple-Input and Multiple-Output Antenna with WLAN Band-Notched Characteristics

Authors: Asim Quddus

Abstract:

A miniaturized three-dimensional co-planar waveguide (CPW) two-port MIMO antenna, exhibiting high isolation and WLAN band-notched characteristics, is presented in this paper for ultrawideband (UWB) communication applications. The microstrip patch antenna operates as a single UWB antenna element. The proposed design is a cuboid-shaped structure with a compact size of 35 x 27 x 45 mm³. The radiating as well as the decoupling structures are placed around a cuboidal polystyrene sheet. The radiators are 27 mm apart, placed face-to-face in the vertical direction, and the decoupling structure is placed on the side walls of the polystyrene. The proposed antenna consists of an oval-shaped radiating patch, and a rectangular structure with fillet edges is placed on the ground plane to enhance the bandwidth. The proposed antenna exhibits a good impedance match (S11 ≤ -10 dB) over the frequency band of 2 GHz – 10.6 GHz. A circular slotted structure is employed as a decoupling structure on the substrate, placed on the side walls of the polystyrene to enhance the isolation between antenna elements. Moreover, to achieve immunity from WLAN band distortion, a modified, inverted crescent-shaped slotted structure is etched on the radiating patches to achieve band-rejection characteristics in the WLAN frequency band 4.8 GHz – 5.2 GHz. The suggested decoupling structure provides isolation better than 15 dB over the desired UWB spectrum. The envelope correlation coefficient (ECC) and gain of the MIMO antenna are analyzed as well. Finite Element Method (FEM) simulations are carried out in the Ansys High Frequency Structure Simulator (HFSS) for the proposed design. The antenna is realized on Rogers RT/duroid 5880 with thickness 1 mm and relative permittivity ɛr = 2.2. The proposed antenna achieves stable omni-directional radiation patterns as well, while providing rejection in the desired WLAN band. The S-parameters as well as MIMO parameters such as the ECC are analyzed, and the results show conclusively that the design is suitable for portable MIMO-UWB applications.
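
For reference, the envelope correlation coefficient mentioned above is commonly computed from the S-parameters of a two-port antenna under a lossless-antenna approximation. The sketch below evaluates that expression for placeholder S-parameter values; it is not the simulated data of this design.

```python
# Sketch: envelope correlation coefficient (ECC) of a two-port MIMO antenna from
# S-parameters (lossless-antenna approximation). S-parameter values are placeholders.
import numpy as np

def ecc_from_s(s11, s12, s21, s22):
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

# Placeholder complex S-parameters at a single frequency point.
s11 = 0.18 * np.exp(1j * np.deg2rad(40))
s22 = 0.20 * np.exp(1j * np.deg2rad(55))
s12 = s21 = 0.12 * np.exp(1j * np.deg2rad(-70))   # isolation of roughly -18 dB

print(f"ECC = {ecc_from_s(s11, s12, s21, s22):.4f}")
```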

Keywords: 3-D antenna, band-notch, MIMO, UWB

Procedia PDF Downloads 293
8529 Clustering Categorical Data Using the K-Means Algorithm and the Attribute’s Relative Frequency

Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami

Abstract:

Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm. In this context, two algorithms are commonly proposed: k-means for clustering numeric datasets and k-modes for categorical datasets. A frequently encountered problem in data mining applications is the clustering of categorical datasets, which are highly prevalent in practice. One way to achieve the clustering of categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of k-modes. In this paper, an approach based on this idea is investigated: the categorical values are transformed into numeric ones using the relative frequency of each modality in the attributes. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The obtained results show that our proposed method outperforms the binary method in all cases.
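
A minimal sketch of the encoding described above, assuming a small toy dataset: each categorical value is replaced by the relative frequency of its modality within its attribute, and standard k-means is then applied to the resulting numeric matrix. The column names and data values are illustrative only.

```python
# Sketch: relative-frequency encoding of categorical attributes, then k-means.
import pandas as pd
from sklearn.cluster import KMeans

# Toy categorical dataset (illustrative values).
df = pd.DataFrame({
    "color": ["red", "blue", "red", "green", "blue", "red"],
    "shape": ["circle", "circle", "square", "square", "circle", "square"],
})

# Replace each category by the relative frequency of its modality in the attribute.
encoded = df.apply(lambda col: col.map(col.value_counts(normalize=True)))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(encoded)
print(encoded.assign(cluster=labels))
```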

Keywords: clustering, unsupervised learning, pattern recognition, categorical datasets, knowledge discovery, k-means

Procedia PDF Downloads 251
8528 Ambient Vibration Testing of Existing Buildings in Madinah

Authors: Tarek M. Alguhane, Ayman H. Khalil, M. N. Fayed, Ayman M. Ismail

Abstract:

The elastic period has a primary role in the seismic assessment of buildings. Reliable calculations and/or estimates of the fundamental frequency of a building and its site are essential during the analysis and design process. Various code formulas based on empirical data are generally used to estimate the fundamental frequency of a structure. For existing structures, in addition to code formulas and available analytical tools such as modal analysis, various testing methods, including ambient and forced vibration testing procedures, may be used to determine the dynamic characteristics. In this study, the dynamic properties of 32 buildings located in Madinah, Saudi Arabia, were identified using ambient motions recorded at several spatially distributed locations within each building. The ambient vibration measurements have been analyzed, and the fundamental longitudinal and transverse periods of all tested buildings are presented. The fundamental mode of vibration has been compared in plots with code formulae (Saudi Building Code, EC8, and UBC 1997). The results indicate that the measured periods of existing buildings are shorter than those given by most empirical code formulas. Recommendations are given based on common design and construction practice in Madinah city.
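
For orientation, the empirical code formulas referred to above typically take the form T = Ct·H^(3/4), with H the building height in metres. The sketch below evaluates this form with coefficient values commonly quoted for reinforced-concrete moment frames in EC8 and UBC 1997; the height and the exact coefficients should be checked against the specific code editions.

```python
# Sketch: empirical fundamental-period estimate T = Ct * H**0.75 (H in metres).
# Coefficients are the values commonly quoted for RC moment frames; verify
# against the relevant code edition before use.
def empirical_period(height_m: float, ct: float) -> float:
    return ct * height_m ** 0.75

H = 30.0  # assumed building height in metres
for code, ct in {"EC8 (RC frames)": 0.075, "UBC 1997 (RC frames)": 0.0731}.items():
    print(f"{code}: T ≈ {empirical_period(H, ct):.2f} s")
```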

Keywords: ambient vibration, fundamental period, RC buildings, infill walls

Procedia PDF Downloads 256
8527 Optical Emission Studies of Laser Produced Lead Plasma: Measurements of Transition Probabilities of the 6P7S → 6P2 Transitions Array

Authors: Javed Iqbal, R. Ahmed, M. A. Baig

Abstract:

We present new data on the optical emission spectra of a laser-produced lead plasma using a pulsed Nd:YAG laser at 1064 nm (pulse energy 400 mJ, pulse width 5 ns, 10 Hz repetition rate) in conjunction with a set of miniature spectrometers covering the spectral range from 200 nm to 720 nm. Well-resolved structure due to the 6p7s → 6p² transition array of neutral lead and a few multiplets of singly ionized lead have been observed. The electron temperatures have been calculated in the range (9000 – 10800) ± 500 K using four methods: the two-line ratio, the Boltzmann plot, the Saha-Boltzmann plot and the Morrata method, whereas the electron number densities have been determined in the range (2.0 – 8.0) ± 0.6 × 10¹⁶ cm⁻³ using the Stark-broadened line profiles of neutral lead lines, singly ionized lead lines and the hydrogen Hα line. The full width at half maximum (FWHM) of a number of neutral and singly ionized lead lines has been extracted by Lorentzian fits to the experimentally observed line profiles. Furthermore, branching fractions have been deduced for eleven lines of the 6p7s → 6p² transition array in lead, and the absolute values of the transition probabilities have been calculated by combining the experimental branching fractions with the lifetimes of the excited levels. The new results are compared with the existing data, showing good agreement.
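
As an illustration of one of the methods listed above, the sketch below estimates the electron temperature from a Boltzmann plot, ln(Iλ/(gA)) versus the upper-level energy, whose slope equals −1/(k_B·T_e). The spectral-line data are placeholders chosen only to yield a temperature of roughly 10⁴ K; they are not measured values.

```python
# Sketch: electron temperature from a Boltzmann plot, ln(I*lambda/(g*A)) vs E_upper.
# The line data below are placeholders, not measured values.
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Placeholder lines: intensity (a.u.), wavelength (nm), g*A (s^-1), upper energy (eV).
I    = np.array([1200.0, 800.0, 110.0, 52.0])
lam  = np.array([405.8, 368.3, 364.0, 357.2])
gA   = np.array([5.0e8, 3.0e8, 2.0e8, 1.5e8])
E_up = np.array([4.37, 4.33, 5.71, 6.13])

y = np.log(I * lam / gA)
slope, _ = np.polyfit(E_up, y, 1)      # slope = -1/(k_B * T_e)
T_e = -1.0 / (k_B * slope)
print(f"Estimated electron temperature: {T_e:.0f} K")   # ~1e4 K for these values
```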

Keywords: LIBS, plasma parameters, transition probabilities, branching fractions, stark width

Procedia PDF Downloads 277
8526 Evaluation of Promoter Hypermethylation in Tissue and Blood of Non-Small Cell Lung Cancer Patients and Association with Survival

Authors: Ashraf Ali, Kriti Upadhyay, Puja Sohal, Anant Mohan, Randeep Guleria

Abstract:

Background: Gene silencing by aberrant promoter hypermethylation is common in lung cancer and is an initiating event in its development. Aim: To evaluate the frequency of gene promoter hypermethylation in the serum and tissue of lung cancer patients. Method: 95 newly diagnosed, untreated, advanced-stage lung cancer patients and 50 cancer-free matched controls were studied. Bisulfite modification of tissue and serum DNA was done, and the modified DNA was used as a template for methylation-specific PCR analysis. Survival was assessed for one year. Results: Of the 95 patients, 82% had non-small cell lung cancer (34% squamous cell carcinoma, 34% non-small cell lung cancer and 14% adenocarcinoma) and 18% had small cell lung cancer. Biopsy revealed that the tissue of 89% and 75% of lung cancer patients and of 85% and 52% of controls had hypermethylated promoters for the MGMT (p=0.35) and p16 (p<0.001) genes, respectively. In serum, 33% and 49% of lung cancer patients and 28% and 43% of controls were positive for the MGMT and p16 genes, respectively. No significant correlation was found between survival and clinico-pathological parameters. Conclusion: The high promoter methylation frequency of the p16 gene in tissue biopsies may be linked with early stages of carcinogenesis. Appropriate follow-up is required to confirm this finding.

Keywords: lung cancer, MS- PCR, methylation, molecular biology

Procedia PDF Downloads 188
8525 A Retrospective Study on the Age of Onset for Type 2 Diabetes Diagnosis

Authors: Mohamed A. Hammad, Dzul Azri Mohamed Noor, Syed Azhar Syed Sulaiman, Majed Ahmed Al-Mansoub, Muhammad Qamar

Abstract:

There is a progressive increase in the prevalence of early-onset Type 2 diabetes mellitus. Early detection of Type 2 diabetes can enhance length and/or quality of life by reducing the severity and frequency of its long-term complications or by preventing or delaying them. The study aims to determine the age of onset at the first diagnosis of Type 2 diabetes mellitus. A retrospective study was conducted in the endocrine clinic at Hospital Pulau Pinang in Penang, Malaysia, from January to December 2016. Records of 519 patients with Type 2 diabetes mellitus were screened to collect demographic data and determine the age at first diagnosis of diabetes mellitus. Patients were classified according to age at diagnosis, gender, and ethnicity. The study included 519 patients aged 55.6±13.7 years, 265 female (51.1%) and 254 male (48.9%). The ethnicity distribution was Malay 191 (36.8%), Chinese 189 (36.4%) and Indian 139 (26.8%). The age at Type 2 diabetes diagnosis was 42±14.8 years: 41.5±13.7 years for females and 42.6±13.7 years for males. The distribution of diabetic onset by ethnicity was Malay at age 40.7±13.7 years, Chinese 43.2±13.7 years and Indian 42.3±13.7 years. Diabetic onset classified by age was as follows: the ≤20 years cohort comprised 33 (6.4%) cases, the >20–≤40 years group 190 (36.6%) patients, the >40–≤60 years group 270 (52%) subjects, and the >60 years group 22 (4.2%) patients. The age at diagnosis ranged between 10 and 73 years. Conclusion: Malays and females have an earlier onset of diabetes than Indians, Chinese and males. More than half of the patients developed diabetes between 40 and 60 years of age. Diabetes mellitus is becoming more common at younger ages (<40 years), and the age at diagnosis of Type 2 diabetes mellitus has decreased over time.

Keywords: age of onset, diabetes diagnosis, diabetes mellitus, Malaysia, outpatients, type 2 diabetes, retrospective study

Procedia PDF Downloads 406
8524 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention from both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed correctly through conversation, because an erroneous change may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication, leading the interlocutor to attempt to remedy the gap through talk. Therefore, this study was an attempt to investigate whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test was administered to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. The speaking sections of the PET were also used as pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was measured as the percentage of clauses with no grammatical errors among the total clauses produced. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced. Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were employed in the experimental groups during the treatment sessions, and the students were required to practice conversational strategies when doing the speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of the statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had a greater impact on encouraging negotiation of meaning and increased the number of conversational strategies used in every session. The findings also indicated that both task types helped learners significantly improve their speaking accuracy, with the reasoning gap tasks being more effective than the information gap tasks in improving learners' speaking accuracy.

Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks

Procedia PDF Downloads 302
8523 Liquid Chromatographic Determination of Alprazolam with ACE Inhibitors in Bulk, Respective Pharmaceutical Products and Human Serum

Authors: Saeeda Nadir Ali, Najma Sultana, Muhammad Saeed Arayne, Amtul Qayoom

Abstract:

The present study describes a simple and fast liquid chromatographic method using an ultraviolet detector for the simultaneous determination of the anxiety-relief medicine alprazolam with ACE inhibitors, i.e., lisinopril, captopril and enalapril, employing a Purospher STAR C18 column (25 cm × 0.46 cm, 5 µm). Separation was achieved within 5 min at ambient temperature using methanol:water (8:2 v/v) adjusted to pH 2.9, monitoring the detector response at 220 nm. Optimum parameters were set up as per the ICH (2006) guidelines. The calibration range was found to be 0.312-10 µg mL⁻¹ for alprazolam and 0.625-20 µg mL⁻¹ for all the ACE inhibitors, with correlation coefficients > 0.998 and detection limits of 85, 37, 68 and 32 ng mL⁻¹ for lisinopril, captopril, enalapril and alprazolam, respectively. Intra-day and inter-day precision and accuracy of the assay were in the acceptable range of 0.05-1.62% RSD and 98.85-100.76% recovery. The method was determined to be robust and useful for the estimation of the studied drugs in dosage formulations and human serum without interference from excipients or serum components.
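
A minimal sketch of the calibration arithmetic behind figures like these: a linear calibration curve is fitted, and the detection and quantitation limits follow from the ICH relations LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S the slope. The concentration-response pairs below are placeholders, not the reported data.

```python
# Sketch: linear calibration with LOD/LOQ per ICH Q2 (LOD = 3.3*sigma/S, LOQ = 10*sigma/S).
# Concentration/peak-area pairs are placeholder values.
import numpy as np

conc = np.array([0.3125, 0.625, 1.25, 2.5, 5.0, 10.0])     # µg/mL
area = np.array([21.0, 40.5, 83.0, 164.0, 331.0, 658.0])   # arbitrary units

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # residual standard deviation

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
r = np.corrcoef(conc, area)[0, 1]
print(f"slope = {slope:.1f}, r = {r:.4f}, LOD = {lod:.3f} µg/mL, LOQ = {loq:.3f} µg/mL")
```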

Keywords: alprazolam, ACE inhibitors, RP HPLC, serum

Procedia PDF Downloads 507
8522 Effect of Building Construction Sizes on Project Delivery Methods in Nigeria

Authors: Nuruddeen Usman, Mohammad Sani

Abstract:

The performance of project delivery methods has been an issue of concern to various stakeholders in the construction industry. The contracting system of project delivery is the traditional system used in the delivery of most public projects in Nigeria, and the direct labor system is most often used as an alternative to it. There have been many complaints about the performance of the contracting system and about the suitability of direct labor as an alternative for the delivery of public projects. Therefore, this paper is aimed at investigating the effect of project size on the project delivery method used for completed public buildings. Questionnaires were self-administered to managerial staff in the study area and analyzed using descriptive statistics. The findings reveal that the contracting system was chosen for large building construction projects with a higher frequency (F) of 40 (76.9%), against 12 (23.1%) for direct labor. For small projects, the results revealed a frequency (F) of 26 (50%) each for the contracting system and the direct labor system. Based on the research findings, the contracting system is recommended for building construction project delivery of all sizes, while the direct labor system can only be used as an alternative for the delivery of small building construction projects.

Keywords: construction size, contracting system, direct labour, effect

Procedia PDF Downloads 449
8521 Chromium-Leaching Study of Cements in Various Environments

Authors: Adriana Estokova, Lenka Palascakova, Martina Kovalcikova

Abstract:

Cement is a basic material used in building construction. Chromium, an indelible non-volatile trace element of the raw materials, occurs in cement clinker in the trivalent or hexavalent form. The hexavalent form of chromium is harmful and allergenic; it has very high water solubility and can therefore easily come into contact with human skin. The paper is aimed at analyzing the content of total chromium in Portland cements and the leaching rate of hexavalent chromium in various leachants: deionized water, Britton-Robinson buffer (used to simulate the natural environment), and hydrochloric acid (HCl). The concentration of total chromium in the Portland cement samples was in the range from 173.2 to 218.5 mg/kg. The content of dissolved hexavalent chromium ranged over 0.23-3.19, 2.0-5.78 and 8.88-16.25 mg/kg in deionized water, Britton-Robinson solution and hydrochloric acid, respectively. The calculated leachable fraction of Cr(VI) from the cement samples was in the range of 0.1-7.58%.
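
The leachable fraction quoted above is simply the dissolved Cr(VI) expressed as a percentage of the total chromium content; the short sketch below shows the arithmetic using end-of-range values from the abstract as an illustrative pairing.

```python
# Sketch: leachable Cr(VI) fraction = dissolved Cr(VI) / total Cr * 100.
def leachable_fraction(cr6_leached_mg_per_kg: float, total_cr_mg_per_kg: float) -> float:
    return 100.0 * cr6_leached_mg_per_kg / total_cr_mg_per_kg

# Illustrative pairing of end-of-range values (HCl leachant, total Cr); actual
# per-sample fractions depend on the matched measurements for each cement.
print(f"{leachable_fraction(16.25, 218.5):.2f} %")   # about 7.4 %
```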

Keywords: environment, cement, chromium, leaching

Procedia PDF Downloads 267
8520 The Impact of Technology on Media Content Regulation

Authors: Eugene Mashapa

Abstract:

The information age has witnessed countless unprecedented technological developments, which call for equally sophisticated capabilities to match these cutting-edge trends. These changes have impacted patterns in the production, distribution, and consumption of media content, a space that the Film and Publication Board (FPB) is concerned with. Consequently, the FPB is keen to understand the nature and impact of these technological changes on media content regulation. This exploratory study sought to investigate how content regulators in high- and middle-income economies have adapted to changes in this space, seeking insights into the technological and operational innovations that facilitate continued relevance in this fast-changing environment. The study is aimed at developing recommendations that could assist and inform the organisation in regulating media content as it evolves. The overall research strategy in this analysis is applied research, and the analytical model adopted is a mixed research design guided by both qualitative and quantitative research instruments. The study revealed that the FPB has been significantly impacted by the unprecedented technological advancements in the media regulation space. Additionally, there is a need for the FPB to understand the current and future penetration of 4IR technology in the industry and its impact on media governance and policy implementation. This ranges from reskilling officials to align with the required technological skills, to developing technological innovations, and to adopting co-regulatory or self-regulatory arrangements together with content distributors, as more content is distributed in higher volumes and with increased frequency. Importantly, initiating an interactive learning process for both FPB employees and the general public can assist the regulator and improve the FPB's operational efficiency and effectiveness.

Keywords: media, regulation, technology, film and publications board

Procedia PDF Downloads 97
8519 Normal Coordinate Analysis, Molecular Structure, Vibrational, Electronic Spectra, and NMR Investigation of 4-Amino-3-Phenyl-1H-1,2,4-Triazole-5(4H)-Thione by Ab Initio HF and DFT Method

Authors: Khaled Bahgat

Abstract:

In the present work, the characterization of the 4-amino-3-phenyl-1H-1,2,4-triazole-5(4H)-thione (APTT) molecule was carried out by quantum chemical methods and vibrational spectral techniques. The FT-IR (4000–400 cm⁻¹) and FT-Raman (4000–100 cm⁻¹) spectra of APTT were recorded in the solid phase. The UV–Vis absorption spectrum of APTT was recorded in the range of 200–400 nm. The molecular geometry, harmonic vibrational frequencies and bonding features of APTT in the ground state have been calculated by the HF and DFT methods using the 6-311++G(d,p) basis set. The complete vibrational frequency assignments were made by normal coordinate analysis (NCA) following the scaled quantum mechanical force field (SQMF) methodology. The molecular stability and bond strength were investigated by applying natural bond orbital (NBO) analysis and natural localized molecular orbital (NLMO) analysis. The electronic properties, such as excitation energies, absorption wavelengths, and HOMO and LUMO energies, were calculated by the time-dependent DFT (TD-DFT) approach. The ¹H and ¹³C nuclear magnetic resonance chemical shifts of the molecule were calculated using the gauge-including atomic orbital (GIAO) method and compared with experimental results. Finally, the calculated results were used to simulate the infrared, FT-Raman and UV spectra of the title compound, which show good agreement with the observed spectra.

Keywords: 4-amino-3-phenyl-1H-1,2,4-triazole-5(4H)-thione, vibrational assignments, normal coordinate analysis, quantum mechanical calculations

Procedia PDF Downloads 464
8518 Effect of Temperature and CuO Nanoparticle Concentration on Thermal Conductivity and Viscosity of a Phase Change Material

Authors: V. Bastian Aguila, C. Diego Vasco, P. Paula Galvez, R. Paula Zapata

Abstract:

The main results of an experimental study of the effect of temperature and nanoparticle concentration on the thermal conductivity and viscosity of a nanofluid are presented. The nanofluid was made using octadecane as the base fluid and spherical CuO nanoparticles of 75 nm (MkNano). Since the base fluid is a phase change material (PCM) intended for thermal storage applications, the engineered nanofluid is referred to as a nanoPCM. Three nanoPCMs were prepared through the two-step method (2.5, 5.0 and 10.0% w/v). In order to increase the stability of the nanoPCM, the surface of the CuO nanoparticles was modified with sodium oleate, which was verified by IR analysis. The modified CuO nanoparticles were dispersed using an ultrasonic horn (Hielscher UP50H) for one hour (amplitude of 180 μm at 50 W). The thermal conductivity was measured using a thermal properties analyzer (KD2-Pro) in the temperature range of 30ºC to 40ºC. The viscosity was measured using a Brookfield DV2T-LV viscometer at 30 RPM in the temperature range of 30ºC to 55ºC. The obtained results for the nanoPCM showed that the thermal conductivity is almost constant over the analyzed temperature range, while the viscosity decreases non-linearly with temperature. With respect to the effect of nanoparticle concentration, both thermal conductivity and viscosity increased with nanoparticle concentration: the thermal conductivity rose by up to 9% with respect to the base fluid and the viscosity increased by up to 60%, in both cases for the highest concentration. Finally, viscosity measurements at different rotation speeds (30 RPM - 80 RPM) showed that the addition of nanoparticles modifies the rheological behavior of the base fluid, from Newtonian to a viscoplastic (Bingham) or shear-thinning (power-law) non-Newtonian behavior.
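
As an illustration of the two non-Newtonian models named above, the sketch below fits placeholder shear-rate/shear-stress data to the Bingham form τ = τ0 + μp·γ̇ and the power-law form τ = K·γ̇ⁿ; all numerical values are assumed, not the measured data.

```python
# Sketch: fitting Bingham (tau = tau0 + mu_p*gdot) and power-law (tau = K*gdot**n)
# models to placeholder shear-rate / shear-stress data.
import numpy as np
from scipy.optimize import curve_fit

gdot = np.array([6.3, 10.5, 14.7, 16.8])     # 1/s, placeholder shear rates
tau = np.array([0.95, 1.30, 1.62, 1.78])     # Pa, placeholder shear stresses

bingham = lambda g, tau0, mu_p: tau0 + mu_p * g
power_law = lambda g, K, n: K * g ** n

(tau0, mu_p), _ = curve_fit(bingham, gdot, tau)
(K, n), _ = curve_fit(power_law, gdot, tau, p0=(0.2, 0.8))

print(f"Bingham:   tau0 = {tau0:.3f} Pa, mu_p = {mu_p * 1000:.1f} mPa.s")
print(f"Power-law: K = {K:.3f} Pa.s^n, n = {n:.2f}  (n < 1 indicates shear thinning)")
```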

Keywords: NanoPCM, thermal conductivity, viscosity, non-Newtonian fluid

Procedia PDF Downloads 413
8517 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC

Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija

Abstract:

The present research work was aimed at determining the stability of cefprozil monohydrate (CEFZ) under the various stress degradation conditions recommended by the International Conference on Harmonization (ICH) guideline Q1A(R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic and thermal stress conditions, and the drug was found to be susceptible to degradation under all of them. Separation was carried out using a high-performance thin-layer chromatography (HPTLC) system. Aluminum plates pre-coated with silica gel 60F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate: acetone: methanol: water: glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system was found to give a compact spot for cefprozil monohydrate (Rf 0.45). The linear regression analysis showed a good linear relationship in the concentration range of 200-5,000 ng/band for cefprozil monohydrate. Percent recovery for the drug was found to be in the range of 98.78-101.24%. The method was found to be reproducible, with a percent relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the said concentration range. The method was validated for precision, accuracy, specificity and robustness, and it has been successfully applied to the analysis of the drug in its tablet dosage form. Three unknown degradation products formed under the various stress conditions were isolated by preparative HPTLC and characterized by mass spectroscopic studies.

Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method

Procedia PDF Downloads 295
8516 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models

Authors: Ravi Ande, Mousumi Hazari

Abstract:

One of the common geophysical techniques for mapping subsurface geo-electrical structures, for extensive hydro-geological research, and for engineering and environmental geophysics applications is the use of time domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any arbitrary point inside or outside the source loop. In general, one can acquire data using one of the following configurations with a large loop source: with the receiver at the center point of the loop (central loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, as compared to the in-loop and offset-loop systems, the central loop system (for ground surveys) and the coincident loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contaminated by hazardous waste, and for estimating the thickness of the permafrost layer. Because a proper analytical expression for the TEM response of the large loop system over a layered earth model does not exist, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. The forward computation is initially carried out in the frequency domain using the EMLCLLER algorithm; accordingly, the forward calculation scheme in NLSTCI is modified to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
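
A minimal numerical sketch of the frequency-to-time conversion referred to above: for a causal response, f(t) = (2/π)∫₀^∞ Re F(ω) cos(ωt) dω. A test spectrum F(ω) = 1/(a + iω), whose exact time-domain counterpart is e^(−at), stands in for the layered-earth response so the result can be checked.

```python
# Sketch: time-domain response from a frequency-domain response via the inverse
# Fourier cosine transform, f(t) = (2/pi) * integral_0^inf Re[F(w)] * cos(w*t) dw.
# F(w) = 1/(a + i*w) is a placeholder test spectrum with known answer exp(-a*t).
import numpy as np
from scipy.integrate import quad

a = 2.0
F_real = lambda w: a / (a**2 + w**2)          # Re[1/(a + i*w)]

def f_time(t, w_max=2000.0):
    # Oscillatory quadrature with a cos(w*t) weight over [0, w_max].
    val, _ = quad(F_real, 0.0, w_max, weight="cos", wvar=t, limit=200)
    return (2.0 / np.pi) * val

for t in (0.1, 0.5, 1.0):
    print(f"t = {t}: numeric = {f_time(t):.4f}, exact = {np.exp(-a * t):.4f}")
```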

Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine

Procedia PDF Downloads 86
8515 Physical Properties of Alkali Resistant-Glass Fibers in Continuous Fiber Spinning Conditions

Authors: Ji-Sun Lee, Soong-Keun Hyun, Jin-Ho Kim

Abstract:

In this study, a glass fiber is fabricated using a continuous spinning process from alkali-resistant (AR) glass with 4 wt% zirconia. In order to confirm the melting properties of the marble glass, the raw material is placed in a Pt crucible, melted at 1650 ℃ for 2 h, and then annealed. In order to confirm the transparency of the clear marble glass, the visible transmittance is measured, and the fiber spinning conditions are investigated using high-temperature viscosity measurements. A change in the fiber diameter is observed as a function of the winding speed in the range of 100–900 rpm; it is also verified as a function of the fiberizing temperature in the range of 1200–1260 ℃. The optimum winding speed and spinning temperature are 500 rpm and 1240 ℃, respectively. The properties of the prepared spun fiber are confirmed using optical microscopy, tensile strength, modulus, and alkali-resistance tests.
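
The dependence of fiber diameter on winding speed follows from mass conservation: for a constant volumetric throughput Q per filament drawn at take-up speed v, the diameter is d = sqrt(4Q/(πv)). The sketch below evaluates this relation for an assumed throughput and drum diameter; it is illustrative and not fitted to the study's data.

```python
# Sketch: fiber diameter from mass conservation, d = sqrt(4*Q/(pi*v)).
# Throughput Q and drum diameter are assumed values, not taken from the study.
import math

def fiber_diameter_um(q_mm3_per_s: float, rpm: float, drum_diameter_m: float = 0.2) -> float:
    v_mm_per_s = rpm / 60.0 * math.pi * drum_diameter_m * 1000.0   # take-up speed
    d_mm = math.sqrt(4.0 * q_mm3_per_s / (math.pi * v_mm_per_s))
    return d_mm * 1000.0                                           # micrometres

Q = 0.5  # mm^3/s per filament (assumed)
for rpm in (100, 500, 900):
    print(f"{rpm:>3} rpm -> d ≈ {fiber_diameter_um(Q, rpm):.1f} µm")
```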

Keywords: glass composition, fiber diameter, continuous filament fiber, continuous spinning, physical properties

Procedia PDF Downloads 312
8514 Proxisch: An Optimization Approach of Large-Scale Unstable Proxy Servers Scheduling

Authors: Xiaoming Jiang, Jinqiao Shi, Qingfeng Tan, Wentao Zhang, Xuebin Wang, Muqian Chen

Abstract:

Nowadays, big companies such as Google and Microsoft, which have adequate proxy servers, have perfectly implemented parallel web crawlers for given websites. But due to the lack of expensive proxy servers, it is still a puzzle for researchers to crawl large amounts of information from a single website in parallel. In this case, a good choice for researchers is to use free public proxy servers that are themselves crawled from the Internet. In order to improve the efficiency of a web crawler, the following two issues should be considered first: (1) tasks may fail owing to the instability of free proxy servers; (2) a proxy server will be blocked if it visits a single website too frequently. In this paper, we propose Proxisch, an optimization approach for scheduling large numbers of unstable proxy servers, which allows anyone to run a web crawler efficiently at extremely low cost. Proxisch is designed to work efficiently by making maximum use of reliable proxy servers. To solve the second problem, it establishes a frequency control mechanism which ensures that the visiting frequency of any chosen proxy server stays below the website's limit. The results show that our approach performs better than the other scheduling algorithms.
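
The abstract does not give implementation details, so the sketch below is only a generic illustration of the two ingredients it names: a priority queue of proxies ordered by reliability, plus a per-proxy visiting-frequency cap. All class names, parameters and scoring rules are illustrative, not the Proxisch algorithm itself.

```python
# Illustrative sketch (not Proxisch itself): schedule requests over unstable
# proxies with a reliability-ordered priority queue plus a per-proxy rate cap,
# so no single proxy exceeds the target website's visiting-frequency limit.
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class Proxy:
    priority: float                 # lower = more reliable (e.g. failure score)
    address: str = field(compare=False)
    next_allowed: float = field(compare=False, default=0.0)

class ProxyScheduler:
    def __init__(self, addresses, min_interval_s=5.0):
        self.min_interval_s = min_interval_s        # per-proxy frequency cap
        self.heap = [Proxy(0.0, a) for a in addresses]
        heapq.heapify(self.heap)

    def acquire(self):
        """Pop the most reliable proxy that is currently allowed to fire."""
        deferred = []
        try:
            while self.heap:
                proxy = heapq.heappop(self.heap)
                if time.monotonic() >= proxy.next_allowed:
                    return proxy
                deferred.append(proxy)
            raise RuntimeError("no proxy currently available")
        finally:
            for p in deferred:                      # put rate-limited proxies back
                heapq.heappush(self.heap, p)

    def release(self, proxy, success: bool):
        """Update the reliability score and re-enqueue with its next allowed time."""
        proxy.priority = 0.9 * proxy.priority + (0.0 if success else 1.0)
        proxy.next_allowed = time.monotonic() + self.min_interval_s
        heapq.heappush(self.heap, proxy)

scheduler = ProxyScheduler(["10.0.0.1:8080", "10.0.0.2:8080"])
p = scheduler.acquire()          # ... issue the crawl request through p ...
scheduler.release(p, success=True)
```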

Keywords: proxy server, priority queue, optimization algorithm, distributed web crawling

Procedia PDF Downloads 205
8513 The Use of PD and Tanδ Characteristics as Diagnostic Technique for the Insulation Integrity of XLPE Insulated Cable Joints

Authors: Mazen Al-Bulaihed, Nissar Wani, Abdulrahman Al-Arainy, Yasin Khan

Abstract:

Partial discharge (PD) measurements are widely used for diagnostic purposes in electrical equipment in power systems. The main purpose of these measurements is to prevent large power failures, as cables are prone to aging, which usually results in embrittlement, cracking and eventual failure of the insulating and sheathing materials, exposing the conductor and risking a potential short circuit, a likely cause of electrical fire. Many distribution networks rely heavily on medium voltage (MV) power cables, and the presence of joints in these networks is a vital part of serving the consumer demand for electricity continuously. Such measurements become even more important as the extent of this dependence increases. Moreover, it is known that partial discharges in joints and terminations are difficult to track and are the most critical points of failure in large power systems. This paper discusses diagnostic measurements on four samples of XLPE-insulated cable joints, each containing a different type of defect. Experiments were carried out by measuring PD and tanδ at high voltage applied at very low frequency. The results show the importance of combining PD and tanδ for effective cable assessment.

Keywords: partial discharge, tan delta, very low frequency, XLPE cable

Procedia PDF Downloads 152
8512 Effects of Viscous and Pressure Forces in Vortex and Wake Induced Vibrations

Authors: Ravi Chaithanya Mysa, Abouzar Kaboudian, Boo Cheong Khoo, Rajeev Kumar Jaiman

Abstract:

Cross-flow vortex-induced vibrations of a circular cylinder are compared with the wake-induced oscillations of the downstream cylinder of a tandem cylinder arrangement. It is known that the synchronization of the frequency of vortex shedding with the natural frequency of the structure leads to large-amplitude motions. For the tandem cylinders, the large amplitudes found for the downstream cylinder are compared with the single-cylinder setup. In this work, in the tandem arrangement, the upstream cylinder is fixed and the downstream cylinder is free to oscillate in the transverse direction. We show that the wake from the upstream cylinder interacts with the downstream cylinder, which influences the response of the coupled system. Extensive numerical experiments have been performed on single-cylinder as well as tandem-cylinder arrangements in cross-flow. Here, the wake interactions and the forces they generate are systematically studied. The ratio of the viscous loads to the pressure loads is found to play a major role in the displacement response of the single and tandem cylinder arrangements, as the viscous forces dissipate the energy.

Keywords: circular cylinder, vortex-shedding, VIV, wake-induced, vibrations

Procedia PDF Downloads 360
8511 Preparation of Carbon Nanofiber Reinforced HDPE Using Dialkylimidazolium as a Dispersing Agent: Effect on Thermal and Rheological Properties

Authors: J. Samuel, S. Al-Enezi, A. Al-Banna

Abstract:

High-density polyethylene reinforced with carbon nanofibers (HDPE/CNF) has been prepared via melt processing using dialkylimidazolium tetrafluoroborate (an ionic liquid) as a dispersion agent. The prepared samples were characterized by thermogravimetric (TGA) and differential scanning calorimetric (DSC) analyses. The samples blended with the imidazolium ionic liquid exhibit higher thermal stability. DSC analysis showed clear miscibility of the ionic liquid in the HDPE matrix and a single endothermic peak. The melt rheological analysis of the HDPE/CNF composites was performed using an oscillatory rheometer. The influence of CNF and ionic liquid concentration (0, 0.5, and 1 wt%) on the viscoelastic parameters was investigated at 200 °C over an angular frequency range of 0.1 to 100 rad/s. The rheological analysis shows shear-thinning behavior for the composites. An improvement in the viscoelastic properties was observed as the nanofiber concentration increased, and the increase in the modulus values was attributed to the structural rigidity imparted by the high-aspect-ratio CNF. The modulus values and complex viscosity of the composites increased significantly at low frequencies. Composites blended with the ionic liquid exhibit slightly lower values of complex viscosity and modulus than the corresponding HDPE/CNF compositions. Therefore, a reduction in melt viscosity is an additional benefit for polymer composite processing resulting from the wetting effect of the polymer-ionic liquid combination.

Keywords: high-density polyethylene, carbon nanofibers, ionic liquid, complex viscosity

Procedia PDF Downloads 117
8510 The Effect of Extremely Low Frequency Magnetic Field on Rats Brain

Authors: Omar Abdalla, Abdelfatah Ahmed, Ahmed Mustafa, Abdelazem Eldouma

Abstract:

The purpose of this study is to evaluate the effect of an extremely low frequency magnetic field on the Wistar rat brain. Twenty-five rats were used in this study and divided into five groups of five rats each, as follows: Group 1: the control group, which was not exposed to the energized field; Group 2: rats exposed to a magnetic field with an intensity of 0.6 mT (2 hours/day); Group 3: rats exposed to a magnetic field of 1.2 mT (2 hours/day); Group 4: rats exposed to a magnetic field of 1.8 mT (2 hours/day); Group 5: rats exposed to a magnetic field of 2.4 mT (2 hours/day). All groups were exposed for seven days. A maze was designed and the average time to reach the decoy under specified conditions was calculated. The average times before exposure for the groups were G2=330 s, G3=172 s, G4=500 s and G5=174 s, respectively. After exposing all groups to the ELF-MF, the measured times were G2=465 s, G3=388 s, G4=501 s and G5=442 s. It was observed that the average time increased with field strength. Histological samples of the frontal lobe of the brain were taken for all groups, and lesions, atrophy, empty vacuoles and a disordered choroid plexus were found in the frontal lobe. Finally, we observed that the disorder of the choroid plexus in the histological results and the Alzheimer's symptoms increased as the magnetic field increased.

Keywords: nonionizing radiation, biophysics, magnetic field, shrinkage

Procedia PDF Downloads 537
8509 Cellular Automata Modelling of Titanium Alloy

Authors: Jyoti Jha, Asim Tewari, Sushil Mishra

Abstract:

The alpha-beta titanium alloy Ti-6Al-4V is the most common alloy in the aerospace industry. The hot workability of Ti–6Al–4V has been investigated by means of hot compression tests carried out in the 750–950 °C temperature range and the 0.001–10 s⁻¹ strain rate range. The stress-strain plots obtained from the Gleeble 3800 tests show dynamic recrystallization at a temperature of 950 °C. The microstructural characteristics of the deformed specimens have been studied and correlated with the test temperature, total strain and strain rate. Finite element analysis in DEFORM 2D has been carried out to examine the effect of the flow stress parameters in different zones of the deformed sample. A dynamic recrystallization simulation based on cellular automata has been performed in DEFORM 2D to simulate the effects of hardening and recovery during DRX. The simulated results predict the grain growth and DRX in the deformed sample well.
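
The abstract does not detail the CA formulation, so the sketch below is only a generic cellular-automaton picture of recrystallization: seeded nuclei grow into neighbouring deformed cells with an assumed switching probability per step. Grid size, nucleus count and growth probability are assumptions and do not represent the DEFORM 2D model.

```python
# Generic cellular-automaton sketch of recrystallized grains growing into a
# deformed matrix (state 0 = deformed, >0 = recrystallized grain id).
# Not the authors' DEFORM 2D model; all parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
N, n_nuclei, p_grow, n_steps = 100, 20, 0.35, 60

grid = np.zeros((N, N), dtype=int)
ys, xs = rng.integers(0, N, n_nuclei), rng.integers(0, N, n_nuclei)
grid[ys, xs] = np.arange(1, n_nuclei + 1)               # seed recrystallization nuclei

for _ in range(n_steps):
    new = grid.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # von Neumann neighbours
        nbr = np.roll(grid, (dy, dx), axis=(0, 1))
        switch = (grid == 0) & (nbr > 0) & (rng.random((N, N)) < p_grow)
        new[switch] = nbr[switch]
    grid = new

print(f"Recrystallized fraction after {n_steps} steps: {(grid > 0).mean():.2f}")
```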

Keywords: compression test, cellular automata, DEFORM, DRX

Procedia PDF Downloads 297
8508 Assessing the Impacts of Long-Range Forest Fire Emission Transport on Air Quality in Toronto, Ontario, Using MODIS Fire Data and HYSPLIT Trajectories

Authors: Bartosz Osiecki, Jane Liu

Abstract:

Pollutants emitted from forest fires, such as PM₂.₅ and carbon monoxide (CO), have been found to impact the air quality of distant regions through long-range transport. PM₂.₅ is of particular concern due to its transport capacity and its implications for human respiratory and cardiovascular health. As such, significant increases in PM₂.₅ concentrations have been exhibited in urban areas downwind of fire sources. This study seeks to expand on this literature by evaluating the impacts of long-range forest fire emission transport on air quality in Toronto, Ontario, as a means of evaluating the vulnerability of this major urban center to distant fire events. In order to draw correlations between the fire events and the air pollution episode in Toronto, MODIS fire count data and HYSPLIT trajectories are used to assess the date, location, and severity of the fires and to track the trajectory of the emissions. Forward and back-trajectories are run, terminating at the West Toronto air monitoring station. PM₂.₅ and CO concentrations in Toronto during September 2017 are found to be significantly elevated, which is likely attributable to the fire activity. Other sites in Ontario, including Toronto (East, North, Downtown), Mississauga, Brampton, and Hamilton (Downtown), exhibit similar peaks in PM₂.₅ concentrations. This work sheds light on the non-local, natural factors influencing air quality in urban areas. This is especially important in the context of climate change, which is expected to exacerbate intense forest fire events in the future.

Keywords: air quality, forest fires, PM₂.₅, Toronto

Procedia PDF Downloads 123
8507 Frequency of Gastrointestinal Manifestations in Systemic Sclerosis and Impact of Rituximab Treatment

Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova

Abstract:

Objectives: Gastrointestinal involvement is one of the most common manifestations of systemic sclerosis (SSc). The aim of our study was to assess the frequency of gastrointestinal manifestations in SSc patients (pts) with interstitial lung disease (ILD) and their changes under rituximab (RTX) therapy. Methods: 103 pts with SSc were included in this study. The mean follow-up period was 12.6±10.7 months, the mean age was 47±12.9 years, 87 pts (84%) were female, and 55 pts (53%) had the diffuse cutaneous subset of the disease. The mean disease duration was 6.2±5.5 years. All pts had ILD and were positive for ANA; 67% of them were positive for anti-topoisomerase-1. All patients received prednisolone at a dose of 11.3±4.5 mg/day, and 47% of them were receiving immunosuppressants at inclusion. Pts received RTX due to the ineffectiveness of previous therapy for ILD. The cumulative mean dose of RTX was 1.7±0.6 grams. 90% of pts received omeprazole at a dose of 20-40 mg/day. Results: At inclusion, dysphagia was observed in 76 pts (74%), early satiety or vomiting in 32 pts (31%), and diarrhea in 20 pts (19%). We did not observe any significant changes in gastrointestinal manifestations during RTX therapy. There was a decrease in the number of pts with dysphagia from 76 (74%) to 66 (64%), but it was not significant, and the numbers of pts with early satiety or vomiting and with diarrhea did not change. Conclusion: In our study, gastrointestinal involvement was observed in most of the pts with SSc-ILD. We did not find any significant changes in gastrointestinal manifestations during RTX therapy; RTX does not worsen gastrointestinal manifestations in SSc-ILD.

Keywords: systemic sclerosis, dysphagia, rituximab, gastrointestinal manifestations

Procedia PDF Downloads 76
8506 Chatbots as Language Teaching Tools for L2 English Learners

Authors: Feiying Wu

Abstract:

Chatbots are computer programs that attempt to engage a human in a dialogue; they originated in the 1960s with MIT's ELIZA but have become widespread more recently, as advances in language technology have produced chatbots of increasing linguistic quality and sophistication, giving them the potential to serve as tools for Computer-Assisted Language Learning (CALL). The aim of this article is to assess the feasibility of using two chatbots, Mitsuku and CleverBot, as pedagogical tools for learning English as a second language by simulating L2 learners with distinct English proficiencies. The input of the simulated learners is measured with AntWordProfiler to match each user's expected vocabulary proficiency. In total, there are four chat sessions, as each chatbot converses with both a beginner and an advanced learner. The evaluation focuses on the chatbots' responses from a linguistic standpoint, encompassing the vocabulary and sentence levels. The vocabulary level is assessed through the vocabulary range and the reaction to misspelled words, while the sentence level is assessed through grammatical accuracy and responsiveness to poorly formed sentences. In addition, the assessment includes 25% lexically and grammatically incorrect input to determine the chatbots' corrective ability towards different linguistic forms. Based on statistical evidence and illustrative examples, and despite the small sample size, neither Mitsuku nor CleverBot is ideal as an educational tool, judging by their performance in word range, grammatical accuracy, topic range, and corrective feedback for incorrect words and sentences; they are better suited as conversational tools for beginners of L2 English.

Keywords: chatbots, CALL, L2, corrective feedback

Procedia PDF Downloads 73
8505 Increasing the Frequency of Laser Impulses with Optical Choppers with Rotational Shafts

Authors: Virgil-Florin Duma, Dorin Demian

Abstract:

Optical choppers are among the most common optomechatronic devices, utilized in numerous applications from radiometry to telescopes and biomedical imaging. The classical configuration has a rotating disk with windows with linear margins. This research points out the laser signals that can be obtained with these classical choppers, as well as with another, novel, patented configuration of eclipse choppers (i.e., with rotating disks with windows with non-linear margins, oriented outwards or inwards). Approximately triangular laser signals can be obtained with eclipse choppers, in contrast to the approximately sinusoidal signals of the classical devices. The main topic of this work is another novel device: choppers with shafts of different shapes and with slits of various profiles (patent pending). A significant improvement that can be obtained with regard to disk choppers refers to the chop frequencies of the laser signals: while 1 kHz is the typical limit for disk choppers, a more than 20-fold increase in the chop frequency can be obtained with choppers with shafts. Their transmission functions are also discussed for different types of laser beams. Acknowledgments: This research is supported by the Romanian National Authority for Scientific Research through the project PN-III-P2-2.1-BG-2016-0297.
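
For orientation, the chop frequency of any rotating chopper is simply the number of apertures passing the beam per second, f = N·(rpm/60). The sketch below evaluates this relation for hypothetical aperture counts and rotation speeds to show how a roughly 20-fold gain over the ~1 kHz disk-chopper regime could arise; the numbers are assumptions, not the devices' specifications.

```python
# Sketch: chop frequency f = N_apertures * (rotation speed in rev/s).
# Aperture counts and speeds are assumed, illustrative values.
def chop_frequency_hz(n_apertures: int, rpm: float) -> float:
    return n_apertures * rpm / 60.0

print(f"disk chopper : {chop_frequency_hz(n_apertures=30, rpm=2000):.0f} Hz")    # ~1 kHz regime
print(f"shaft chopper: {chop_frequency_hz(n_apertures=30, rpm=40000):.0f} Hz")   # ~20x higher
```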

Keywords: laser signals, laser systems, optical choppers, optomechatronics, transfer functions, eclipse choppers, choppers with shafts

Procedia PDF Downloads 184