Search results for: fault detector
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 947

107 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm

Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou

Abstract:

Quantitative research on the main controlling factors of lost circulation has received little attention and has typically relied on a single data source. Using an unmanned intervention algorithm to find the main controlling factors of lost circulation allows all measurable parameters to be adopted. The degree of lost circulation is characterized by the loss rate, which is taken as the objective function. Geological, engineering, and fluid data are used as layers, and 27 factors such as wellhead coordinates and weight on bit (WOB) are used as dimensions. Data classification is implemented to determine the independent variables of the function. The mathematical equation relating the loss rate to the 27 influencing factors is established by the multiple regression method, and the method of undetermined coefficients is used to solve for the coefficients of the equation. Only three factors have t-test values greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. Funnel viscosity, final shear force, and drilling time were selected as the main controlling factors by the elimination method, the contribution rate method, and the functional method. The calculated values for the two wells used for verification differ from the actual values by -3.036 m³/h and -2.374 m³/h, with errors of 7.21% and 6.35%. The influence of the engineering factor (drilling time) on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of all three factors is less than that of the geological factors. The best combination of funnel viscosity, final shear force, and drilling time was calculated quantitatively; the minimum loss rate of lost-circulation wells in the Shunbei area is 10 m³/h. It follows that the controllable factors can only slow down the leakage but cannot fundamentally eliminate it, which is consistent with the karst caves and fractures characteristic of the Shunbei fault-solution oil and gas reservoir.
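
The regression step described above can be sketched in miniature. The following Python example fits a loss-rate equation by ordinary least squares via the normal equations, using only three of the 27 factors (funnel viscosity, final shear force, and drilling time, names taken from the abstract); all data values are invented for illustration, not field data from Shunbei.

```python
# Minimal sketch of the multiple-regression step: fit loss rate Q (m3/h)
# against a few influencing factors. All data values are invented.

def solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_fit(X, y):
    """Ordinary least squares with intercept: solve (X'X) beta = X'y."""
    Xd = [[1.0] + list(row) for row in X]
    p = len(Xd[0])
    XtX = [[sum(r[i] * r[j] for r in Xd) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yk for r, yk in zip(Xd, y)) for i in range(p)]
    return solve(XtX, Xty)

# Hypothetical wells: (funnel viscosity s, final shear force Pa, drilling time h)
X = [(30, 4, 10), (45, 6, 12), (50, 8, 8), (35, 5, 15), (60, 7, 9), (40, 6, 11)]
true_beta = [2.0, 0.5, -0.3, 0.1]   # intercept plus three coefficients
y = [true_beta[0] + true_beta[1] * v + true_beta[2] * s + true_beta[3] * t
     for v, s, t in X]

beta = ols_fit(X, y)   # recovers true_beta on this noise-free synthetic data
```

Since the synthetic data is noise-free, the fitted coefficients coincide with the generating ones; on field data, the t- and F-tests mentioned in the abstract would then screen the 27 candidate factors.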

Keywords: drilling and completion, drilling fluid, lost circulation, loss rate, main controlling factors, unmanned intervention algorithm

Procedia PDF Downloads 89
106 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability

Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto

Abstract:

The availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and by maintenance management policies. Reliability-centered maintenance (RCM) is an established method of analysis and the main reference for maintenance planning. This method considers the consequences of failure in its implementation but does not address the further risks associated with failures, such as downtime, loss of production, or high maintenance costs. The risk-based maintenance (RBM) technique provides support strategies to minimize the risks posed by failure and to derive maintenance tasks with regard to cost-effectiveness. Meanwhile, condition-based maintenance (CBM) focuses on condition monitoring, which allows maintenance or other action to be planned and scheduled so as to avoid the risk of failure ahead of time-based maintenance. RCM, RBM, and CBM are applied in thermal power plants individually or in combinations such as RCM with RBM or RCM with CBM. Implementing all three techniques in an integrated maintenance approach will increase the availability of thermal power plants compared to using the techniques individually or in pairs of two. This study applies reliability-, risk-, and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates a Priority Maintenance Index (MPI), obtained by multiplying the Risk Priority Number (RPN) by the Risk Index (RI), and a Failure Defense Task (FDT), which can generate condition monitoring and assessment tasks in addition to maintenance tasks. Both the MPI and the FDT, obtained from the development of a functional tree, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement maintenance plans and schedules, condition monitoring and assessment, and ultimately to perform an availability analysis. The results of this study indicate that reliability-, risk-, and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
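
As a rough illustration of the index described above, the sketch below computes the MPI as the product of an FMEA-style RPN (severity × occurrence × detection) and a risk index, and ranks hypothetical plant components by it. All component names and scores are invented; in the study the RI and FDT come from fault-tree and risk analysis, not from scores like these.

```python
# Hedged sketch: MPI = RPN x RI, with RPN = severity x occurrence x detection
# from a conventional FMEA scoring. Component names and scores are invented.

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

def mpi(severity, occurrence, detection, risk_index):
    return rpn(severity, occurrence, detection) * risk_index

components = {
    "boiler feed pump": (8, 3, 4, 2.0),
    "condenser":        (6, 2, 3, 1.5),
    "air preheater":    (4, 5, 2, 1.0),
}

# Rank components by MPI, highest maintenance priority first
ranked = sorted(components, key=lambda c: mpi(*components[c]), reverse=True)
```

A planner would then attach the FDT (monitoring and assessment tasks) to the top-ranked items first.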

Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT

Procedia PDF Downloads 774
105 Persistence of Ready Mix (Chlorpyriphos 50% + Cypermethrin 5%), Cypermethrin and Chlorpyriphos in Soil under Okra Fruits

Authors: Samriti Wadhwa, Beena Kumari

Abstract:

Background and Significance: Residue levels of a ready mix (chlorpyriphos 50% + cypermethrin 5%), and of cypermethrin and chlorpyriphos individually, were determined in sandy loam soil under okra fruits (variety Varsha Uphar). A field experiment was conducted at the Research Farm of the Department of Entomology, Chaudhary Charan Singh Haryana Agricultural University, Hisar, Haryana, India. The persistence behavior of cypermethrin and chlorpyriphos was studied following application of the pre-mix formulation Action-505 EC, chlorpyriphos (Radar 20 EC), and cypermethrin (Cyperkill 10 EC) at the recommended dose and at double the recommended dose, along with a control, at the fruiting stage. Pesticide application also leads to a decline in the soil acarine fauna that is instrumental in breaking down litter and thereby releasing minerals into the soil, so this study also allows the safety of these pesticides for soil health to be evaluated. Methodology: Action-505 EC (chlorpyriphos 50% + cypermethrin 5%) at 275 g a.i. ha⁻¹ (single dose) and 550 g a.i. ha⁻¹ (double dose), chlorpyriphos (Radar 20 EC) at 200 g a.i. ha⁻¹ (single dose) and 400 g a.i. ha⁻¹ (double dose), and cypermethrin (Cyperkill 10 EC) at 50 g a.i. ha⁻¹ (single dose) and 100 g a.i. ha⁻¹ (double dose) were applied at the fruiting stage of the okra crop. Soil samples from the okra field were collected at 0 days (1 h after spray) and 1, 3, 5, 7, 10, and 15 days after application, and at harvest, along with a control soil sample. After air drying, cleanup through Florisil and activated charcoal, and elution with hexane:acetone (9:1), residues in the soils were estimated by a gas chromatograph equipped with a capillary column and an electron capture detector. Results: No persistence of cypermethrin from the ready mix was observed in soil under okra fruits at either the single or the double dose.
In the case of chlorpyriphos from the ready mix, the average initial deposits on day 0 (1 h after treatment) were 0.015 mg kg⁻¹ and 0.036 mg kg⁻¹, which persisted up to 5 days and up to 7 days for the single and double dose, respectively; thereafter the residues fell below the detectable level of 0.010 mg kg⁻¹. For cypermethrin applied individually, the average initial deposits on day 0 (1 h after treatment) were 0.008 mg kg⁻¹ and 0.012 mg kg⁻¹, which persisted up to 3 days and 5 days for the single and double dose, respectively, after which the residues fell below the detectable level. The initial deposits of chlorpyriphos applied individually in soil were 0.055 mg kg⁻¹ and 0.113 mg kg⁻¹, which persisted up to 7 days and 10 days at the lower and higher dose, respectively, after which the residues fell below the determination level. Conclusion: In soil under the okra crop, only individually applied cypermethrin persisted at both doses, whereas no persistence of cypermethrin from the ready mix was observed. The persistence of individually applied chlorpyriphos was greater than that of chlorpyriphos from the ready mix at both doses. Overall, chlorpyriphos persisted longer in soil under the okra crop than cypermethrin.

Keywords: chlorpyriphos, cypermethrin, okra, ready mix, soil

Procedia PDF Downloads 143
104 Development of Power System Stability by Reactive Power Planning in a Wind Power Plant with Doubly Fed Induction Generators

Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Oriol Gomis Bellmunt, Vinicius Albernaz Lacerda Freitas

Abstract:

The use of distributed and renewable sources in power systems has grown significantly in recent years. Among the most popular sources are wind farms, which have grown massively. However, when wind farms are connected to the grid, they can cause problems such as reduced voltage stability, frequency fluctuations, and reduced dynamic stability. Variable-speed (asynchronous) generators, especially doubly fed induction generators (DFIGs), are used because of the uncontrollability of wind speed. The most important disadvantage of DFIGs is their sensitivity to voltage drops: in the case of faults, a large amount of reactive power is drawn. FACTS devices such as the SVC and STATCOM are therefore suitable for improving system output performance; they increase the capacity of lines and help the system ride through network fault conditions. In this paper, in addition to modeling the reactive power control system in a DFIG with its converter, FACTS devices are used with a DFIG wind turbine to improve the stability of a power system containing two synchronous sources. Optimal control systems are designed for the employed FACTS devices to minimize the fluctuations caused by system disturbances. For this purpose, a method is proposed for selecting the nine parameters of the phase lead-lag compensators of the reactive power compensators. The design algorithm is formulated as an optimization problem that searches for the optimal controller parameters. Simulation results show that the proposed controller improves the stability of the network and damps the fluctuations at the desired speed.
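
The parameter search described above can be illustrated with a bare-bones genetic algorithm, matching the "genetic algorithm" keyword of this paper. The sketch below searches a 9-dimensional vector standing in for the nine compensator parameters; the quadratic toy objective is a placeholder for the actual fluctuation-damping cost, which in the paper comes from simulating the DFIG system.

```python
import random

def genetic_search(objective, n_params, bounds, pop_size=30, generations=60,
                   mutation_rate=0.2, seed=1):
    """Minimize `objective` over a box with a simple elitist genetic algorithm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)                  # best individuals first
        elite = pop[: pop_size // 2]             # elitism: keep the top half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)          # parents drawn from the elite
            cut = rng.randrange(1, n_params)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mutation_rate:     # occasional Gaussian mutation
                i = rng.randrange(n_params)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Toy stand-in for the controller cost: distance to a known optimum at 0.5
cost = lambda p: sum((x - 0.5) ** 2 for x in p)
best = genetic_search(cost, n_params=9, bounds=(0.0, 1.0))
```

Because the elite is carried over unchanged each generation, the best cost found never worsens; in the paper the same search loop would wrap a full power-system simulation instead of the toy cost.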

Keywords: renewable energy sources, optimization, wind power plant, stability, reactive power compensator, doubly fed induction generator, optimal control, genetic algorithm

Procedia PDF Downloads 68
103 Biogas Production from Kitchen Waste for a Household Sustainability

Authors: Vuiswa Lucia Sethunya, Tonderayi Matambo, Diane Hildebrandt

Abstract:

South Africa's informal settlements produce tonnes of kitchen waste (KW) per year, which is dumped into landfills. These landfill sites are normally located in close proximity to the households of poor communities, and young children from these communities end up playing on them, which may result in health hazards from the methane, carbon dioxide, and sulphur gases produced. To reduce the large amount of organic material deposited in landfills and to provide a cleaner place for the community, especially the children, an energy conversion process, anaerobic digestion of the organic waste to produce biogas, was implemented. In this study, the digestion of various kitchen wastes was investigated in order to understand and develop a system suitable for household use to produce biogas for cooking. Three sets of waste of different nutritional compositions, as found in the waste streams of a household, were digested at mesophilic temperature (35 °C). These sets of KW were co-digested with cow dung (CW) at different ratios to observe the microbial behaviour and the system's stability in a laboratory-scale system. Gas chromatography-flame ionization detector analyses were performed to identify and quantify the organic compounds in liquid samples from the co-digested and mono-digested food waste. Acetic acid, propionic acid, butyric acid, and valeric acid were the fatty acids studied; acetic acid (1.98 g/L), propionic acid (0.75 g/L), and butyric acid (2.16 g/L) were the most prevalent. The results of the organic acid analysis suggest that KW can be an innovative substitute for animal manure in biogas production.
The shorter degradation period, in which the microbes break down the organic compounds to produce fatty acids during the anaerobic digestion of KW, also makes it a better feedstock during periods of high energy demand. The C/N ratio analysis showed that, of the three waste streams, the first stream, containing vegetables (55%), fruits (16%), meat (25%), and pap (4%), yielded the most methane-rich biogas, 317 mL/g of volatile solids (VS), at a C/N ratio of 21.06. Overall, this shows that a household will require a heterogeneous, nutrient-balanced composition of waste to be fed into the digester to achieve the biogas yield needed to sustain its cooking needs.
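
The C/N figure above invites a small worked example. When blending kitchen waste with cow dung at different ratios, the C/N ratio of the mixture is the mass-weighted total carbon divided by the total nitrogen; the sketch below computes it. The carbon and nitrogen fractions are invented for illustration, not values from the study.

```python
# Hedged sketch: C/N ratio of a co-digestion blend from component masses and
# their carbon/nitrogen mass fractions. All fractions below are invented.

def blend_cn_ratio(components):
    """components: iterable of (mass_kg, carbon_fraction, nitrogen_fraction)."""
    carbon = sum(m * c for m, c, n in components)
    nitrogen = sum(m * n for m, c, n in components)
    return carbon / nitrogen

# e.g. 3 kg kitchen waste blended with 1 kg cow dung (hypothetical fractions)
mix = [(3.0, 0.45, 0.015),   # kitchen waste: carbon-rich, nitrogen-poor
       (1.0, 0.30, 0.025)]   # cow dung: lower C/N
ratio = blend_cn_ratio(mix)
```

Adjusting the blend masses moves the mixture toward a target C/N, such as the 21.06 at which the first stream performed best.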

Keywords: anaerobic digestion, biogas, kitchen waste, household

Procedia PDF Downloads 171
102 Analyzing the Contamination of Some Food Crops Due to Mineral Deposits in Ondo State, Nigeria

Authors: Alexander Chinyere Nwankpa, Nneka Ngozi Nwankpa

Abstract:

In Nigeria, the Federal Government is trying to ensure that everyone has access to food that is nutritionally adequate and safe. In the southwest of Nigeria, notably in Ondo State, valuable minerals such as oil and gas, bitumen, kaolin, limestone, talc, columbite, tin, gold, coal, and phosphate are abundant, and some regions of Ondo State are now associated with elevated levels of natural radioactivity as a result of this mineral presence. In this work, baseline radioactivity levels in some of the most important food crops in Ondo State were analyzed, allowing probable radiological health impacts to be predicted. To this effect, maize (Zea mays), yam (Dioscorea alata), and cassava (Manihot esculenta) tubers were collected from farmlands in the State because they meet the majority of the population's nutritional needs. Ondo State was divided into eight zones in order to provide comprehensive coverage of the research region. The maize, yam, and cassava samples were dried at room temperature until they reached a constant weight; they were then pulverized and homogenized, and 250 g portions were packed in 1-liter Marinelli beakers and kept for 28 days to achieve secular equilibrium. The activity concentrations of radium-226 (Ra-226), thorium-232 (Th-232), and potassium-40 (K-40) in the food samples were determined by gamma-ray spectrometry. The hyper-pure germanium detector was first calibrated using standard radioactive sources. The gamma counting, which lasted 36,000 s per sample, was carried out at the Centre for Energy Research and Development, Obafemi Awolowo University, Ile-Ife, Nigeria. The mean activity concentrations of Ra-226, Th-232, and K-40 in yam were 1.91 ± 0.10 Bq/kg, 2.34 ± 0.21 Bq/kg, and 48.84 ± 3.14 Bq/kg, respectively. The radionuclide content of maize gave mean values of 2.83 ± 0.21 Bq/kg for Ra-226, 2.19 ± 0.07 Bq/kg for Th-232, and 41.11 ± 2.16 Bq/kg for K-40.
The mean activity concentrations in cassava were 2.52 ± 0.31 Bq/kg for Ra-226, 1.94 ± 0.21 Bq/kg for Th-232, and 45.12 ± 3.31 Bq/kg for K-40. The average committed effective doses in zones 6-8 were 0.55 µSv/y for the consumption of yam, 0.39 µSv/y for maize, and 0.49 µSv/y for cassava. These values are higher than the annual dose guideline of 0.35 µSv/y for the general public. The values obtained in this work therefore show that there is radiological contamination of some foodstuffs consumed in parts of Ondo State. We recommend that systematic and appropriate methods be established for the measurement of gamma-emitting radionuclides, since these are important contributors to the internal exposure of man through ingestion, inhalation, or wounds on the body.
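
The committed effective dose reported above is, in outline, the sum over radionuclides of activity concentration × annual intake × an ingestion dose coefficient. The sketch below shows this arithmetic using the study's mean yam activities; the dose coefficients and the intake figure are illustrative placeholders, not the values used in the study.

```python
# Hedged sketch: committed effective dose from ingestion,
#   E (Sv/y) = sum_i A_i (Bq/kg) * I (kg/y) * d_i (Sv/Bq).
# The dose coefficients and annual intake below are placeholders.

def committed_effective_dose(activity_bq_kg, intake_kg_y, coeff_sv_bq):
    return sum(activity_bq_kg[n] * intake_kg_y * coeff_sv_bq[n]
               for n in activity_bq_kg)

# Mean activities measured for yam in this study (Bq/kg)
yam_activity = {"Ra-226": 1.91, "Th-232": 2.34, "K-40": 48.84}

# Placeholder ingestion dose coefficients (Sv/Bq) and annual intake (kg/y)
coeff = {"Ra-226": 2.8e-7, "Th-232": 2.3e-7, "K-40": 6.2e-9}
dose_sv = committed_effective_dose(yam_activity, 10.0, coeff)
```

With real per-zone consumption rates and the coefficients the authors adopted, the same sum yields the per-crop doses quoted above.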

Keywords: contamination, environment, radioactivity, radionuclides

Procedia PDF Downloads 76
101 Mobile and Hot Spot Measurement with Optical Particle Counting Based Dust Monitor EDM264

Authors: V. Ziegler, F. Schneider, M. Pesch

Abstract:

With the EDM264, GRIMM offers a solution for mobile short- and long-term measurements in outdoor areas and at production sites, for research as well as for permanent areal observations, on a near-reference-quality basis. The EDM264 features a powerful and robust measuring cell based on the optical particle counting (OPC) principle, with all the advantages that users of GRIMM's portable aerosol spectrometers are accustomed to. The system is embedded in a compact weather-protection housing with all-weather sampling, a heated inlet system, a data logger, and a meteorological sensor. With TSP, PM10, PM4, PM2.5, PM1, and PMcoarse, the EDM264 provides all fine dust fractions in real time, valid for outdoor applications and calculated with the proven GRIMM enviro-algorithm, as well as six additional dust mass fractions (pm10, pm2.5, pm1, inhalable, thoracic, and respirable) for IAQ and workplace measurements. This highly versatile instrument performs real-time monitoring of particle number and particle size, and provides information on the particle surface distribution as well as the dust mass distribution. GRIMM's EDM264 has 31 equidistant size channels, which are PSL-traceable. A high-end data logger enables data acquisition and wireless communication via LTE or WLAN, or wired communication via Ethernet; backup copies of the measurement data are stored directly in the device. The rinsing-air function, which protects the laser and detector in the optical cell, further increases the reliability and long-term stability of the EDM264 under different environmental and climatic conditions. The entire sample volume flow of 1.2 L/min is analyzed in full in the optical cell, which assures excellent counting efficiency at low and high concentrations and complies with the ISO 21501-1 standard for OPCs. With all these features, the EDM264 is a world-leading dust monitor for precise monitoring of particulate matter and particle number concentration. This highly reliable instrument is an indispensable tool for users who need to measure aerosol levels and air quality outdoors, on construction sites, or at production facilities.

Keywords: aerosol research, aerial observation, fence line monitoring, wildfire detection

Procedia PDF Downloads 129
100 A Study of Non-Coplanar Imaging Technique in INER Prototype Tomosynthesis System

Authors: Chia-Yu Lin, Yu-Hsiang Shen, Cing-Ciao Ke, Chia-Hao Chang, Fan-Pin Tseng, Yu-Ching Ni, Sheng-Pin Tseng

Abstract:

Tomosynthesis is an imaging technique that generates a 3D image by scanning over a limited angular range. It provides more depth information than a traditional 2D X-ray single projection, and its radiation dose is lower than that of computed tomography (CT). Because of the limited angular range of the scan, many image properties depend on the scanning direction. A non-coplanar imaging technique was therefore developed to improve image quality relative to traditional tomosynthesis. The purpose of this study was to establish the non-coplanar imaging technique for a tomosynthesis system and to evaluate it using reconstructed images. The INER prototype tomosynthesis system consists of an X-ray tube, a flat-panel detector, and a motion stage, and can move the X-ray tube in multiple directions during acquisition. In this study, we investigated three imaging techniques: 2D X-ray single projection, traditional tomosynthesis, and non-coplanar tomosynthesis. An anthropomorphic chest phantom containing lesions of three sizes (3 mm, 5 mm, and 8 mm in diameter) was used to evaluate image quality. Traditional tomosynthesis acquired 61 projections over a 30-degree angular range in one scanning direction; non-coplanar tomosynthesis acquired 62 projections over a 30-degree angular range in two scanning directions. The 3D images were reconstructed with the maximum-likelihood expectation-maximization (ML-EM) iterative reconstruction algorithm. Our qualitative evaluation assessed artifacts in the reconstructed tomosynthesis images; quantitatively, we calculated the peak-to-valley ratio (PVR), the intensity ratio of the lesion to the background, to evaluate lesion contrast. The qualitative results showed that in the reconstructed image from the non-coplanar scan, the anatomic structures of the chest and the lesions could be identified clearly, and no significant scanning-direction-dependent artifacts were observed.
In the 2D X-ray single projection, anatomic structures overlapped and the lesions could not be discerned. In the traditional tomosynthesis image, anatomic structures and lesions could be identified clearly, but there were many scanning-direction-dependent artifacts. The quantitative PVR results showed no significant differences between non-coplanar and traditional tomosynthesis; the PVRs of the non-coplanar technique were slightly higher than those of the traditional technique for the 5 mm and 8 mm lesions. In non-coplanar tomosynthesis, scanning-direction-dependent artifacts were reduced without decreasing the lesion PVRs, and the reconstructed image was more isotropically uniform than in traditional tomosynthesis. In the future, the scan strategy and scan time will be the challenges for the non-coplanar imaging technique.
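
The PVR figure of merit used above is simple to state: the mean intensity over a lesion region of interest divided by the mean over a background region. A minimal sketch, with invented ROI pixel values rather than phantom data:

```python
# Hedged sketch of the peak-to-valley ratio (PVR): lesion-to-background
# intensity ratio in a reconstructed image. ROI pixel values are invented.

def peak_to_valley_ratio(lesion_roi, background_roi):
    lesion_mean = sum(lesion_roi) / len(lesion_roi)
    background_mean = sum(background_roi) / len(background_roi)
    return lesion_mean / background_mean

# e.g. a lesion ROI against its local background in the reconstruction
pvr = peak_to_valley_ratio([120, 130, 125, 135], [60, 65, 62, 63])
```

Comparing PVRs for the same lesion across the three acquisition modes is how the contrast comparison above was made.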

Keywords: image reconstruction, non-coplanar imaging technique, tomosynthesis, X-ray imaging

Procedia PDF Downloads 344
99 Changes in Chromatographically Assessed Fatty Acid Profile during Technology of Dairy Products

Authors: Lina Lauciene, Vaida Andruleviciute, Ingrida Sinkeviciene, Mindaugas Malakauskas, Loreta Serniene

Abstract:

Dairy product manufacturers are constantly looking for new markets for their products, and in most cases the problem of product compliance with the compositional requirements of foreign markets arises. This is especially true of the composition of milk fat in dairy products. It is well known that many factors, such as the feeding ration, season, cow breed, and stage of lactation, affect the fatty acid composition of milk. There is less evidence, however, on the impact of the technological process on the fatty acid composition of raw milk and the products made from it. In this study, the influence of the technological process on fat composition was determined for 82% fat butter, 15% fat curd, 3.6% fat yogurt, and 2.5% fat UHT milk. Samples were collected at each stage of production, starting with raw milk and ending with the final product, at a Lithuanian milk-processing company. Fatty acid methyl esters were quantified using a GC (Clarus 680, Perkin Elmer) equipped with a flame ionization detector (FID) and a capillary column (SP-2560, 100 m × 0.25 mm i.d. × 0.20 µm). Fatty acid peaks were identified using the Supelco 37 Component FAME Mix, and the concentration of each fatty acid was expressed as a percentage of the total fatty acid amount. In UHT milk production, raw milk, cream, the milk mixture, and UHT milk were compared, but no significant differences were found between these stages. Likewise, analysis of the yogurt production stages (raw milk, pasteurized milk, milk with a starter culture, and yogurt) revealed no significant changes between stages. A slight difference was observed for C4:0, whose percentage was lower (p = 0.053) in the final stage than in the milk with the starter culture. During butter production, the fatty acid composition of the raw cream, buttermilk, and butter did not change significantly; only C14:0 decreased in the butter compared to the buttermilk.
The curd fatty acid analysis showed an increase in C6:0, C8:0, C10:0, C11:0, C12:0, C14:0, and C17:0 at the final stage compared to the raw milk, cream, milk mixture, and whey. Meanwhile, an increase in C18:1n9c (in comparison with the milk mixture and curd) and C18:2n6c (in comparison with the raw milk, milk mixture, and curd) was found in the cream. The results of this study suggest that the technological process did not affect the overall fatty acid composition of UHT milk, yogurt, butter, or curd, but did affect the concentrations of individual fatty acids. In general, all of the fatty acids in the raw milk were carried through into the final product, with only some slightly changing in concentration. Therefore, to ensure an appropriate composition of certain fatty acids in the final product, producers must carefully choose the raw milk. Acknowledgment: This research was funded by the Lithuanian Ministry of Agriculture (No. MT-17-13).
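
Expressing each fatty acid as a percent of the total, as described above, is a simple normalization of the chromatogram peak areas. A sketch with invented peak areas, not chromatograms from the study:

```python
# Hedged sketch: convert GC-FID peak areas to percent of total fatty acids.
# The peak areas below are invented for illustration.

def fame_percentages(peak_areas):
    total = sum(peak_areas.values())
    return {fa: 100.0 * area / total for fa, area in peak_areas.items()}

areas = {"C4:0": 3.1, "C14:0": 9.8, "C16:0": 28.4, "C18:1n9c": 21.0}
percent = fame_percentages(areas)
```

Comparing these percentage profiles stage by stage is how the C4:0 and C14:0 shifts above were detected.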

Keywords: dairy products, fat composition, fatty acids, technological process

Procedia PDF Downloads 151
98 Simulation Based Analysis of Gear Dynamic Behavior in Presence of Multiple Cracks

Authors: Ahmed Saeed, Sadok Sassi, Mohammad Roshun

Abstract:

Gears are important components with a vital role in many rotating machines. One of the common causes of gear failure is tooth fatigue cracking; however, its early detection is still a challenging task. The objective of this study is to develop a numerical model that simulates the effect of tooth cracks on the resulting gear vibrations and consequently permits early fault detection. In contrast to other published works, this work incorporates the possibility of multiple simultaneous cracks with different depths. As cracks significantly alter the stiffness of the tooth, finite element software was used to determine the stiffness variation with respect to the angular position for different combinations of crack orientation and depth. A simplified six-degree-of-freedom nonlinear lumped-parameter model of a one-stage spur gear system is proposed to study the vibration with and without cracks. The stiffness model with cracks was used to update the physical parameters of the second-order equations of motion describing the vibration of the gearbox, and the vibration response was simulated in Simulink/MATLAB. The effect of a single crack at different severity levels was studied thoroughly; the changes in mesh stiffness and vibration response were found to be consistent with previously published works. In addition, various statistical time-domain parameters were considered, and they showed different degrees of sensitivity toward crack depth. Multiple cracks were then introduced at different locations, and the vibration response and statistical parameters were obtained again for a general case of degradation (increasing crack depth, crack number, and crack locations). It was found that although some parameters increase in value as the deterioration level increases, they show almost no change, or even decrease, when the number of cracks increases. Therefore, the use of any single statistical parameter could be misleading if it is not considered in an appropriate way.
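
The statistical time-domain parameters referred to above are typically quantities such as RMS, crest factor, and kurtosis of the vibration signal. A minimal sketch follows; the signal pair is synthetic, standing in for gearbox vibration with and without a crack-induced impact.

```python
import math

# Hedged sketch of common time-domain condition indicators for a vibration
# signal x (a plain list of samples). The test signals below are synthetic.

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def crest_factor(x):
    return max(abs(v) for v in x) / rms(x)

def kurtosis(x):
    """Non-excess kurtosis m4 / m2^2; ~3 for Gaussian, larger for impulsive signals."""
    m = sum(x) / len(x)
    m2 = sum((v - m) ** 2 for v in x) / len(x)
    m4 = sum((v - m) ** 4 for v in x) / len(x)
    return m4 / m2 ** 2

# A smooth tone vs. the same tone with a crack-like impulse added
n = 256
tone = [math.sin(2 * math.pi * 5 * k / n) for k in range(n)]
impulsive = list(tone)
impulsive[40] += 3.0   # a local impact, as a cracked tooth might produce at mesh
```

For this synthetic pair, the impulsive signal has the larger kurtosis and crest factor, which is why such indicators are sensitive to a single growing crack yet, as noted above, can respond non-monotonically when damage is spread over several teeth.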

Keywords: spur gear, cracked tooth, numerical simulation, time-domain parameters

Procedia PDF Downloads 249
97 Study on Safety Management of Deep Foundation Pit Construction Site Based on Building Information Modeling

Authors: Xuewei Li, Jingfeng Yuan, Jianliang Zhou

Abstract:

The 21st century has been called the century of human exploitation of underground space. Owing to the large scale, tight schedules, low safety reserves, and high uncertainty of deep foundation pit engineering, accidents occur frequently, causing huge economic losses and casualties. With the successful application of information technology in the construction industry, building information modeling (BIM) has become a research hotspot in the field of architectural engineering, and the application of BIM and other information and communication technologies (ICTs) to construction safety management is of great significance for improving the level of safety management. This research summarized the accident mechanisms of deep foundation pit engineering through fault tree analysis in order to identify the control factors of deep foundation pit safety management and the deficiencies of traditional construction site safety management. Based on the accident causation mechanisms and the specific processes of deep foundation pit construction, the hazard information of the construction site was identified and a hazard list, including early warning information, was obtained. The system framework was then constructed by analyzing the early-warning information demands and early-warning function demands of a deep foundation pit safety management system. Finally, a BIM-based safety management system for deep foundation pit construction sites was developed by combining a database with Web-BIM technology, realizing real-time positioning of construction site personnel, automatic warnings when personnel enter dangerous areas, and real-time monitoring of foundation pit structural deformation with automatic deformation warnings. This study can improve the current state of safety management on deep foundation pit construction sites. Additionally, active control before the occurrence of accidents and dynamic control throughout the construction process can be realized, so as to prevent and control safety accidents in deep foundation pit construction.
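
The warning functions described above reduce, at their core, to rule checks on streamed personnel positions and deformation readings. A toy sketch follows; the zone coordinates, point IDs, and thresholds are invented, and a real Web-BIM system would tie these checks to model elements and live sensors.

```python
# Hedged sketch of the two automatic-warning rules: personnel entering a
# dangerous area, and pit-wall deformation exceeding a threshold.
# Coordinates, zones, point IDs, and thresholds are invented.

def in_danger_zone(position, zone):
    """Axis-aligned rectangular zone: (x_min, y_min, x_max, y_max) in metres."""
    x, y = position
    x_min, y_min, x_max, y_max = zone
    return x_min <= x <= x_max and y_min <= y <= y_max

def deformation_alerts(readings_mm, threshold_mm):
    """Return the monitoring-point IDs whose displacement exceeds the threshold."""
    return [point for point, value in readings_mm.items() if value > threshold_mm]

pit_edge_zone = (10.0, 0.0, 25.0, 5.0)        # hypothetical restricted strip
worker_alert = in_danger_zone((12.5, 3.0), pit_edge_zone)

alerts = deformation_alerts({"CX-1": 18.0, "CX-2": 31.5, "CX-3": 7.2},
                            threshold_mm=30.0)
```

In the system described above, a positive check would trigger the automatic warning and be visualized against the BIM model.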

Keywords: Web-BIM, safety management, deep foundation pit, construction

Procedia PDF Downloads 133
96 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress), as well as better human social interaction with smart technologies. An FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task, classifying the human expression into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such systems require intensive research to address issues with human diversity, varied and unique human expressions, and the variety of human facial features across age groups; these issues generally limit the ability of an FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms like K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional FER systems suffer from low accuracy because they are inefficient at extracting the significant features of the various human emotions. To increase accuracy, deep learning (DL)-based methods, such as convolutional neural networks (CNN), have been proposed; they can find more complex features in the human face through the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods: specifically, network pruning and quantization are used to lower the computational cost and reduce memory usage, respectively.
To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to apply advanced optimization techniques to the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
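
The two optimization methods named above can be illustrated at the level of a single weight vector: magnitude pruning zeroes the smallest weights, and affine uint8 quantization maps floats to 8-bit integers plus a scale and zero point. This is a framework-free sketch of the idea, not the actual Xception pipeline, and the weight values are invented.

```python
# Hedged sketch of the two network optimizations named in the abstract,
# applied to a plain list of weights rather than a real Xception layer.

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_uint8(weights):
    """Affine quantization: w ~ (q - zero_point) * scale, with q in 0..255."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [0.5, -0.1, 0.8, 0.02, -0.6, 0.3]
pruned = magnitude_prune(weights, 0.5)     # half the weights become zero
q, scale, zp = quantize_uint8(weights)
restored = dequantize(q, scale, zp)        # close to the original values
```

Pruning shrinks the work per layer (zeros can be skipped), while quantization shrinks storage fourfold, trading a small, bounded reconstruction error, which mirrors the cost/accuracy trade-off discussed above.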

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 98
95 The Effects of Organizational Apologies for Some Members’ Annoying Behavior on Other Members’ Appraisal of Their Organization

Authors: Chikae Isobe, Toshihiko Souma, Yoshiya Furukawa

Abstract:

In Japan, an organization is sometimes held responsible, and asked to apologize, for the annoying behavior of its employees, even when that behavior is not related to the organization. Our studies have repeatedly shown that proposing compensatory behavior for such annoying behavior is important for how the organization is evaluated, even when the behavior is unrelated to the organization. In this study, we examined how such an organizational response (an apology) is likely to be evaluated by members of the organization who were not involved in the annoying behavior. Three independent variables were manipulated: organizational emotion (guilt vs. shame), compensation (proposed or not), and the relation between the organization and the annoying behavior (related or not). The effects of organizational identity (high vs. low) were also examined. We conducted an online survey of 240 participants through a crowdsourcing company. Participants were asked to imagine a situation in which an incident, in which some people in their company did not return an important document that they had borrowed privately (vs. at work), became a topic of public discussion, and the company responded. For the analysis, 189 responses (111 males and 78 females, mean age = 40.6) were selected. Two-by-two ANOVAs were conducted on organizational appraisal, perceived organizational responsibility, and related measures. Members' appraisal of the organization was higher when the organization proposed compensatory behavior. In addition, when the annoying behavior was related to work (rather than unrelated), organizational appraisal was higher among those high (rather than low) in organizational identity. The interaction between relatedness and organizational identity was significant: differences due to the relatedness between the organization and the annoying behavior were significant for those with low organizational identity but not for those with high organizational identity. 
When the organization stated that it would not take compensatory action, members were more likely to perceive the organization as responsible for the annoying behavior. However, the interaction results indicated that this tendency was limited to cases in which the annoying behavior was not related to the organization. Furthermore, the organization tended to be perceived as responsible when it stated that it felt shame for annoying behavior unrelated to the organization and would compensate for it. These results indicate that even members of the organization do not consider the organization's compensatory actions to be unjustified. In addition, because those with high organizational identity perceived the organization to be responsible when it showed strong remorse (shame and compensation), they tended to make judgments consistent with the organization's own judgments. This suggests that the Japanese hold the norm that even if an organization is not at fault for a member's annoying behavior, it should respond to it.

Keywords: appraisal for organization, annoying behavior, group shame and guilt, compensation, organizational apologies

Procedia PDF Downloads 99
94 Risk Assessment of Natural Gas Pipelines in Coal Mined Gobs Based on Bow-Tie Model and Cloud Inference

Authors: Xiaobin Liang, Wei Liang, Laibin Zhang, Xiaoyan Guo

Abstract:

Pipelines inevitably pass through coal mined gobs in mining areas, and the stability of these gobs has a great influence on pipeline safety. Extensive literature study and field research showed that few risk assessment methods exist for pipelines in coal mined gobs, and data on gob sites are scarce. The fuzzy comprehensive evaluation method, based on expert opinions, is therefore widely used. However, the subjectivity or limited experience of individual experts may lead to inaccurate evaluation results, so the accuracy of the results needs to be further improved. This paper presents a comprehensive approach to this problem that combines a bow-tie model with cloud inference. The evaluation proceeds as follows. First, a bow-tie model composed of a fault tree and an event tree is established to graphically illustrate the probability and consequence indicators of pipeline failure. Second, experts score the indicators in the form of intervals (interval estimation) to improve the accuracy of the results, and a censored mean algorithm removes the maximum and minimum scores to improve the stability of the results; the golden section method is used to determine the indicator weights and reduce the subjectivity of the weighting. Third, the failure probability and failure consequence scores of the pipeline are converted into three numerical features by cloud inference, which better describes the ambiguity and volatility of the risk level. Finally, cloud drop graphs of failure probability and failure consequences are produced, which intuitively and accurately illustrate the ambiguity and randomness of the results. A case study of a coal mine gob pipeline carrying natural gas is investigated to validate the utility of the proposed method. 
The evaluation results of this case show that the probability of pipeline failure is very low while the consequences of failure are serious, which is consistent with the actual situation.
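The censored mean step (dropping each indicator's extreme scores before averaging) can be sketched in a few lines. The expert intervals below are hypothetical values for illustration, not data from the study.

```python
def censored_mean(scores):
    """Drop the maximum and minimum, then average the rest
    (reduces the influence of outlier experts)."""
    if len(scores) <= 2:
        raise ValueError("need more than two scores")
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

def interval_midpoints(intervals):
    """Experts score an indicator as [low, high]; use each interval's midpoint."""
    return [(lo + hi) / 2 for lo, hi in intervals]

# Hypothetical interval scores from six experts for one failure indicator
intervals = [(6, 8), (5, 7), (7, 9), (2, 4), (6, 9), (9, 10)]
mids = interval_midpoints(intervals)   # [7.0, 6.0, 8.0, 3.0, 7.5, 9.5]
print(censored_mean(mids))             # extremes 3.0 and 9.5 removed -> 7.125
```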

Keywords: bow-tie model, natural gas pipeline, coal mine gob, cloud inference

Procedia PDF Downloads 231
93 Initial Palaeotsunami and Historical Tsunami in the Makran Subduction Zone of the Northwest Indian Ocean

Authors: Mohammad Mokhtari, Mehdi Masoodi, Parvaneh Faridi

Abstract:

The history of tsunami-generating earthquakes along the Makran Subduction Zone (MSZ) provides evidence of the potential tsunami hazard for the whole coastal area. In comparison with other subduction zones in the world, the Makran region of southern Pakistan and southeastern Iran exhibits low seismicity, and it remains one of the least studied areas of the northwest Indian Ocean with regard to tsunamis. We present a review of studies of historical and ongoing palaeotsunami work in the Makran Subduction Zone supported by the IGCP of UNESCO. The historical record presented here includes about nine tsunamis in the Makran Subduction Zone, of which at least seven occurred in the eastern Makran. Tsunamis are not as well documented in the western Makran as in the eastern Makran, where a database of historical events exists. The historically best-documented event is the 1945 earthquake, with a moment magnitude of 8.1, and its tsunami in the western and eastern Makran. There are no details as to whether a tsunami was generated by a seismic event off western Makran before 1945, but several potentially large tsunamigenic events occurred in the MSZ in 325 B.C., 1008, 1483, 1524, 1765, 1851, 1864, and 1897. Here we present new findings from a historical point of view and emphasize that the area needs much greater research attention. A palaeotsunami (geological evidence) study is now being planned, and we present the results of its first phase. From a risk point of view, preliminary results show that within 20 minutes the waves reach the Iranian, Pakistani, and Omani coastal zones as highly destructive tsunami waves capable of severe inundation. It is important to note that the coastal areas of all states surrounding the MSZ are being developed very rapidly, so any event would have a devastating effect on this region. 
Although several papers on modelling, seismology, and tsunami deposits have been published in recent decades, Makran remains a forgotten subduction zone, and more data, such as the main crustal structure, fault locations, and their related parameters, are required.

Keywords: historical tsunami, Indian ocean, makran subduction zone, palaeotsunami

Procedia PDF Downloads 108
92 Single Cell Oil of Oleaginous Fungi from Lebanese Habitats as a Potential Feed Stock for Biodiesel

Authors: M. El-haj, Z. Olama, H. Holail

Abstract:

Single cell oils (SCOs) accumulated by oleaginous fungi have emerged as a potential alternative feedstock for biodiesel production. Five fungal strains were isolated from the Lebanese environment, namely Fusarium oxysporum, Mucor hiemalis, Penicillium citrinum, Aspergillus tamari, and Aspergillus niger, selected among 39 oleaginous strains for their ability to accumulate lipids (lipid content more than 40% on a dry weight basis). Wide variations were recorded in the environmental factors that led to maximum lipid production by the fungi under test, which were cultivated under submerged fermentation on a medium containing glucose as the carbon source. Maximum lipid production was attained within 6-8 days, at pH 6-7, with a seed culture age of 24 to 48 hours, an inoculum level of 4 to 6×10⁷ spores/ml, and a culture volume of 100 ml. Eleven culture conditions were examined for their significance on lipid production using a Plackett-Burman factorial design. Reducing sugars and the nitrogen source were the most significant factors affecting the lipid production process. Maximum lipid yields of 15.62, 14.48, 12.75, 13.68 and 20.41 g/l were recorded for Fusarium oxysporum, Mucor hiemalis, Penicillium citrinum, Aspergillus tamari, and Aspergillus niger, respectively. A verification experiment was carried out to examine model validity and revealed more than 94% validity. The profile of the lipids extracted from each fungal isolate was studied using thin layer chromatography (TLC), indicating the presence of monoacylglycerols, diacylglycerols, free fatty acids, triacylglycerols and sterol esters. The fatty acid profiles were also determined by gas chromatography coupled with a flame ionization detector (GC-FID). 
Data revealed the presence of significant amounts of oleic acid (29-36%), palmitic acid (18-24%), and linoleic acid (26.8-35%), and low amounts of other fatty acids in the extracted fungal oils, indicating that the fatty acid profiles were quite similar to those of conventional vegetable oils. The cost of lipid production could be further reduced by utilizing acid-pretreated lignocellulosic corncob waste, whey, and date molasses as raw materials for the oleaginous fungi. The results showed that microbial lipids from the studied fungi are a potential alternative resource for biodiesel production.
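A 12-run Plackett-Burman design such as the one used to screen the eleven culture conditions can be built from the standard cyclic generator. The sketch below is the generic textbook construction (Plackett and Burman, 1946), not the authors' specific factor assignment; the generator row is the commonly tabulated one for 12 runs.

```python
import numpy as np

# Commonly tabulated 12-run Plackett-Burman generator row (+1 = high, -1 = low)
GEN = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    """Build the 12-run, 11-factor screening design: 11 cyclic shifts
    of the generator row plus a final run with all factors at the low level."""
    rows = [np.roll(GEN, i) for i in range(11)]
    rows.append(np.full(11, -1))
    return np.array(rows)

design = plackett_burman_12()
print(design.shape)               # (12, 11): 12 runs screening 11 factors
print((design == 1).sum(axis=0))  # each factor sits at the high level in 6 of 12 runs
```

Each of the 12 rows specifies the high/low settings for one fermentation run, which is why 11 factors can be screened with only 12 experiments.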

Keywords: agro-industrial waste products, biodiesel, fatty acid, single cell oil, Lebanese environment, oleaginous fungi

Procedia PDF Downloads 386
91 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23

Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov

Abstract:

We have analyzed the time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona from 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations of the Solar X-ray Spectrometer (SOXS) mission in four energy bands: 4-5.5 and 5.5-7.5 keV (SXR), and 15-20 and 20-25 keV (HXR). The application of the Lomb-Scargle periodogram technique to the DXI time series observed by the Silicium detector reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years and 1.54 years in the SXR as well as the HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission than in SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5-7.5 keV band. This might be due to the presence of iron (Fe) and iron/nickel (Fe/Ni) line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation and the 13.5-day period of sunspots from the invisible side of the Sun are found to be stronger in the HXR band than in the SXR band. The Rieger flare-activity periods (150 and 180 days) and the near-Rieger period of 220 days are very strong in HXR emission, as expected. On the other hand, our study reveals a strong 270-day periodicity in SXR emission, which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities found in the present work are clearly observable in both the SXR and HXR channels. 
These long-term periodicities may also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also considered significant for the formation and evolution of life on Earth, and therefore have astrobiological importance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP; the Centre is affiliated to the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper includes material from a pilot project and the research component of the M.Tech program carried out during the Space and Atmospheric Science Course.
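The Lomb-Scargle periodogram named above handles exactly the situation described here: an unevenly sampled daily index. The sketch below applies scipy.signal.lombscargle to a synthetic, irregularly sampled series with an assumed 27-day rotational modulation; the data are synthetic, not the SOXS DXI.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic daily index with a 27-day modulation, sampled on an
# irregular day grid (observational gaps, as in real X-ray monitoring)
rng = np.random.default_rng(0)
days = np.sort(rng.choice(np.arange(0.0, 2000.0), size=1200, replace=False))
signal = np.sin(2 * np.pi * days / 27.0) + 0.5 * rng.standard_normal(days.size)

periods = np.linspace(5.0, 400.0, 4000)     # trial periods in days
ang_freqs = 2 * np.pi / periods             # lombscargle expects angular frequencies
power = lombscargle(days, signal - signal.mean(), ang_freqs, normalize=True)

best_period = periods[np.argmax(power)]
print(round(float(best_period), 1))         # recovers a period close to 27 days
```

Significance levels (such as the 70% confidence level quoted above) are then assessed against the periodogram power expected from noise alone.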

Keywords: corona, flares, solar activity, X-ray emission

Procedia PDF Downloads 327
90 Physics-Informed Neural Network for Predicting Strain Demand in Inelastic Pipes under Ground Movement with Geometric and Soil Resistance Nonlinearities

Authors: Pouya Taraghi, Yong Li, Nader Yoosef-Ghodsi, Muntaseer Kainat, Samer Adeeb

Abstract:

Buried pipelines play a crucial role in the transportation of energy products such as oil, gas, and various chemical fluids, ensuring their efficient and safe distribution. However, these pipelines are often susceptible to ground movements caused by geohazards like landslides, fault movements, lateral spreading, and more. Such ground movements can lead to strain-induced failures in pipes, resulting in leaks or explosions, leading to fires, financial losses, environmental contamination, and even loss of human life. Therefore, it is essential to study how buried pipelines respond when traversing geohazard-prone areas to assess the potential impact of ground movement on pipeline design. As such, this study introduces an approach called the Physics-Informed Neural Network (PINN) to predict the strain demand in inelastic pipes subjected to permanent ground displacement (PGD). This method uses a deep learning framework that does not require training data and makes it feasible to consider more realistic assumptions regarding existing nonlinearities. It leverages the underlying physics described by differential equations to approximate the solution. The study analyzes various scenarios involving different geohazard types, PGD values, and crossing angles, comparing the predictions with results obtained from finite element methods. The findings demonstrate a good agreement between the results of the proposed method and the finite element method, highlighting its potential as a simulation-free, data-free, and meshless alternative. This study paves the way for further advancements, such as the simulation-free reliability assessment of pipes subjected to PGD, as part of ongoing research that leverages the proposed method.
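The core PINN idea above, minimizing a differential-equation residual at collocation points instead of fitting training data, can be illustrated on a toy one-dimensional problem. The sketch below uses a simple polynomial trial function standing in for the neural network and an assumed ODE; it is a conceptual illustration only, not the authors' formulation for pipe strain under ground displacement.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: find u(x) with u'(x) = cos(x) and u(0) = 0 on [0, pi];
# the exact solution is u(x) = sin(x).
xs = np.linspace(0.0, np.pi, 50)  # collocation points

def u(c, x):
    """Trial solution u(x) = c1*x + c2*x^2 + c3*x^3 + c4*x^4.
    The form vanishes at x = 0, so the boundary condition is enforced exactly."""
    return c[0] * x + c[1] * x**2 + c[2] * x**3 + c[3] * x**4

def du_dx(c, x):
    return c[0] + 2 * c[1] * x + 3 * c[2] * x**2 + 4 * c[3] * x**3

def physics_loss(c):
    """Mean squared ODE residual at the collocation points -- the 'physics'
    term a PINN minimizes in place of a data-fitting loss."""
    return np.mean((du_dx(c, xs) - np.cos(xs)) ** 2)

res = minimize(physics_loss, np.zeros(4), method="BFGS")
max_err = np.max(np.abs(u(res.x, xs) - np.sin(xs)))
print(round(float(physics_loss(res.x)), 6), round(float(max_err), 4))
```

In an actual PINN the polynomial is replaced by a deep network and the residual comes from the governing equations of the pipe-soil system, but the loss construction is the same: no simulation data are required.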

Keywords: strain demand, inelastic pipe, permanent ground displacement, machine learning, physics-informed neural network

Procedia PDF Downloads 43
89 Climate Change Effects of Vehicular Carbon Monoxide Emission from Road Transportation in Part of Minna Metropolis, Niger State, Nigeria

Authors: H. M. Liman, Y. M. Suleiman, A. A. David

Abstract:

Poor air quality, often considered one of the greatest environmental threats facing the world today, is caused largely by the emission of carbon monoxide into the atmosphere; carbon monoxide is the principal air pollutant, and one prominent source of its emission is the transportation sector. Little was known about the emission levels of carbon monoxide, the primary pollutant from road transportation, in the study area. Therefore, this study assessed the levels of carbon monoxide emission from road transportation in Minna, Niger State, and a database of the collected carbon monoxide readings was compiled. An MSA Altair gas alert detector was used to take the carbon monoxide readings in parts per million (ppm) for the peak and off-peak periods of vehicular movement at road intersections, and their Global Positioning System (GPS) coordinates were recorded in the Universal Transverse Mercator (UTM) system. Bar charts were plotted using the carbon monoxide emission levels recorded in the field against the scientifically established, internationally accepted safe limit of 8.7 ppm of carbon monoxide in the atmosphere. Further statistical analysis was carried out on the field data using the Statistical Package for the Social Sciences (SPSS) and Microsoft Excel to show the variance of the emission levels of each parameter in the study area. The results established that the emission levels of atmospheric carbon monoxide from road transportation in the study area exceeded the internationally accepted safe limit of 8.7 ppm. In addition, comparing the average CO emission levels of the four observation windows showed that the morning peak had the highest average emission level (24.5 ppm), followed by the evening peak (22.84 ppm) and the morning off-peak (15.33 ppm), with the evening off-peak lowest (12.94 ppm). 
Based on these results, recommendations for mitigating poor air quality by reducing carbon monoxide emissions from transportation include: introducing urban mass transit, which would reduce the number of vehicles on the roads, and hence their emissions, while also offering a cheaper means of transportation for the masses; and encouraging vehicles that use alternative energy sources such as solar, electricity, and biofuel, which emit far less carbon monoxide than conventional diesel- and petrol-fueled vehicles.
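The reported window averages can be compared against the 8.7 ppm safe limit directly; the values below are the means quoted in the abstract.

```python
SAFE_LIMIT_PPM = 8.7  # internationally accepted atmospheric CO safe limit

# Mean CO levels (ppm) per observation window, as reported in the study
mean_levels = {
    "morning peak": 24.5,
    "evening peak": 22.84,
    "morning off-peak": 15.33,
    "evening off-peak": 12.94,
}

for window, ppm in sorted(mean_levels.items(), key=lambda kv: -kv[1]):
    ratio = ppm / SAFE_LIMIT_PPM
    print(f"{window}: {ppm} ppm ({ratio:.1f}x the safe limit)")
```

Every window exceeds the limit, with the morning peak highest, matching the ranking described above.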

Keywords: carbon monoxide, climate change emissions, road transportation, vehicular

Procedia PDF Downloads 358
88 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation, but poorly resolved by observations, e.g., by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, average slip Dave, and slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters which are critical for ground motion simulations, i.e., distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise-time.
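For context, the scaling between rupture area, average slip, and magnitude referred to above follows from the definition of seismic moment, Mo = μ·S·D, and the Hanks-Kanamori moment magnitude. The sketch below assumes a typical crustal rigidity of 3×10¹⁰ Pa and illustrative rupture dimensions, not values from the simulations.

```python
import math

MU = 3.0e10  # assumed crustal rigidity (shear modulus), Pa

def seismic_moment(area_m2, avg_slip_m, mu=MU):
    """Seismic moment Mo = mu * S * D, in N*m."""
    return mu * area_m2 * avg_slip_m

def moment_magnitude(mo_nm):
    """Hanks & Kanamori (1979): Mw = (2/3) * log10(Mo) - 6.07, Mo in N*m."""
    return (2.0 / 3.0) * math.log10(mo_nm) - 6.07

# Illustrative example: an 80 km x 30 km rupture with 3 m of average slip
mo = seismic_moment(80e3 * 30e3, 3.0)
print(round(moment_magnitude(mo), 2))  # ~7.49, i.e. in the Mw > 7.0 range studied
```

Comparing S, Dave, and Sa against Mo across many such models is what the validation step above performs against source-inversion scaling relations.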

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 129
87 Study of Polyphenol Profile and Antioxidant Capacity in Italian Ancient Apple Varieties by Liquid Chromatography

Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana

Abstract:

Safeguarding, studying, and enhancing biodiversity play an important and indispensable role in re-launching agriculture. Ancient local varieties are therefore a precious resource for genetic and health improvement. In order to protect biodiversity through the recovery and valorization of autochthonous varieties, we analyzed 12 samples of four ancient apple cultivars representative of Friuli Venezia Giulia, selected by local farmers working on a project for the recovery of ancient apple cultivars. The aim of this study is to evaluate the polyphenolic profile and antioxidant capacity that characterize the organoleptic and functional qualities of this fruit species, in addition to its beneficial health properties. In particular, for each variety the following compounds were analyzed, both in the skin and in the pulp: gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, ferulic acid, rutin, phlorizin, phloretin and quercetin, to highlight any differences between the edible parts of the apple. The analysis of individual phenolic compounds was performed by High Performance Liquid Chromatography (HPLC) coupled with a diode array UV detector (DAD), the antioxidant capacity was estimated using an in vitro assay based on a free radical scavenging method, and the total phenolic content was determined using the Folin-Ciocalteu method. From the results, it is evident that catechins are the most abundant polyphenols, reaching values of 140-200 μg/g in the pulp and 400-500 μg/g in the skin, with a prevalence of epicatechin. Catechins and phlorizin, a dihydrochalcone typical of apples, are always present in larger quantities in the peel. Total phenolic content was positively correlated with antioxidant activity in apple pulp (r² = 0.850) and peel (r² = 0.820). Comparing the results, differences between the varieties analyzed and between the edible parts (pulp and peel) of the apple were highlighted. 
In particular, apple peel is richer in polyphenolic compounds than pulp, and flavonols are present exclusively in the peel. In conclusion, polyphenols, being antioxidant substances, confirm the benefits of fruit in the diet, especially for the prevention and treatment of degenerative diseases. They also proved to be a good marker for the characterization of different apple cultivars. The importance of protecting biodiversity in agriculture was also highlighted through the exploitation of native products and now-forgotten ancient apple varieties.
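The correlations between total phenolic content and antioxidant activity reported above are ordinary coefficients of determination; a minimal sketch, using hypothetical sample values rather than the study's measurements, is:

```python
import math

def pearson_r2(x, y):
    """Coefficient of determination r^2 between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return (cov / (sx * sy)) ** 2

# Hypothetical total phenolic content vs antioxidant activity for six samples
phenolics = [1.2, 1.8, 2.1, 2.6, 3.0, 3.4]   # e.g. mg GAE/g
antiox    = [35, 48, 55, 61, 72, 78]         # e.g. % radical inhibition
print(round(pearson_r2(phenolics, antiox), 3))  # strong positive correlation
```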

Keywords: apple, biodiversity, polyphenols, antioxidant activity, HPLC-DAD, characterization

Procedia PDF Downloads 121
86 Mineralogy and Fluid Inclusion Study of the Kebbouch South Pb-Zn Deposit, Northwest Tunisia

Authors: Imen Salhi, Salah Bouhlel, Bernd Lehmann

Abstract:

The Kebbouch South Pb-Zn deposit is located 20 km east of El Kef (NW Tunisia) in the southeastern part of the Triassic diapir belt of the Tunisian Atlas. The deposit is composed of sulfide and non-sulfide zinc-lead ore bodies. The aim of this study is to provide petrographic, mineralogical, and fluid inclusion data for the carbonate-hosted Kebbouch South Pb-Zn deposit. Mineralization forms two major ore types: (1) lenticular dolostones and clay breccias in the contact zone between Triassic and Upper Cretaceous strata, consisting of small-scale lenticular, strata- or fault-controlled mineralization mainly composed of marcasite, galena, sphalerite, and pyrite; and (2) stratiform mineralization in the Bahloul Formation (Upper Cenomanian-Lower Turonian) consisting of framboidal and cubic pyrite, disseminated sphalerite, and galena. Non-metalliferous and/or gangue minerals are represented by dolomite, calcite, celestite, and quartz. A fluid inclusion petrography study was carried out on calcite and celestite. Fluid inclusions hosted in celestite are less than 20 µm in size and comprise two types of aqueous inclusions: monophase liquid inclusions (L), abundant and very small (generally less than 15 µm), and liquid-rich two-phase inclusions (L+V), in which the gas phase forms a mobile vapor bubble. Microthermometric analyses of (L+V) fluid inclusions in celestite indicate homogenization temperatures from 121 to 156°C and final ice-melting temperatures in the range of -19 to -9°C, corresponding to salinities of 12 to 21 wt% NaCl eq. (L+V) fluid inclusions in calcite are frequently localized along growth zones; their homogenization temperatures range from 96 to 164°C, with final ice-melting temperatures between -16 and -7°C, corresponding to salinities of 9 to 19 wt% NaCl eq. 
According to the mineralogical and fluid inclusion studies, mineralization in the Kebbouch South Pb-Zn deposit formed between 96 and 164°C at salinities ranging from 9 to 21 wt% NaCl eq. A contribution of basinal brines to the ore formation of the Kebbouch South Pb-Zn deposit is likely. The deposit belongs to the family of MVT deposits associated with salt diapir environments.
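Salinities of this kind are conventionally derived from the final ice-melting temperature; a common choice is the Bodnar (1993) equation for the H₂O-NaCl system, sketched below. The abstract does not state which equation the authors used, so this is an assumption for illustration.

```python
def salinity_wt_pct_nacl(tm_ice_c):
    """Bodnar (1993): salinity (wt% NaCl eq.) from the final ice-melting
    temperature Tm(ice), valid between about -21.2 and 0 degrees C."""
    theta = -tm_ice_c  # freezing-point depression, positive
    return 1.78 * theta - 0.0442 * theta**2 + 0.000557 * theta**3

# Celestite-hosted inclusions in the study melt between -19 and -9 C
print(round(salinity_wt_pct_nacl(-19.0), 1))  # ~21.7 wt% NaCl eq.
print(round(salinity_wt_pct_nacl(-9.0), 1))   # ~12.8 wt% NaCl eq.
```

These values bracket the 12-21 wt% NaCl eq. range reported for the celestite-hosted inclusions.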

Keywords: fluid inclusion, Kebbouch South, mineralogy, MVT deposits, Pb-Zn

Procedia PDF Downloads 231
85 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. In this way it is possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, would enable working with some traceability. At this point, we believe that the study of surfaces can provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate is given as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of both x and y, denoted z(x, y). Among other instruments, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because optical measurement is non-destructive. 
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that only the focused part of the surface reaches the detector. By taking pictures at different Z levels of focus, specialized software interpolates between the different planes and reconstructs the surface geometry into a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.
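The Ra parameter mentioned above is the arithmetic mean deviation of the profile heights from their mean line; a minimal sketch on a hypothetical sampled profile:

```python
import numpy as np

def roughness_ra(z):
    """Arithmetic mean roughness Ra: mean absolute deviation of the
    profile heights from their mean line."""
    z = np.asarray(z, dtype=float)
    return float(np.mean(np.abs(z - z.mean())))

# Hypothetical z(x) heights (micrometres) sampled along one scan line
z_profile = [0.12, -0.05, 0.30, -0.22, 0.08, -0.15, 0.25, -0.33]
print(round(roughness_ra(z_profile), 3))  # 0.188 um for this profile
```

Tracing Ra to a calibrated roughness standard then transfers traceability from the standard to the instrument's vertical scale.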

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 128
84 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as A 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born’s probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion is that the actual experimental results were not of a single electron; rather, they were groups of reflected X-ray photons. The vertical waveform used by Schrödinger in his particle-in-a-box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart: separate panels would represent different energy levels, and the proposed bar chart would be populated by reflected photons. Expanding on these basic ideas: part of Schrödinger's particle-in-a-box theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate rather than instantaneously. However, there may be one notable exception. The Uncertainty Principle supposedly follows from the theory; may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that such waveforms did not exist. Complex waveforms representing a particle are usually assumed to be continuous. The actual observations were of X-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born's perspective, doing similar work in the years in question, 1926-7, he would also have considered a single electron, leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events. 
Born’s interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g., collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moved in the observer’s direction after the electron had moved away. Astronomers may say that they 'look out into the universe', but this logic is opposed to the views of Newton, Hooke, and many observers such as Romer, for whom light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications for the complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 183
83 Environmental Photodegradation of Tralkoxydim Herbicide and Its Formulation in Natural Waters

Authors: María José Patiño-Ropero, Manuel Alcamí, Al Mokhtar Lamsabhi, José Luis Alonso-Prados, Pilar Sandín-España

Abstract:

Tralkoxydim, commercialized under different trade names, among them Splendor® (25% active ingredient), is a cyclohexanedione herbicide used in wheat and barley fields for the post-emergence control of annual winter grass weeds. Due to their physicochemical properties, herbicides of this family are known to be susceptible to reaching natural waters, where different degradation pathways can take place. Photolysis represents one of the main routes of abiotic degradation of these herbicides in water. This transformation pathway can lead to the formation of unknown by-products, which could be more toxic and/or persistent than the active substances themselves. Therefore, there is a growing need to understand the science behind such dissipation routes, which is key to estimating the persistence of these compounds and ensuring an accurate assessment of their environmental behavior. However, to the best of our knowledge, no information regarding the photochemical behavior of tralkoxydim under natural conditions in an aqueous environment has been available in the literature until now. This work has focused on investigating the photochemical behavior of the tralkoxydim herbicide and its commercial formulation (Splendor®) in ultrapure, river and spring water using simulated solar radiation. In addition, the evolution of the degradation products detected in the samples has been studied. A reversed-phase HPLC-DAD (high-performance liquid chromatography with diode array detector) method was developed to evaluate the kinetic evolution and to obtain the half-lives. In both cases, the degradation of the active ingredient tralkoxydim was slower in natural waters than in ultrapure water, following the order river water < spring water < ultrapure water, with first-order half-life values of 5.1 h, 2.7 h and 1.1 h, respectively.
These findings indicate that the photolytic behavior of the active ingredient is largely affected by the water composition, whose components can exert an inner filter effect. In addition, the tralkoxydim herbicide and its formulation showed the same half-lives for each type of water studied, showing that the adjuvants present in the commercial formulation have no effect on the degradation rate of the active ingredient. HPLC-MS (high-performance liquid chromatography with mass spectrometry) experiments were performed to study the by-products deriving from the photodegradation of tralkoxydim in water. Accordingly, three compounds were tentatively identified. These results provide a better understanding of the behavior of the tralkoxydim herbicide in natural waters and its fate in the environment.
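The first-order kinetics behind these half-lives can be sketched as follows; this is a minimal illustration of the standard relations k = ln 2 / t₁/₂ and C/C₀ = exp(−kt) using the half-lives reported above, not code from the study itself.

```python
import math

def rate_constant_per_h(half_life_h):
    """First-order photolysis rate constant: k = ln(2) / t_1/2 (1/h)."""
    return math.log(2) / half_life_h

def remaining_fraction(half_life_h, elapsed_h):
    """Fraction of herbicide remaining after elapsed_h hours: C/C0 = exp(-k t)."""
    return math.exp(-rate_constant_per_h(half_life_h) * elapsed_h)

# Half-lives reported in the abstract (hours)
half_lives = {"river": 5.1, "spring": 2.7, "ultrapure": 1.1}
rate_constants = {water: rate_constant_per_h(t) for water, t in half_lives.items()}
```

Consistent with the reported ordering, the rate constant is largest in ultrapure water, so after any fixed exposure time less tralkoxydim remains there than in river or spring water.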

Keywords: by-products, natural waters, photodegradation, tralkoxydim herbicide

Procedia PDF Downloads 68
82 An Analytical Formulation of Pure Shear Boundary Condition for Assessing the Response of Some Typical Sites in Mumbai

Authors: Raj Banerjee, Aniruddha Sengupta

Abstract:

An earthquake event, associated with a typical fault rupture, initiates at the source, propagates through a rock or soil medium and finally daylights at a surface, which might host a populous city. The detrimental effects of an earthquake are often quantified in terms of the responses of superstructures resting on the soil. Hence, there is a need to estimate the amplification of the bedrock motions due to the influence of local site conditions. In the present study, field borehole log data of the Mangalwadi and Walkeswar sites in Mumbai city are considered. The data consist of the variation of SPT N-value with the depth of soil. A correlation between shear wave velocity (Vₛ) and SPT N-value for various soil profiles of Mumbai city has been developed using various existing correlations and is used further for site response analysis. A MATLAB program is developed for the ground response analysis, performing two-dimensional linear and equivalent linear analyses for some typical Mumbai soil sites using a pure shear (multi-point constraint) boundary condition. The model is validated in the linear elastic and equivalent linear domains using the popular commercial program DEEPSOIL. Three actual earthquake motions are selected based on their frequency contents and durations and scaled to a PGA of 0.16g for the present ground response analyses. The results are presented in terms of peak acceleration time history with depth, peak shear strain time history with depth, Fourier amplitude versus frequency, the response spectrum at the surface, etc. The peak ground acceleration amplification factors are found to be about 2.374, 3.239 and 2.4245 for the Mangalwadi site and 3.42, 3.39 and 3.83 for the Walkeswar site using the 1979 Imperial Valley, 1989 Loma Prieta (Gilroy) and 1987 Whittier Narrows earthquake records, respectively.
In the absence of any site-specific response spectrum for the chosen sites in Mumbai, the generated spectrum at the surface may be utilized for the design of any superstructure at these locations.
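The two bookkeeping steps described above, scaling an input motion to the target PGA of 0.16g and forming the amplification factor as the ratio of peak surface to peak bedrock acceleration, can be sketched as below. This is a simplified illustration, not the authors' MATLAB program, and the acceleration traces are hypothetical.

```python
def scale_to_pga(acc_g, target_pga_g=0.16):
    """Linearly scale an acceleration time history (in g) to a target PGA."""
    peak = max(abs(a) for a in acc_g)
    return [a * target_pga_g / peak for a in acc_g]

def amplification_factor(surface_acc_g, bedrock_acc_g):
    """Ratio of peak surface acceleration to peak input (bedrock) acceleration."""
    return max(abs(a) for a in surface_acc_g) / max(abs(a) for a in bedrock_acc_g)

# Hypothetical traces (values in g), for illustration only
bedrock = scale_to_pga([0.02, -0.05, 0.04, -0.01])
surface = [2.374 * a for a in bedrock]  # a site amplifying by ~2.374
factor = amplification_factor(surface, bedrock)
```

In the actual study, the surface trace would come from the linear or equivalent linear analysis of the soil column, not from a constant multiplier as in this toy example.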

Keywords: deepsoil, ground response analysis, multi point constraint, response spectrum

Procedia PDF Downloads 164
81 Survey of Indoor Radon/Thoron Concentrations in High Lung Cancer Incidence Area in India

Authors: Zoliana Bawitlung, P. C. Rohmingliana, L. Z. Chhangte, Remlal Siama, Hming Chungnunga, Vanram Lawma, L. Hnamte, B. K. Sahoo, B. K. Sapra, J. Malsawma

Abstract:

Mizoram state has the highest lung cancer incidence rate in India due to its high level of tobacco consumption, supplemented by local food habits. While smoking is mainly responsible for this incidence, the effect of inhaling indoor radon gas cannot be discarded, as the hazardous nature of this radioactive gas and its progeny for human populations has been well established worldwide; the radiation damage they cause to bronchial cells makes them the second leading cause of lung cancer after smoking. It is also known that the effect of radiation, however small the concentration may be, cannot be neglected, as it can bring about a risk of cancer incidence. Hence, estimation of the indoor radon concentration is important to give a useful reference against radiation effects, to establish safety measures, and to create a baseline for further case-control studies. The indoor radon/thoron concentrations in Mizoram were measured in 41 dwellings selected on the basis of spot gamma background radiation and the construction type of the houses during 2015-2016. The dwellings were monitored for one year, in 4-month cycles to capture seasonal variations, for the indoor concentration of radon gas and its progeny, outdoor gamma dose, and indoor gamma dose, respectively. A time-integrated method using Solid State Nuclear Track Detector (SSNTD)-based single-entry pin-hole dosimeters was used for the measurement of the indoor radon/thoron concentration. Gamma dose measurements, indoor as well as outdoor, were carried out using Geiger-Muller survey meters. The seasonal variation of the indoor radon/thoron concentration was monitored. The results show that the annual average radon concentration varied from 54.07 to 144.72 Bq/m³ with an average of 90.20 Bq/m³, and the annual average thoron concentration varied from 17.39 to 54.19 Bq/m³ with an average of 35.91 Bq/m³, both below the permissible limit.
The spot survey of the gamma background radiation level shows values between 9 and 24 µR/h inside and outside the dwellings throughout Mizoram, all within acceptable limits. From the above results, there is no direct indication that radon/thoron is responsible for the high lung cancer incidence in the area. In order to find epidemiological evidence linking natural radiation to the high cancer incidence in the area, one would need to conduct a case-control study, which is beyond the scope of this work. However, the measurement data derived here will provide a baseline for further studies.
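The annual averages quoted above are simply the means of the 4-month-cycle readings per dwelling. The sketch below shows this bookkeeping with hypothetical cycle readings chosen so that they reproduce the reported study-wide averages; the WHO reference level of 300 Bq/m³ used as the comparison bound is an illustrative choice, not a value taken from the abstract.

```python
def annual_average(cycle_readings_bq_m3):
    """Mean of the 4-month-cycle measurements for one dwelling (Bq/m^3)."""
    return sum(cycle_readings_bq_m3) / len(cycle_readings_bq_m3)

def within_reference(annual_avg_bq_m3, reference_bq_m3=300.0):
    """Compare against the WHO upper reference level for indoor radon
    (300 Bq/m^3), used here only as an illustrative bound."""
    return annual_avg_bq_m3 <= reference_bq_m3

# Hypothetical seasonal readings, chosen to match the reported averages
radon_cycles = [70.0, 95.0, 105.6]    # -> 90.2 Bq/m^3
thoron_cycles = [30.0, 36.0, 41.73]   # -> 35.91 Bq/m^3

radon_annual = annual_average(radon_cycles)
thoron_annual = annual_average(thoron_cycles)
```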

Keywords: background gamma radiation, indoor radon/thoron, lung cancer, seasonal variation

Procedia PDF Downloads 122
80 Comprehensive Multilevel Practical Condition Monitoring Guidelines for Power Cables in Industries: Case Study of Mobarakeh Steel Company in Iran

Authors: S. Mani, M. Kafil, E. Asadi

Abstract:

Condition Monitoring (CM) of electrical equipment has gained remarkable importance during recent years due to huge production losses, substantial imposed costs, and increased vulnerability, risk and uncertainty levels. Power cables feed numerous pieces of electrical equipment such as transformers, motors, and electric furnaces; thus their condition assessment is of great importance. This paper investigates electrical, structural and environmental failure sources, all of which influence cables' performance and limit their uptime, and provides a comprehensive framework of practical CM guidelines for the maintenance of cables in industries. The multilevel CM framework presented in this study covers the performance-indicative features of power cables, with a focus on both online and offline diagnosis and test scenarios, and addresses short-term and long-term threats to the operation and longevity of power cables. The study, after concisely reviewing the concept of CM, thoroughly investigates five major areas: power quality; the insulation quality features of partial discharges, tan delta and voltage withstand capability; sheath faults; shield currents; and the environmental features of temperature and humidity. It elaborates the interconnections and mutual impacts between those areas using mathematical formulation and practical guidelines. Detection, location, and severity-identification methods for every threat or fault source are also elaborated. Finally, the comprehensive, practical guidelines developed in the study are applied to the specific case of the Electric Arc Furnace (EAF) feeder MV power cables in Mobarakeh Steel Company (MSC) in Iran, the largest steel company in the MENA region. The specific technical and industrial characteristics and limitations of a harsh industrial environment like the MSC EAF feeder cable tunnels are imposed on the presented framework, making the suggested package more practical and tangible.
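Of the insulation quality features named above, tan delta (the dissipation factor) is the most compact to state: it is the ratio of the resistive (loss) current to the capacitive (charging) current drawn through the insulation. The sketch below shows this definition only; it is a generic illustration, not a formula or tool from the paper.

```python
import math

def tan_delta(loss_current_a, charging_current_a):
    """Dissipation factor: ratio of resistive (loss) current to capacitive
    (charging) current through the cable insulation. Higher values indicate
    degraded, lossier insulation."""
    return loss_current_a / charging_current_a

def loss_angle_degrees(td):
    """Loss angle delta (degrees) corresponding to a given tan(delta)."""
    return math.degrees(math.atan(td))

# Illustrative currents for a healthy cable: 1 uA loss vs 1 mA charging
td_healthy = tan_delta(1e-6, 1e-3)
```

A rising trend of tan delta over successive offline tests, rather than any single absolute value, is what typically triggers maintenance action.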

Keywords: condition monitoring, diagnostics, insulation, maintenance, partial discharge, power cables, power quality

Procedia PDF Downloads 212
79 Coaches Attitudes, Efficacy and Proposed Behaviors towards Athletes with Hidden Disabilities: A Review of Recent Survey Research

Authors: Robbi Beyer, Tiffanye Vargas, Margaret Flores

Abstract:

Within the United States, youths with hidden disabilities (specific learning disabilities, attention deficit hyperactivity disorder, emotional behavioral disorders, mild intellectual disabilities and speech/language disorders) are often part of the kindergarten through twelfth grade school population. Because individuals with hidden disabilities have no apparent physical disability, their learning difficulties may be overlooked, and these youths may be mistakenly labeled as unmotivated or defiant because they do not understand or follow directions, or cannot maintain enough attention to remember and perform. These behaviors are considered especially challenging for youth sport coaches to manage, and coaches often find it difficult to select and deliver effective accommodations for these athletes. The underlying deficits can be remediated and compensated for through the use of research-validated strategies and instructional methods. However, while these techniques are commonly included in teacher preparation, they rarely, if ever, are included in coaching preparation. Therefore, the purpose of this presentation is to summarize consecutive research studies that examined coaching education within the United States for youth athletes with hidden disabilities. Each study utilized a questionnaire format to collect data from coaches on attitudes, efficacy and solutions for addressing challenging behaviors. Results indicated that although the majority of coaches’ attitudes were positive and they perceived themselves as confident in working with athletes who have hidden disabilities, there were significant differences in their understanding of appropriate teaching strategies and techniques for this population. For example, when asked to explain, from a videotaped situation, why an athlete was not performing correctly, coaches often found the athlete to be at fault, as opposed to considering the possibility of faulty directions or the need for accommodations in teaching/coaching style.
When considering coaches’ preparation, 83% of participants declared they were inadequately prepared to coach athletes with hidden disabilities and 92% strongly supported improved preparation for coaches. The comprehensive examination of coaches’ perceptions and efficacy in working with youth athletes with hidden disabilities has provided valuable insight and highlights the need for continued research in this area.

Keywords: health, hidden disabilities, physical activity, youth recreational sports

Procedia PDF Downloads 328
78 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multiple project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared to a dark current of 1 µA for a conventional PD with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V, and the responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. The disk geometry strongly and circularly confines light into an ultra-small volume in the form of whispering gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain material both spatially and spectrally. Compared to microrings, the micro-disk removes the inner boundary to enable even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low power (≈ fJ) athermal Si modulator has been demonstrated at a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on a microdisk with a radius of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser or other passive optical applications, a microdisk must be designed very carefully to excite only the fundamental mode, since a microdisk generally supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue, because the local light absorption is mode-insensitive: the light power carried by all modes is expected to be converted into photo-current.
Another benefit of using a microdisk is that the power circulation inside it avoids the need to introduce a reflector. A complete simulation model, with all involved materials taken into account, is established to study the promise of microdisk structures for photodetectors using the finite difference time domain (FDTD) method. Based on the current preliminary data, directions to further improve the device performance are also discussed.
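The figures of merit quoted above relate in a standard way: responsivity is photocurrent per unit optical power, and the external quantum efficiency follows from it via η = R·h·c/(q·λ). The sketch below shows this arithmetic; the 1.55 µm wavelength is an assumption typical for Ge detectors in Si photonics, not a value stated in the abstract.

```python
PLANCK_H = 6.62607015e-34   # Planck constant, J*s
LIGHT_C = 2.99792458e8      # speed of light, m/s
CHARGE_Q = 1.602176634e-19  # elementary charge, C

def responsivity(photocurrent_a, optical_power_w):
    """Responsivity R = I_photo / P_optical, in A/W."""
    return photocurrent_a / optical_power_w

def external_quantum_efficiency(resp_a_per_w, wavelength_m):
    """eta = R * h * c / (q * lambda): electrons collected per incident photon."""
    return resp_a_per_w * PLANCK_H * LIGHT_C / (CHARGE_Q * wavelength_m)

# The reported 0.6 A/W, evaluated at an assumed wavelength of 1.55 um
eta = external_quantum_efficiency(0.6, 1.55e-6)
```

At this assumed wavelength the reported 0.6 A/W corresponds to an external quantum efficiency of roughly 48%.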

Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors

Procedia PDF Downloads 386