Search results for: automatic calibration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1288

208 An Analysis of the Representation of the Translator and Translation Process into Brazilian Social Networking Groups

Authors: Érica Lima

Abstract:

In the digital era, in which we face an avalanche of information, it is nothing new that the Internet has brought new modes of communication and knowledge access. Characterized by a multiplicity of discourses, opinions, beliefs, and cultures, the web is a space of political-ideological dimensions where people (who often do not know each other) interact, create representations, deconstruct stereotypes, and redefine identities. Today, translators need to be able to deal with digital spaces ranging from specific software to social media, which inevitably impact their professional lives. One of the most impactful ways of being seen in cyberspace is participation in social networking groups. In addition to their ability to disseminate information among participants, social networking groups allow significant personal and social exposure. Such exposure comes from the visibility each participant achieves not only on his or her personal profile page, but also in each comment or post made in the groups. The objective of this paper is to study the representations of translators and the translation process on the Internet, more specifically in publications in two highly influential Brazilian groups on Facebook: "Translators/Interpreters" and "Translators, Interpreters and Curious". These groups represent the changes the network has brought to the profession, including the way translators are seen and see themselves. The analyzed posts allowed a reading of what common sense seems to think about the translator as opposed to what translators seem to think about themselves as a professional class. The results of the analysis lead to the conclusion that these two positions are antagonistic and sometimes represent a conflict of interests: on the one hand, society in general considers the translator's work easy and therefore not deserving of good remuneration; on the other hand, translators know how complex the translation process is and how much it takes to be a good professional. The results also reveal that social networking sites such as Facebook provide more visibility, but that a more active role from the translator is needed to achieve greater appreciation of the profession and more recognition of the translator's role, especially in the face of the increasing development of automatic translation programs.

Keywords: Facebook, social representation, translation, translator

Procedia PDF Downloads 128
207 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures

Authors: Mariem Saied, Jens Gustedt, Gilles Muller

Abstract:

We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures, with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
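
For readers unfamiliar with the two traversals Dido now handles, the following minimal Python/NumPy sketch (our illustration, not Dido's generated C/ORWL code) contrasts a Jacobi update, which reads only the previous iteration, with a Gauss-Seidel update, which reuses values already computed in the current sweep:

```python
import numpy as np

def jacobi_step(grid):
    """One Jacobi sweep of a 2D 5-point stencil: every interior cell is
    replaced by the average of its four neighbors from the previous step."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

def gauss_seidel_step(grid):
    """One Gauss-Seidel sweep: updates happen in place, so later cells
    already see the new values of earlier cells in the same traversal."""
    for i in range(1, grid.shape[0] - 1):
        for j in range(1, grid.shape[1] - 1):
            grid[i, j] = 0.25 * (grid[i - 1, j] + grid[i + 1, j] +
                                 grid[i, j - 1] + grid[i, j + 1])
    return grid

grid = np.zeros((64, 64))
grid[0, :] = 100.0            # fixed boundary condition
for _ in range(200):
    grid = jacobi_step(grid)
```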

Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments

Procedia PDF Downloads 105
206 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading

Authors: Jerome Joshi

Abstract:

The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, the features of this model promote its use in financial markets, most prominently in high frequency trading. A trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the action of the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. Its participants seek the advantage of complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, a trader must be at the top of the order book quite frequently by executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the reason for the outbreak of the 'Hot-Potato Effect', which in turn demands a better and more efficient model. The characteristics of the model should be such that it is flexible and has diverse applications. Therefore, a model which has been applied in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation, so that it can be further extended and adapted for future research, and it should be equipped with the tools needed for use in the field of finance. The Stochastic Pi Calculus model is an ideal fit for financial applications, owing to its record in the field of biology. It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided its application is further extended. This model focuses on solving the problem which led to the 'Flash Crash', namely the 'Hot-Potato Effect'. The model consists of small sub-systems which can be integrated to form a large system, and it is designed in such a way that the behavior of 'noise traders' is treated as a random process, or noise, in the system. While modelling, to get a better understanding of the problem, a broader picture is taken into consideration, covering the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the 'Flash Crash', the 'Hot-Potato Effect', the evaluation of orders, and time delay in further detail. For the future, there is a need to focus on the calibration of the modules so that they interact perfectly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
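
Stochastic pi-calculus models are commonly executed with a Gillespie-style stochastic simulation, in which concurrent processes race on communication channels whose rates yield exponentially distributed waiting times. The sketch below illustrates that execution scheme with invented channels and rates for an exchange receiving orders from traders; it is an illustration of the simulation semantics, not the paper's model:

```python
import random

# Hypothetical channels: traders submitting orders, the exchange matching
# them, and background noise trades. Rates are illustrative only.
rates = {"submit_order": 5.0, "match_order": 3.0, "noise_trade": 8.0}
state = {"pending": 0, "matched": 0, "noise": 0}

t, t_end = 0.0, 1.0
while t < t_end:
    # Propensity of each channel; matching requires a pending order.
    props = {
        "submit_order": rates["submit_order"],
        "noise_trade": rates["noise_trade"],
        "match_order": rates["match_order"] if state["pending"] > 0 else 0.0,
    }
    total = sum(props.values())
    t += random.expovariate(total)           # exponential waiting time
    r, acc = random.uniform(0, total), 0.0
    for name, p in props.items():            # pick a channel proportional to its rate
        acc += p
        if r <= acc:
            break
    if name == "submit_order":
        state["pending"] += 1
    elif name == "match_order":
        state["pending"] -= 1
        state["matched"] += 1
    else:
        state["noise"] += 1

print(state)
```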

Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus

Procedia PDF Downloads 57
205 Comparative Study of sLASER and PRESS Techniques in Magnetic Resonance Spectroscopy of Normal Brain

Authors: Shin Ku Kim, Yun Ah Oh, Eun Hee Seo, Chang Min Dae, Yun Jung Bae

Abstract:

Objectives: The commonly used PRESS technique in magnetic resonance spectroscopy (MRS) has the limitation of incomplete water suppression. The recently developed sLASER technique is known for its improved effectiveness in suppressing the water signal; however, no prior study has compared the two sequences in the normal human brain. In this study, we first aimed to compare the performance of both techniques in brain MRS. Materials and methods: From January 2023 to July 2023, thirty healthy participants (mean age 38 years, 17 male, 13 female) without underlying neurological diseases were enrolled in this study. All participants underwent single-voxel MRS using both the PRESS and sLASER techniques on 3T MRI. Two regions of interest were allocated, in the left medial thalamus and the left parietal white matter (WM), by a single reader. SpectroView Analysis (SW5, Philips, Netherlands) provided automatic measurements, including the signal-to-noise ratio (SNR) and peak height of water, the N-acetylaspartate (NAA)-water, choline (Cho)-water, and creatine (Cr)-water ratios, and the NAA-Cr and Cho-Cr ratios. The measurements from the PRESS and sLASER techniques were compared using paired t-tests and Bland-Altman methods, and variability was assessed using coefficients of variation (CV). Results: The SNR and peak height of water were significantly lower with sLASER than with PRESS (left medial thalamus: sLASER SNR/peak height 2092±475/328±85 vs. PRESS 2811±549/440±105; left parietal WM: 5422±1016/872±196 vs. 7152±1305/1150±278; all P<0.001). Accordingly, the NAA-water/Cho-water/Cr-water ratios and the NAA-Cr/Cho-Cr ratios were significantly higher with sLASER than with PRESS (all P<0.001). The variabilities of the NAA-water/Cho-water/Cr-water ratios and the Cho-Cr ratio in the left medial thalamus were lower with sLASER than with PRESS (CV, sLASER vs. PRESS: 19.9 vs. 58.1, 19.8 vs. 54.7, 20.5 vs. 43.9, and 11.5 vs. 16.2). Conclusion: The sLASER technique demonstrated enhanced background water suppression, resulting in increased signals and reduced variability in brain metabolite measurements with MRS. Therefore, sLASER could offer a more precise and stable method for identifying brain metabolites.
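
As a hedged illustration of the statistics reported above, the following sketch computes a paired t-test and coefficients of variation with SciPy/NumPy; the numbers are placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical paired SNR measurements (one pair per participant);
# the values are placeholders, not the study's data.
snr_press  = np.array([2800, 2750, 2900, 2650, 2820, 2780])
snr_slaser = np.array([2100, 2050, 2200, 1980, 2120, 2080])

t_stat, p_value = stats.ttest_rel(snr_press, snr_slaser)  # paired t-test

def cv_percent(x):
    """Coefficient of variation in percent: (SD / mean) * 100."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"CV PRESS = {cv_percent(snr_press):.1f}%, "
      f"CV sLASER = {cv_percent(snr_slaser):.1f}%")
```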

Keywords: Magnetic resonance spectroscopy, Brain, sLASER, PRESS

Procedia PDF Downloads 23
204 Queuing Analysis and Optimization of Public Vehicle Transport Stations: A Case of South West Ethiopia Region Vehicle Stations

Authors: Mequanint Birhan

Abstract:

Modern urban environments are dynamically growing settings in which, notwithstanding shared goals, several mutually conflicting interests frequently collide. Waiting lines and queues are common occurrences at public vehicle stations, and they have a big impact on a city's socioeconomic standing. The result is extremely long lines for both vehicles and people on incongruous routes, service coagulation, customer murmuring, unhappiness, complaints, and the search for alternatives that are sometimes illegal. A root cause is corruption, which leads to traffic jams, stopping, and the packing of vehicles beyond their safe carrying capacity, violating the human rights and freedoms of passengers. This study focused on optimizing the time passengers had to wait in public vehicle stations. This applied research employed multiple data-gathering sources and mixed approaches; 166 key informants at the transport stations were sampled using the Slovin sampling formula. The length of time vehicles, including the drivers and auxiliary drivers ('Weyala'), had to wait was also studied. To maximize the service level at vehicle stations, a queuing model, 'Menaharya', was subsequently devised. Time, cost, and quality encompass performance, scope, and suitability for the intended purposes. The minimal response times for passengers and vehicles queuing to reach their final destinations at the stations of the Tepi, Mizan, and Bonga towns were determined. A new bus station system was modeled and simulated with the Arena simulation software for the chosen study area, showing an 84% overall improvement: cost was reduced by 56.25%, waiting time fell from 4 hr to 1.5 hr, and quality, safety, and design-load performance calculations were employed. Stakeholders are asked to put the model into practice and monitor the results obtained.
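
As background to the queuing analysis, the sketch below computes standard steady-state metrics of an M/M/c station queue via the Erlang-C formula; the arrival rate, service rate, and number of loading bays are illustrative assumptions, not figures from the study:

```python
import math

def mmc_metrics(lam, mu, c):
    """Steady-state metrics of an M/M/c queue (Erlang-C):
    lam = arrival rate, mu = service rate per server, c = servers."""
    rho = lam / (c * mu)                       # utilization, must be < 1
    a = lam / mu                               # offered load in Erlangs
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = (a**c / (math.factorial(c) * (1 - rho))) * p0  # P(wait > 0)
    wq = erlang_c / (c * mu - lam)             # mean wait in queue
    lq = lam * wq                              # mean queue length (Little's law)
    return rho, wq, lq

# Hypothetical station: 30 vehicles/hr arrive, loading takes 6 min, 4 bays.
rho, wq, lq = mmc_metrics(lam=30, mu=10, c=4)
print(f"utilization={rho:.2f}, mean wait={60*wq:.1f} min, mean queue={lq:.1f}")
```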

Keywords: Arena 14 automatic rockwell, queue, transport services, vehicle stations

Procedia PDF Downloads 55
203 Identifying Artifacts in SEM-EDS of Fouled RO Membranes Used for the Treatment of Brackish Groundwater Through Raman and ICP-MS Analysis

Authors: Abhishek Soti, Aditya Sharma, Akhilendra Bhushan Gupta

Abstract:

Fouled reverse osmosis membranes are primarily characterized by scanning electron microscopy (SEM) with an energy dispersive X-ray spectrometer (EDS) for a detailed investigation of foulants; however, this has severe limitations on several accounts. Apart from inaccuracy in spectral properties and inevitable interferences and interactions between sample and instrument, the misidentification of elements due to overlapping peaks is a significant drawback of EDS. This paper discusses this limitation by analyzing fouled polyamide RO membranes derived from community RO plants of Rajasthan treating brackish water, via a combination of results obtained from EDS and Raman spectroscopy, cross-corroborated with ICP-MS analysis of water samples prepared by dissolving the deposited salts. The anomalous behavior of different morphic forms of CaCO₃ in aqueous suspensions tends to introduce false reports of the presence of certain heavy metals and rare earth metals in the scales of fouled RO membranes used for treating brackish groundwater when analyzed with commonly adopted techniques like SEM-EDS or Raman spectrometry. Peaks of CaCO₃ reflected in the EDS spectra of the membrane were found to be misinterpreted as scandium due to the automatic assignment of elements by the software. Similarly, the morphic forms merged with the dominant peak of CaCO₃ might be reflected as a single peak of molybdenum in the Raman spectrum. A subsequent ICP-MS analysis of the deposited salts showed that both Sc and Mo were below detectable levels. It is therefore always essential to cross-confirm the results through a destructive analysis method to avoid such interferences. It is further recommended to study the different morphic forms of CaCO₃ scales, as they exhibit anomalous properties like reverse solubility with temperature and hence altered precipitation tendencies; an accurate description of the composition of scales is vital for the smooth functioning of RO systems.
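
The Ca/Sc confusion described above arises because the Ca Kβ line (≈4.01 keV) sits within a typical EDS detector resolution of the Sc Kα line (≈4.09 keV). Below is a small sketch of such an overlap check; the ~130 eV resolution is an assumed typical value for silicon drift detectors, not a figure from the paper:

```python
# Hypothetical check of EDS peak overlap. The energies (keV) are standard
# tabulated line positions; the resolution is an assumed typical value.
LINES_KEV = {"Ca Kb": 4.013, "Sc Ka": 4.090, "Mo La": 2.293, "S Ka": 2.308}
RESOLUTION_KEV = 0.130   # assumed FWHM of a typical silicon drift detector

def overlapping(line_a, line_b):
    """Two lines are practically unresolvable if their separation is
    smaller than the detector's energy resolution."""
    return abs(LINES_KEV[line_a] - LINES_KEV[line_b]) < RESOLUTION_KEV

print(overlapping("Ca Kb", "Sc Ka"))  # True: Ca-rich scale can mimic Sc
print(overlapping("Mo La", "S Ka"))   # True: the classic S/Mo overlap
```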

Keywords: reverse osmosis, foulant analysis, groundwater, EDS, artifacts

Procedia PDF Downloads 66
202 The Effect of Penalizing Wrong Answers in the Computerized Modified Multiple Choice Testing System

Authors: Min Hae Song, Jooyong Park

Abstract:

Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool which can overcome the problems of multiple-choice tests. Multiple-choice (MC) tests are efficient for automatic grading; however, their structural problems allow students to find the correct answer among the options even when they do not know it. A computerized modified multiple-choice testing system (CMMT) was developed using the interactivity of computers; it presents the question first and the options later, for a short time, when the student requests them. This study was conducted to find out whether penalizing wrong answers in CMMT could reduce random guessing. We checked whether students knew the answers by having them respond to short-answer tests before choosing among the given options in the CMMT or MC format. Ninety-four students were tested with the instruction that they would be penalized for wrong answers, but not for no response. There were four experimental conditions: a high or a low penalty percentage, each in the traditional multiple-choice or the CMMT format. In the low-penalty condition, the penalty rate was the probability of getting the correct answer by random guessing. In the high-penalty condition, students were penalized at twice the percentage of the low-penalty condition. The results showed that the number of no responses was significantly higher for the CMMT format and that the number of random guesses was significantly lower for the CMMT format. There were no significant differences between the two penalty conditions. This may be due to the fact that the actual score difference between the two conditions was too small. In the discussion, the possibility of applying CMMT-format tests with penalties for wrong answers in actual testing settings is addressed.
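
To see why the low-penalty rate still leaves a slight incentive to guess, consider formula scoring on a k-option item; the four-option assumption in this sketch is ours, not stated in the abstract:

```python
def expected_guess_score(k_options, penalty):
    """Expected score of one pure random guess on a k-option item that
    awards 1 point for a correct answer and subtracts `penalty` points
    for a wrong one; omitting the item always scores 0."""
    p_correct = 1.0 / k_options
    return p_correct * 1.0 - (1.0 - p_correct) * penalty

k = 4                                  # assumed number of options
low_penalty = 1.0 / k                  # "probability of guessing correctly"
high_penalty = 2 * low_penalty         # twice that percentage

print(expected_guess_score(k, low_penalty))   # 0.0625 > 0: guessing still pays a bit
print(expected_guess_score(k, high_penalty))  # -0.125 < 0: omitting beats guessing
```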

Keywords: computerized modified multiple choice test format, multiple-choice test format, penalizing, test format

Procedia PDF Downloads 150
201 Analyzing Safety Incidents using the Fatigue Risk Index Calculator as an Indicator of Fatigue within a UK Rail Franchise

Authors: Michael Scott Evans, Andrew Smith

Abstract:

The feeling of fatigue at work can have devastating consequences. The aim of this study was to investigate whether the well-established objective indicator of fatigue used by the rail industry, the Fatigue Risk Index (FRI) calculator, is an effective indicator of the number of safety incidents in which fatigue could have been a contributing factor. The study received ethics approval from Cardiff University's Ethics Committee (EC.16.06.14.4547). A total of 901 safety incidents were recorded into the Safety Management Information System (SMIS) from a single British rail franchise between 1st June 2010 and 31st December 2016. The safety incident types identified as ones in which fatigue could have been a contributing factor were: Signal Passed at Danger (SPAD), Train Protection & Warning System (TPWS) activation, Automatic Warning System (AWS) slow to cancel, failed to call, and station overrun. For the 901 recorded safety incidents, the scheduling system CrewPlan was used to extract the Fatigue Index (FI) and Risk Index (RI) scores of all train drivers on the day of the incident. Only the working rosters of 64.2% (N = 578; 550 men and 28 women), ranging in age from 24 to 65 years (M = 47.13, SD = 7.30), were accessible for analysis. The analysis of all 578 train drivers involved in safety incidents revealed that 99.8% (N = 577) of FI scores fell at or below the guideline threshold of 45, and that 97.9% (N = 566) of RI scores fell below the 1.6 threshold. These scores represent good practice within the rail industry. The findings therefore indicate that the FRI calculator used by this British rail franchise was not an effective predictor of safety incidents in which fatigue could have been a contributing factor, since only 0.2% of FI scores and 2.1% of RI scores exceeded their respective thresholds. Further research is needed to determine whether other contributing factors could better explain why such a large proportion of train drivers who were involved in safety incidents, in which fatigue could have been a contributing factor, had such low FI and RI scores.
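
A minimal sketch of the threshold analysis described above, assuming hypothetical incident-day scores and the 45 (FI) and 1.6 (RI) guideline thresholds reported in the abstract:

```python
import pandas as pd

# Hypothetical roster extract; column names and values are illustrative.
drivers = pd.DataFrame({
    "fatigue_index": [28.4, 31.0, 44.9, 46.2, 30.5],
    "risk_index":    [1.05, 1.21, 1.58, 1.62, 1.10],
})

FI_THRESHOLD, RI_THRESHOLD = 45.0, 1.6   # guideline thresholds from the study

# Proportion of incident-day scores breaching each guideline threshold.
fi_breach = (drivers["fatigue_index"] > FI_THRESHOLD).mean()
ri_breach = (drivers["risk_index"] >= RI_THRESHOLD).mean()
print(f"FI breaches: {fi_breach:.1%}, RI breaches: {ri_breach:.1%}")
```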

Keywords: fatigue risk index calculator, objective indicator of fatigue, rail industry, safety incident

Procedia PDF Downloads 165
200 Detection of Safety Goggles on Humans in an Industrial Environment Using Faster Region-Based Convolutional Neural Network with Rotated Bounding Box

Authors: Ankit Kamboj, Shikha Talwar, Nilesh Powar

Abstract:

To successfully deliver products to the market, employees need to be in a safe environment, especially in industrial and manufacturing settings. The consequences of failing to wear safety glasses while working in industrial plants can put employees at high risk; hence the need to develop a real-time automatic detection system that detects persons (violators) not wearing safety glasses. In this study, a convolutional neural network (CNN) algorithm called faster region-based CNN (Faster R-CNN) with rotated bounding boxes is used for detecting safety glasses on persons; the algorithm has the advantage of detecting safety glasses at different orientation angles on a person. The proposed method first detects a person in the image and then detects whether the person is wearing safety glasses or not. The video data are captured at the entrance of restricted zones of the industrial environment (a manufacturing plant) and converted into images at 2 frames per second. In the first step, a CNN with weights pre-trained on the COCO dataset is used for person detection, and the detections are cropped as images. The safety goggles are then labelled on the cropped images using the image labelling tool roLabelImg, which annotates the ground truth of rotated objects more accurately; the annotations obtained are further modified to give the four coordinates of the rotated rectangular bounding box. Next, Faster R-CNN with rotated bounding boxes is used to detect the safety goggles and is compared with the traditional-bounding-box Faster R-CNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for detecting rotated objects. The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU.
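
A sketch of the two-stage pipeline described above: torchvision's COCO-pretrained Faster R-CNN detects persons, and a second, rotated-box stage inspects each person crop. The RotatedGoggleDetector class is a hypothetical stand-in for the paper's trained rotated-box model, not a torchvision API:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

class RotatedGoggleDetector:
    """Hypothetical stand-in for the rotated-box Faster R-CNN described in
    the abstract; a real implementation would load trained weights."""
    def __call__(self, crop):
        return []  # would return (cx, cy, w, h, angle, score) tuples

# Stage 1: COCO-pretrained Faster R-CNN finds persons (COCO class id 1).
person_detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
person_detector.eval()
goggle_detector = RotatedGoggleDetector()

def detect_violators(frame, person_score_thresh=0.8):
    """Return crops of detected persons together with goggle detections;
    `frame` is assumed to be a PIL image."""
    with torch.no_grad():
        dets = person_detector([to_tensor(frame)])[0]
    results = []
    for box, label, score in zip(dets["boxes"], dets["labels"], dets["scores"]):
        if label.item() == 1 and score.item() >= person_score_thresh:
            x1, y1, x2, y2 = box.int().tolist()
            crop = frame.crop((x1, y1, x2, y2))
            results.append((crop, goggle_detector(crop)))
    return results
```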

Keywords: CNN, deep learning, faster RCNN, roLabelImg rotated bounding box, safety goggle detection

Procedia PDF Downloads 113
199 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective, and automatic approach has been adopted to address this problem. Computer vision combined with a deep-learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with Mask R-CNN for instance segmentation and a shape-matching network called HOGShape; detected litter can then be cleaned up in time by clean-up organizations, which receive warning notifications from the system. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used along with multiple template matching between HOG maps of images and HOG maps of templates to improve the predicted mask images obtained via Mask R-CNN training. The system is intended to alert cleanup organizations in a timely manner with warning notifications based on live recorded beach debris data. The suggested network improves the misclassified debris masks of debris objects with different illuminations, shapes, and viewpoints, and of occluded litter with vague visibility.
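
A minimal sketch of HOG-based template matching with scikit-image, in the spirit of the HOG-map comparison described above; the cosine similarity measure and the image size are our assumptions, not necessarily those of HOGShape:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_map(image, size=(128, 128)):
    """Resize a grayscale image and return its HOG descriptor."""
    return hog(resize(image, size), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def best_template_match(candidate, templates):
    """Score a candidate debris crop against HOG maps of class templates
    and return the index and score of the most similar template."""
    cand = hog_map(candidate)
    scores = []
    for t in templates:
        tmpl = hog_map(t)
        scores.append(np.dot(cand, tmpl) /
                      (np.linalg.norm(cand) * np.linalg.norm(tmpl) + 1e-9))
    return int(np.argmax(scores)), max(scores)
```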

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 77
198 Changes in Blood Pressure in a Longitudinal Cohort of Vietnamese Women

Authors: Anh Vo Van Ha, Yun Zhao, Luat Cong Nguyen, Tan Khac Chu, Phung Hoang Nguyen, Minh Ngoc Pham, Colin W. Binns, Andy H. Lee

Abstract:

This study aims to examine longitudinal changes in blood pressure (BP) during the 1-year postpartum period and to evaluate the influence of parity, maternal age at delivery, prepregnancy BMI, gestational weight gain, gestational age at delivery, and postpartum maternal weight. A prospective longitudinal cohort study of 883 singleton Vietnamese women was conducted in Hanoi, Haiphong, and Ho Chi Minh City, Vietnam, during 2015-2017. Women diagnosed with gestational diabetes mellitus at 24-28 weeks of gestation, pre-eclampsia, or hypoglycemia were excluded from the analysis. BP was repeatedly measured at discharge and at 6 and 12 months postpartum using automatic blood pressure monitors. A linear mixed model with repeated measures was used to describe changes occurring from pregnancy to 1 year postpartum. Parity, self-reported prepregnancy BMI, gestational weight gain, maternal age, and gestational age at delivery were treated as time-invariant variables, and measured maternal weight was treated as a time-varying variable in the models. Women with higher measured postpartum weight had higher mean systolic blood pressure (SBP), 0.20 mmHg, 95% CI [0.12, 0.28]. Similarly, women with higher measured postpartum weight had higher mean diastolic blood pressure (DBP), 0.15 mmHg, 95% CI [0.08, 0.23]. Both differences were statistically significant, P < 0.001. There were no differences in SBP and DBP depending on parity, maternal age at delivery, prepregnancy BMI, gestational weight gain, or gestational age at delivery. Compared with the discharge measurement, SBP was significantly higher at 6 months postpartum, 6.91 mmHg, 95% CI [6.22, 7.59], and at 12 months postpartum, 6.39 mmHg, 95% CI [5.64, 7.15]. Similarly, DBP was also significantly higher at 6 and 12 months postpartum than at discharge, 10.46 mmHg, 95% CI [9.75, 11.17], and 11.33 mmHg, 95% CI [10.54, 12.12]. In conclusion, BP measured repeatedly during the postpartum period (6 and 12 months postpartum) showed a statistically significant increase compared with discharge from the hospital. Maternal weight was a significant predictor of postpartum blood pressure over the 1-year postpartum period.
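
A sketch of fitting a linear mixed model with a random intercept per woman, as described above, using statsmodels; the data frame contents and variable names are illustrative, not the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per woman per visit
# (discharge, 6 months, 12 months); values are invented.
df = pd.DataFrame({
    "id":     [1]*3 + [2]*3 + [3]*3 + [4]*3,
    "visit":  ["discharge", "6m", "12m"] * 4,
    "sbp":    [108, 115, 114, 112, 119, 118, 105, 111, 112, 118, 124, 125],
    "weight": [62.0, 58.5, 57.0, 70.0, 66.0, 65.5,
               55.0, 53.0, 52.5, 80.0, 75.0, 74.0],
    "parity": [1]*3 + [2]*3 + [1]*3 + [3]*3,
})

# Random intercept per woman; weight is time-varying, parity time-invariant.
model = smf.mixedlm("sbp ~ visit + weight + parity", df, groups=df["id"])
result = model.fit()
print(result.summary())
```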

Keywords: blood pressure, maternal weight, postpartum, Vietnam

Procedia PDF Downloads 183
197 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95-97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial-labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, resolving functionally similar proteins. The theoretical analysis, the protein labeler program, the finite-difference time-domain calculation of plasmonic fields, and the simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
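
A minimal sketch of a 1D convolutional classifier over simulated three-channel (tri-color tag) traces, in the spirit of the CNN protein-ID algorithm described above; the layer sizes, trace length, and label-set size are our assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

N_PROTEINS = 1000      # hypothetical whole-proteome label set
TRACE_LEN = 512        # hypothetical samples per nanopore event

class ProteinIdCNN(nn.Module):
    """Classifies a three-channel intensity trace into one of N proteins."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, N_PROTEINS)

    def forward(self, x):            # x: (batch, 3, TRACE_LEN)
        return self.classifier(self.features(x).squeeze(-1))

logits = ProteinIdCNN()(torch.randn(8, 3, TRACE_LEN))
print(logits.shape)                  # torch.Size([8, 1000])
```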

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 48
196 Vulnerability Assessment of Groundwater Quality Deterioration Using PMWIN Model

Authors: A. Shakoor, M. Arshad

Abstract:

The utilization of groundwater resources for irrigation has increased significantly during the last two decades due to constrained canal water supplies. More than 70% of the farmers in the Punjab, Pakistan, depend directly or indirectly on groundwater to meet their crop water demands, and this unchecked paradigm shift has resulted in aquifer depletion and deterioration. Therefore, comprehensive research was carried out in central Punjab, Pakistan, on the spatiotemporal variation in groundwater level and quality. Processing MODFLOW for Windows (PMWIN) and the solute transport model MT3D were used for existing conditions and for the future prediction of groundwater level and quality up to 2030. A comprehensive data set of aquifer lithology, the canal network, groundwater level, groundwater salinity, evapotranspiration, groundwater abstraction, recharge, etc. was used in the PMWIN model development. The model was successfully calibrated and validated with respect to groundwater level for the periods 2003 to 2007 and 2008 to 2012, respectively. The coefficient of determination (R²) and model efficiency (MEF) for the calibration and validation periods were 0.89 and 0.98, respectively, indicating a high level of correlation between the calculated and measured data. For the solute transport model (MT3D), values of the advection and dispersion parameters were used. The model was run for future scenarios up to 2030, assuming no uncertain change in climate and a gradually increasing groundwater abstraction rate. The model predicted that the groundwater level would decline by 0.0131 to 1.68 m/year during 2013 to 2030, with the maximum decline on the lower side of the study area, where the canal system infrastructure is sparse. This lowering of the groundwater level would increase tubewell installation and pumping costs. Similarly, the predicted total dissolved solids (TDS) of the groundwater would increase by 6.88 to 69.88 mg/L/year during 2013 to 2030, with the maximum increase on the lower side. It was found that by 2030, good-quality water would be reduced by 21.4%, while marginal- and hazardous-quality water would increase by 19.28% and 2%, respectively. The simulated results showed that the salinity of the study area had increased due to the intrusion of salts. The deterioration of groundwater quality would cause soil salinity and ultimately a reduction in crop productivity. It was concluded from the predicted results of the groundwater model that groundwater deteriorates with the depth of the water table, i.e., TDS increases with declining groundwater level. It is recommended that agronomic and engineering practices, i.e., land leveling, rainwater harvesting, skimming wells, ASR (aquifer storage and recovery) wells, etc., be integrated to improve the management of groundwater for higher crop production in salt-affected soils.
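
A sketch of the two calibration statistics reported above; here MEF is interpreted as the Nash-Sutcliffe model efficiency, the usual choice for groundwater model calibration (an assumption on our part), and the water-table values are invented:

```python
import numpy as np

def r_squared(obs, sim):
    """Coefficient of determination between observed and simulated heads."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

def model_efficiency(obs, sim):
    """Nash-Sutcliffe model efficiency (MEF): 1 is a perfect fit, and
    values <= 0 mean the model is no better than the observed mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical water-table depths (m) at observation wells.
observed  = [4.2, 4.5, 4.9, 5.3, 5.8, 6.1]
simulated = [4.3, 4.4, 5.0, 5.2, 5.9, 6.0]
print(r_squared(observed, simulated), model_efficiency(observed, simulated))
```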

Keywords: groundwater quality, groundwater management, PMWIN, MT3D model

Procedia PDF Downloads 353
195 Developing a Self-Healing Concrete Filler Using Poly(Methyl Methacrylate) Based Two-Part Adhesive

Authors: Shima Taheri, Simon Clark

Abstract:

Concrete is an essential building material used in the majority of structures. The degradation of concrete over time increases the life-cycle cost of an asset, with an estimated annual cost of billions of dollars to national economies. Most concrete failure occurs due to cracks, which propagate through a structure and cause weakening leading to failure. Stopping crack propagation is thus the key to protecting concrete structures from failure and is the best way to prevent inconveniences and catastrophes. Furthermore, the majority of cracks occur deep within the concrete in inaccessible areas and are invisible to normal inspection. Few materials intrinsically possess self-healing ability, but one that does is concrete. However, self-healing in concrete is limited to small dormant cracks in a moist environment and is difficult to control. In this project, we developed a method for the self-healing of nascent fractures in concrete components through the automatic release of self-curing healing agents encapsulated in breakable nano- and micro-structures. The poly(methyl methacrylate) (PMMA)-based two-part adhesive is encapsulated in core-shell structures with a brittle/weak inert shell, synthesized via miniemulsion/solvent-evaporation polymerization. The stress fields associated with propagating cracks can break these capsules, releasing the healing agents at the point where they are needed. The shell thickness plays an important role in preserving the contents until the final setting of the concrete. The capsules can also be surface-functionalized with carboxyl groups to overcome homogeneous mixing issues. Currently, this formulated self-healing system can replace up to 1% of the cement in a concrete formulation. Increasing this amount to 5-7% of the concrete formulation, without compromising compressive strength and shrinkage properties, is still under investigation. This self-healing system will not only increase the durability of structures by stopping crack propagation but also allow the use of less cement in concrete construction, thereby adding to the global effort to reduce CO₂ emissions.

Keywords: self-healing concrete, concrete crack, concrete deterioration, durability

Procedia PDF Downloads 96
194 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed, and the friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM. Therefore, there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and, finally, to identify a relevant process window. Deposition of material in the AFSM process takes place in several phases: in chronological order, these are the docking phase, the dwell-time phase, the deposition phase, and the removal phase. The present work focuses on the dwell-time phase, during which pure friction raises the temperature of the system composed of the tool, the filler material, and the substrate. Analytic modeling of heat generation based on friction considers the rotational speed and the contact pressure as the main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing over time of the roughness of the materials in contact. This study proposes, through numerical modeling followed by experimental validation, to examine the influence of the various input parameters on the dwell-time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters such as axial force and rotational speed, strongly influences the temperature reached and/or the time required to reach the target temperature. The main outcome is the prediction of a process window, which is a key result for more efficient process implementation.
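
For a flat circular contact of radius R under uniform pressure p and angular speed ω, the standard analytic estimate used in friction-stir modeling integrates the friction shear μp times the sliding speed ωr over the contact area, giving Q = (2/3)πμpωR³. A sketch with illustrative values (the numbers are assumptions, not the paper's):

```python
import math

def friction_heat_power(mu, pressure, omega, radius):
    """Total frictional heat generation (W) for a flat circular contact of
    radius R under uniform pressure p, rotating at angular speed omega:
    Q = integral of mu * p * omega * r over the disc = (2/3) pi mu p w R^3."""
    return (2.0 / 3.0) * math.pi * mu * pressure * omega * radius**3

mu = 0.3                        # assumed friction coefficient
pressure = 50e6                 # 50 MPa contact pressure (illustrative)
rpm = 400
omega = rpm * 2 * math.pi / 60  # rad/s
radius = 0.008                  # 8 mm tool radius (illustrative)

print(f"Q = {friction_heat_power(mu, pressure, omega, radius):.0f} W")
```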

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 125
193 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination, and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although great effort has been made by previous studies to come up with various methods, their performance, especially in terms of accuracy, has fallen short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of the writing: the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of the beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending), formed by choosing the center of each group of similar fragments. The writings under study are then represented by computing the probability of occurrence of the codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing the distance between their respective probability distributions. Evaluations were carried out on the ICFHR standard dataset of 206 writers using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
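
A sketch of the codebook-occurrence representation and distance comparison described above; the chi-squared distance is one common choice for comparing occurrence histograms, as the abstract does not name the distance used:

```python
import numpy as np

def codebook_histogram(fragment_ids, codebook_size):
    """Represent a writing sample as the probability of occurrence of each
    codebook pattern among its beginning/ending fragments."""
    counts = np.bincount(fragment_ids, minlength=codebook_size).astype(float)
    return counts / counts.sum()

def chi2_distance(p, q, eps=1e-12):
    """Chi-squared distance between two occurrence distributions; smaller
    means the two writings are more likely from the same writer."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Hypothetical fragment-to-pattern assignments for two writing samples.
sample_a = codebook_histogram(np.array([0, 3, 3, 7, 1, 3]), codebook_size=8)
sample_b = codebook_histogram(np.array([0, 3, 2, 7, 7, 3]), codebook_size=8)
print(chi2_distance(sample_a, sample_b))
```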

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 489
192 A Bayesian Approach for Analyzing Academic Article Structure

Authors: Jia-Lien Hsu, Chiung-Wen Chang

Abstract:

Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing a time-consuming and labor-intensive manual analysis process. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. It is expected that this automatic analytical tool for move structures will help non-native speakers or novice writers become aware of appropriate move structures and internalize relevant knowledge to improve their writing. We propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a couple of given initial patterns and a corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, with respect to the corpus, we process each document one by one: we extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the accuracy of the proposed approach reaches a promising 56%.
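
A minimal sketch of a Bayesian move tagger using a multinomial naive Bayes classifier in scikit-learn; the training sentences are invented, and the paper's iterative model-updating over CiteSeerX is not reproduced here:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set of sentences labeled with B-P-M-R-C moves;
# a real system would train on a large labeled corpus.
sentences = [
    "Recent advances in deep learning have transformed NLP.",   # Background
    "This paper aims to classify rhetorical moves.",            # Purpose
    "We train a Bayesian classifier on labeled sentences.",     # Method
    "The proposed approach reaches 56% accuracy.",              # Result
    "These findings can help novice writers.",                  # Conclusion
]
moves = ["B", "P", "M", "R", "C"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, moves)
print(model.predict(["Our method extracts features from each sentence."]))
```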

Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach

Procedia PDF Downloads 306
191 Preliminary Analysis on the Distribution of Elements in Cannabis

Authors: E. Zafeiraki, P. Nisianakis, K. Machera

Abstract:

The cannabis plant contains 113 cannabinoids and is commonly known for its psychoactive substance tetrahydrocannabinol or as a source of narcotic substances. In recent years, cannabis cultivation has also increased due to its wide use for medical and industrial purposes as well as for para-pharmaceuticals, cosmetics, and food commodities. Depending on the final product, different parts of the plant are utilized, with the leaves and buds (seeds) being the most frequently used. Cannabis can accumulate various contaminants, including heavy metals, from both the soil and the water in which the plant grows. More specifically, metals may occur naturally in soil and water, or they can enter the environment through the fertilizers, pesticides, and fungicides that are commonly applied to crops. The high probability of metal accumulation in cannabis, combined with the latter's growing use, raises concerns about potential health effects in humans and consequently leads to the need for safety measures for cannabis products, such as guidelines for regulating contaminants, including metals, especially those of high toxicity. Acknowledging the above, the aim of the current study was first to investigate metal contamination in cannabis samples collected from Greece, and second to examine potential differences in metal accumulation among the different parts of the plant. To the best of our knowledge, this is the first study presenting information on elements in cannabis cultivated in Greece and on their distribution pattern in the plant body. To this end, the leaves and the seeds of all samples were initially separated, dried, and then digested with nitric acid (HNO₃) and hydrochloric acid (HCl). For the analysis of these samples, an inductively coupled plasma mass spectrometry (ICP-MS) method was developed, able to quantify 28 elements. Internal standards were added at a constant rate and concentration to all calibration standards and unknown samples, while two certified reference materials were analyzed in every batch to ensure the accuracy of the measurements. The repeatability of the method and the background contamination were controlled by the analysis of quality control (QC) standards and blank samples in every sequence, respectively. According to the results, essential metals, such as Ca, Zn, and Mg, were detected at high levels. On the contrary, the concentrations of highly toxic metals, such as As (average: 0.10 ppm), Pb (average: 0.36 ppm), Cd (average: 0.04 ppm), and Hg (average: 0.012 ppm), were very low in all samples, indicating that no harmful effects on human health are expected from the analyzed samples. Moreover, the pattern of metal contamination was very similar across all the analyzed samples, which could be attributed to the common origin of the analyzed cannabis, i.e., a common soil composition, use of fertilizers, pesticides, etc. Finally, as far as the distribution pattern between the different parts of the plant is concerned, the leaves presented higher concentrations than the seeds for all the metals examined.

Keywords: cannabis, heavy metals, ICP-MS, leaves and seeds, elements

Procedia PDF Downloads 82
190 Pollution Associated with Combustion in Stoves Burning Firewood (Eucalyptus) and Pellets (Radiata Pine): Effect of UVA Irradiation

Authors: Y. Vásquez, F. Reyes, P. Oyola, M. Rubio, J. Muñoz, E. Lissi

Abstract:

In several cities in Chile there is significant urban pollution, particularly in Santiago and in the cities of the south, where biomass is used as fuel for heating and cooking in a large proportion of homes. This has generated interest in knowing which factors can be modulated to control the level of pollution. In this project, a photochemical chamber (14 m³) was conditioned and set up, equipped with gas monitors (e.g., for CO, NOx, O₃, and others) and PM monitors (e.g., DustTrak, DMPS, Harvard impactors). This volume could be exposed to UVA lamps, producing a spectrum similar to that generated by the sun. In this chamber, the PM and gas emissions associated with biomass burning were studied in the presence and absence of radiation. From the comparative analysis of a wood stove (eucalyptus globulus) and a pellet stove (radiata pine), it can be concluded, to a first approximation, that 9-nitroanthracene, 4-nitropyrene, levoglucosan, water-soluble potassium, and CO present the characteristics of tracers. However, some of them show properties that interfere with this possibility. For example, levoglucosan is decomposed by radiation, and 9-nitroanthracene and 4-nitropyrene are both emitted and formed under radiation. In addition, 9-nitroanthracene has a vapor pressure that involves partitioning between the gas phase and particulate matter. From this analysis, it can be concluded that K⁺ is the compound that best meets the properties expected of a tracer. The PM2.5 emission measured for the automatic pellet stove used in this thesis project was two orders of magnitude smaller than that registered for the manual wood stove. This has encouraged the use of pellet stoves for indoor heating, particularly in south-central Chile. However, the use of pellets is not without problems, since pellet stoves generate high concentrations of nitro-PAHs (secondary organic contaminants), in particular 4-nitropyrene, a compound of high toxicity. Moreover, the primary and secondary particulate matter associated with pellet burning shows a decrease in the size distribution of the PM, which leads to deeper penetration of the particles and their toxic components into the respiratory system.

Keywords: biomass burning, photochemical chamber, particulate matter, tracers

Procedia PDF Downloads 157
189 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithms of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithms of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model and was used to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter root mean squared error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from the gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
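
A hedged sketch of the ANN regression and its evaluation statistics (R and RMSE); scikit-learn's MLPRegressor trains with Adam or L-BFGS rather than Levenberg-Marquardt, so this only approximates the paper's training setup, and the data are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical training set: ln(CO2), ln(H2S), ln(H2S/H2), ln(CO2/H2) as
# inputs and measured bottomhole temperature (BHTM, deg C) as the target.
X = rng.normal(size=(200, 4))
bht_measured = 250 + 40 * X[:, 0] - 15 * X[:, 2] + rng.normal(scale=5, size=200)

# A small one-hidden-layer ANN, echoing the single hidden layer in the paper.
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X, bht_measured)

bht_simulated = ann.predict(X)                     # BHT_ANN
rmse = mean_squared_error(bht_measured, bht_simulated) ** 0.5
r = np.corrcoef(bht_measured, bht_simulated)[0, 1]
print(f"RMSE = {rmse:.1f} degC, R = {r:.3f}")
```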

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 318
188 Automatic Generation of Census Enumeration Area and National Sampling Frame to Achieve Sustainable Development Goals

Authors: Sarchil H. Qader, Andrew Harfoot, Mathias Kuepie, Sabrina Juran, Attila Lazar, Andrew J. Tatem

Abstract:

The need for high-quality, reliable, and timely population data, including demographic information, to support the achievement of the Sustainable Development Goals (SDGs) in all countries was recognized by the United Nations' 2030 Agenda for Sustainable Development. However, many low- and middle-income countries lack reliable and recent census data. To achieve reliable and accurate census and survey outputs, up-to-date census enumeration areas and digital national sampling frames are critical. Census enumeration areas (EAs) are the smallest geographic units for the collection, dissemination, and analysis of census data and are often used as a national sampling frame to serve various socio-economic surveys. Even for countries that are wealthy and stable, creating and updating EAs is a difficult yet crucial step in preparing for a national census. This process is commonly done manually, either by digitizing small geographic units on high-resolution satellite imagery or by walking the boundaries of units, both of which are extremely expensive. We have developed a user-friendly tool that can be employed to generate draft EA boundaries automatically. The tool is based on high-resolution gridded population and settlement datasets, GPS household locations, and building footprints, and it uses publicly available natural, man-made, and administrative boundaries. Initial outputs were produced in Burkina Faso, Paraguay, Somalia, Togo, Niger, Guinea, and Zimbabwe. The results indicate that the EAs are in line with international standards: their boundaries are easily identifiable and follow ground features, they have no overlaps, they are compact and free of pockets and disjoints, and they are nested within administrative boundaries.
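
As a toy illustration of the idea, and not the algorithm of the tool described above, the sketch below greedily aggregates cells of a gridded population raster into contiguous draft units of roughly equal population; a real tool would additionally honor roads, rivers, and administrative boundaries:

```python
import numpy as np

def greedy_ea_partition(pop_grid, target_pop):
    """Toy sketch: flood-fill cells of a gridded population raster into
    contiguous draft EAs, each holding roughly `target_pop` people."""
    labels = np.full(pop_grid.shape, -1, dtype=int)
    ea_id = 0
    for start in zip(*np.where(labels < 0)):
        if labels[start] >= 0:
            continue                       # already absorbed by an earlier EA
        frontier, total = [start], 0.0
        while frontier and total < target_pop:
            i, j = frontier.pop()
            if labels[i, j] >= 0:
                continue
            labels[i, j] = ea_id
            total += pop_grid[i, j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbors
                ni, nj = i + di, j + dj
                if (0 <= ni < pop_grid.shape[0] and
                        0 <= nj < pop_grid.shape[1] and labels[ni, nj] < 0):
                    frontier.append((ni, nj))
        ea_id += 1
    return labels

pop = np.random.default_rng(1).integers(0, 50, size=(20, 20))
eas = greedy_ea_partition(pop, target_pop=500)
print(eas.max() + 1, "draft EAs")
```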

Keywords: enumeration areas, national sampling frame, gridded population data, preEA tool

Procedia PDF Downloads 115
187 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data

Authors: Jian-Heng Wu, Bor-Shen Lin

Abstract:

The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed for analyzing the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can model the distribution of a random vector and is formulated as a weighted sum over a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances between their Gaussian components. Consequently, the distance between two water masses may be measured quickly, which allows the automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge of an assumed regression family, but can also restrict model complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system for ocean data, which facilitates the comparison of water masses by aggregating the data without degrading discriminating capability. This system provides an interface for interactively querying geographic locations with similar temperature-salinity-depth characteristics and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
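
A sketch of the approach described above using scikit-learn: fit one Gaussian mixture per location, then compare models with a weighted sum of pairwise Bhattacharyya distances; the synthetic temperature-salinity-depth samples are for illustration only:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya_gaussians(m1, c1, m2, c2):
    """Bhattacharyya distance between two multivariate normals."""
    c = 0.5 * (c1 + c2)
    d = m1 - m2
    term1 = 0.125 * d @ np.linalg.solve(c, d)
    term2 = 0.5 * np.log(np.linalg.det(c) /
                         np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return term1 + term2

def gmm_distance(g1, g2):
    """Weighted sum of pairwise component distances, as in the abstract."""
    total = 0.0
    for w1, m1, c1 in zip(g1.weights_, g1.means_, g1.covariances_):
        for w2, m2, c2 in zip(g2.weights_, g2.means_, g2.covariances_):
            total += w1 * w2 * bhattacharyya_gaussians(m1, c1, m2, c2)
    return total

# Hypothetical temperature-salinity-depth samples from two locations.
rng = np.random.default_rng(0)
site_a = rng.normal([15.0, 34.5, 200.0], [2.0, 0.3, 80.0], size=(500, 3))
site_b = rng.normal([22.0, 34.8, 120.0], [1.5, 0.2, 60.0], size=(500, 3))

gmm_a = GaussianMixture(n_components=3, random_state=0).fit(site_a)
gmm_b = GaussianMixture(n_components=3, random_state=0).fit(site_b)
print(gmm_distance(gmm_a, gmm_b))
```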

Keywords: water mass, Gaussian mixture model, data visualization, system framework

Procedia PDF Downloads 118
186 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increasing global mean temperature in recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and to determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequencies of temperature variability, which show 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations that are attributed to the PDO and CO₂ concentrations. Air surface temperature was also correlated with the smoothed sunspot number and galactic cosmic rays; the results showed little to no effect. The influence of the ENSO teleconnection on temperature, wind patterns, cloud cover, and outgoing longwave radiation in different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air masses over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature. However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a strong positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to look at the viability of using three high-resolution datasets in future climate analyses and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. The results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies; in addition, they are more representative of regional temperature and should not be treated as a substitute for station-observed air temperature.
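
A minimal implementation of the Mann-Kendall trend test (normal approximation, no tie correction) applied to a synthetic temperature series with the 0.0105 °C/year trend reported above:

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test: returns the S statistic and
    a two-sided p-value from the normal approximation (ties ignored)."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity corr.
    return s, 2 * (1 - stats.norm.cdf(abs(z)))

# Hypothetical annual mean temperatures with a slight warming trend.
years = np.arange(1960, 2016)
temps = (27.0 + 0.0105 * (years - 1960)
         + np.random.default_rng(2).normal(0, 0.2, len(years)))
s, p = mann_kendall(temps)
print(f"S = {s}, p = {p:.4f}")  # p < 0.05 indicates a significant trend
```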

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 292
185 Artificial Intelligence Protecting Birds against Collisions with Wind Turbines

Authors: Aleksandra Szurlej-Kielanska, Lucyna Pilacka, Dariusz Górecki

Abstract:

The dynamic development of wind energy requires the simultaneous implementation of effective systems that minimize the risk of collisions between birds and wind turbines. Wind turbines are installed in increasingly challenging locations, often close to the natural environment of birds, and more and more countries and organizations are defining guidelines for the necessary functionality of such systems. The minimum bird detection distance, trajectory tracking, and shutdown time are key factors in eliminating collisions. Since 2020, we have been conducting a validation survey of successive versions of the BPS detection and reaction system. The bird protection system (BPS) is a fully automatic camera system that estimates the distance of a bird to the turbine, classifies its size, and autonomously undertakes various actions depending on the bird's distance and flight path. The BPS was installed and tested in a real environment at wind turbines in northern Poland and central Spain. The validation showed that at distances of up to 300 m, the BPS performs at least as well as a skilled ornithologist, and large bird species are successfully detected from over 600 m. In addition, data collected by BPS systems installed in Spain showed that 60% of the detections of birds of prey were of individuals approaching the turbine, and these detections met the turbine shutdown criteria. Less than 40% of the detections of birds of prey took place at wind speeds below 2 m/s, when the turbines were not operating. As shown by the analysis of the data collected by the system over 12 months, it correctly classified the size of birds with a wingspan of more than 1.1 m in 90% of cases and of birds with a wingspan of 0.7-1 m in 80% of cases. The collected data also support the conclusion that some species keep a certain distance from the turbines at wind speeds over 8 m/s (Aquila sp., Buteo sp., Gyps sp.), although Gyps sp. and Milvus sp. remained active at such wind speeds in the tested area. The data collected so far indicate that the BPS is effective in detecting birds of prey with a wingspan of more than 1 m and in stopping wind turbines in response to their presence.
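
The shutdown behaviour described above can be pictured as a simple rule over estimated distance, size class, and trajectory. The sketch below is purely illustrative: the thresholds and class names are hypothetical and do not reproduce the actual BPS decision criteria.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float   # estimated bird-to-turbine distance
    wingspan_m: float   # size class estimated from the camera images
    approaching: bool   # flight path heading toward the rotor

def turbine_action(det: Detection) -> str:
    """Pick an autonomous action for one detection (illustrative thresholds)."""
    if det.wingspan_m >= 1.0 and det.approaching and det.distance_m <= 300:
        return "shutdown"   # large bird on a collision course
    if det.distance_m <= 600:
        return "track"      # keep following the trajectory
    return "ignore"

print(turbine_action(Detection(distance_m=250, wingspan_m=1.2, approaching=True)))
```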

Keywords: protecting birds, birds monitoring, wind farms, green energy, sustainable development

Procedia PDF Downloads 51
184 Analysis of the Impact of Suez Canal on the Robustness of Global Shipping Networks

Authors: Zimu Li, Zheng Wan

Abstract:

The Suez Canal plays an important role in global shipping networks and is one of the most frequently used waterways in the world. The obstruction of the canal by the ship Ever Given in March 2021, however, completely blocked the Suez Canal for a week and caused significant disruption to world trade. It is therefore very important to quantitatively analyze the impact of the accident on the robustness of the global shipping network. However, current research on maritime transportation networks is usually limited to local or small-scale networks in a particular region. Based on complex network theory, this study establishes a global shipping complex network covering 2713 nodes and 137830 edges, using real trajectory data from the automatic identification system (AIS) of global marine transport ships in 2018. Two attack modes, deliberate (Suez Canal blocking) and random, are defined to calculate the changes in network node degree, eccentricity, clustering coefficient, network density, isolated nodes, betweenness centrality, and closeness centrality under each mode, and to quantitatively analyze the actual impact of the Suez Canal blockage on the robustness of the global shipping network. The results of the robustness analysis show that the Suez Canal blockage was more destructive to the shipping network than random attacks of the same scale. Network connectivity and accessibility decreased significantly, and the decline diminished with increasing distance between a port and the canal, showing a distance-attenuation phenomenon. This study further analyzes the impact of the blockage on Chinese ports and finds that it significantly interferes with China's shipping network and seriously affects China's normal trade activities. Finally, the impact on the global supply chain is analyzed, and it is found that blocking the canal seriously damages the normal operation of the global supply chain.
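
The deliberate-attack analysis can be reproduced in miniature with a graph library: record the robustness metrics, remove the canal node, and compare. The toy port graph below stands in for the 2713-node AIS-derived network and is assumed for illustration; the metrics mirror those listed in the abstract.

```python
import networkx as nx

def robustness_report(g: nx.Graph) -> dict:
    """Metrics compared before and after an attack on the network."""
    return {
        "density": nx.density(g),
        "isolates": nx.number_of_isolates(g),
        "avg_clustering": nx.average_clustering(g),
        "avg_betweenness": sum(nx.betweenness_centrality(g).values()) / g.number_of_nodes(),
    }

# Toy stand-in for the real 2713-port, 137830-edge AIS network.
g = nx.Graph([
    ("Shanghai", "Singapore"), ("Singapore", "Suez"), ("Suez", "Rotterdam"),
    ("Rotterdam", "New York"), ("Shanghai", "Los Angeles"),
    ("Singapore", "Cape Town"), ("Cape Town", "Rotterdam"),
])
before = robustness_report(g)
g.remove_node("Suez")  # deliberate attack: canal blockage
after = robustness_report(g)
print(before)
print(after)
```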

Keywords: global shipping networks, ship AIS trajectory data, main channel, complex network, eigenvalue change

Procedia PDF Downloads 147
183 Fire Risk Information Harmonization for Transboundary Fire Events between Portugal and Spain

Authors: Domingos Viegas, Miguel Almeida, Carmen Rocha, Ilda Novo, Yolanda Luna

Abstract:

Forest fires along the more than 1200 km of the Spanish-Portuguese border are increasingly frequent, currently numbering around 2000 fire events per year. Some of these events develop into large international wildfires requiring concerted operations based on information shared between the two countries. The fire event of Valencia de Alcantara (2003), which caused several fatalities and more than 13000 ha burnt, is a reference example of these international events. Currently, Portugal and Spain have a specific cross-border cooperation protocol on wildfire response for a strip of about 30 km (15 km on each side). Public authorities recognize the success of this collaboration, but it is also acknowledged that the cooperation should include more functionalities, such as the development of a common risk information system for transboundary fire events. Since Portuguese and Spanish authorities use different approaches to determine the inputs of their fire risk indexes and different methodologies to assess fire risk, joint firefighting operations are sometimes jeopardized because the information is not harmonized and civil protection agents from the two countries do not share a common understanding of the situation. This paper therefore presents a methodology for harmonizing the calculation and perception of fire risk by the Portuguese and Spanish civil protection authorities, together with the final results. The fire risk index used in this work is the Canadian Fire Weather Index (FWI), which is based on meteorological data. The FWI is limited in its application, as it does not take into account other important factors with a great effect on fire ignition and development. The combination of these factors is very complex since, besides meteorology, it involves several parameters from different domains, namely sociology, topography, vegetation, and soil cover. Therefore, the meaning of FWI values differs from region to region, according to the specific characteristics of each region. In this work, a methodology is proposed for FWI calibration based on the number of fire occurrences and on the burnt area in the transboundary regions of Portugal and Spain, in order to assess fire risk from calibrated FWI values. As previously mentioned, cooperative firefighting operations require a common perception of the shared information. Therefore, a common classification of fire risk for fire events occurring in the transboundary strip is proposed, with the objective of harmonizing this type of information. This work is integrated in the ECHO project SpitFire - Spanish-Portuguese Meteorological Information System for Transboundary Operations in Forest Fires, which aims at developing a web platform for sharing information and decision-support tools to be used in international fire events involving Portugal and Spain.
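
One plausible reading of the calibration step, sketched below under stated assumptions, is to place region-specific FWI class breakpoints at quantiles of the FWI on historical fire days, weighted by burnt area, so that equally labelled days carry comparable observed fire activity on both sides of the border. The quantile levels and weighting are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def calibrate_fwi_classes(fwi_on_fire_days, burnt_area_ha,
                          quantiles=(0.5, 0.75, 0.9, 0.975)):
    """Breakpoints separating five risk classes for one region."""
    fwi = np.asarray(fwi_on_fire_days, dtype=float)
    weights = np.asarray(burnt_area_ha, dtype=float)
    order = np.argsort(fwi)
    cdf = np.cumsum(weights[order]) / weights.sum()  # burnt-area-weighted CDF
    idx = [min(int(np.searchsorted(cdf, q)), len(fwi) - 1) for q in quantiles]
    return [float(fwi[order][i]) for i in idx]

# Toy historical record (FWI on fire days, burnt area) for one border region.
fwi_days = [4, 8, 12, 15, 18, 22, 25, 28, 31, 35, 40, 48]
area_ha = [1, 2, 5, 3, 10, 40, 25, 80, 120, 300, 500, 900]
print(calibrate_fwi_classes(fwi_days, area_ha))
```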

Keywords: data harmonization, FWI, international collaboration, transboundary wildfires

Procedia PDF Downloads 230
182 Off-Body Sub-GHz Wireless Channel Characterization for Dairy Cows in Barns

Authors: Said Benaissa, David Plets, Emmeric Tanghe, Jens Trogh, Luc Martens, Leen Vandaele, Annelies Van Nuffel, Frank A. M. Tuyttens, Bart Sonck, Wout Joseph

Abstract:

Herd monitoring and management, in particular the detection of 'attention animals' that require care, treatment, or assistance, is crucial for the reproduction status, health, and overall well-being of dairy cows. On large farms, traditional methods based on direct observation or on the analysis of video recordings become labour-intensive and time-consuming. Automatic monitoring systems using sensors have therefore become increasingly important for continuously and accurately tracking the health status of dairy cows. Wireless sensor networks (WSNs) and the internet of things (IoT) can be used effectively for health tracking of dairy cows to facilitate herd management and enhance cow welfare. Since on-cow measuring devices are energy-constrained, a proper characterization of the off-body wireless channel between the on-cow sensor nodes and the back-end base station is required for a power-optimized deployment of these networks in barns. The aim of this study was to characterize the off-body wireless channel in an indoor (barn) environment at 868 MHz using LoRa nodes. LoRa is an emerging wireless technology mainly targeted at WSNs and IoT networks. Both large-scale fading (i.e., path loss) and temporal fading were investigated. The obtained path loss values as a function of transmitter-receiver separation were well fitted by a lognormal path loss model. The path loss showed an additional increase of 4 dB when the wireless node was actually worn by the cow. The temporal fading due to the movement of other cows was well described by Rician distributions with a K-factor of 8.5 dB. Based on this characterization, network planning and energy consumption optimization of the on-body wireless nodes can be performed, enabling the deployment of reliable dairy cow monitoring systems.
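
The lognormal (log-distance) model mentioned above takes the form PL(d) = PL(d0) + 10 n log10(d / d0) + X_sigma. A minimal least-squares fit is sketched below with invented measurement values; the 4 dB on-body effect would simply shift the fitted intercept PL(d0).

```python
import numpy as np

def fit_log_distance(d_m, pl_db, d0=1.0):
    """Least-squares fit of the log-distance path-loss model."""
    d = np.asarray(d_m, dtype=float)
    pl = np.asarray(pl_db, dtype=float)
    x = 10.0 * np.log10(d / d0)
    a = np.column_stack([np.ones_like(x), x])  # design matrix for [PL(d0), n]
    (pl0, n), *_ = np.linalg.lstsq(a, pl, rcond=None)
    sigma = float(np.std(pl - a @ np.array([pl0, n])))  # shadowing std dev (dB)
    return pl0, n, sigma

# Invented 868 MHz barn measurements: distance (m) vs. measured path loss (dB).
d = [2, 5, 10, 20, 40, 80]
pl = [51.0, 59.5, 66.0, 72.5, 79.0, 86.0]
print(fit_log_distance(d, pl))  # intercept, path-loss exponent, shadowing sigma
```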

Keywords: channel, channel modelling, cow monitoring, dairy cows, health monitoring, IoT, LoRa, off-body propagation, PLF, propagation

Procedia PDF Downloads 291
181 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets were sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 (anechoic), 1, 2, and 3 s) and four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios and recording distances, as well as under reverberant conditions with varying recording distances. LNCC showed performance as high as the state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, the trained Deep Chroma Extractor is expected to become more robust than with classical triangular filters, compensating for the degradation of the music signal and improving the accuracy of the chord recognition system.
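
To make the proposed filters concrete, the sketch below builds a triangular filter bank with quarter-tone spacing (24 filters per octave), each filter normalized to unit weight as a simple stand-in for local normalization. All parameter values are assumptions for illustration; the paper's exact LNQT normalization may differ.

```python
import numpy as np

def quarter_tone_filterbank(sr=22050, n_fft=4096, fmin=65.4, n_filters=96):
    """Triangular filters centred on quarter-tone frequencies (24 per octave)."""
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    centers = fmin * 2.0 ** (np.arange(n_filters + 2) / 24.0)
    bank = np.zeros((n_filters, len(freqs)))
    for k in range(n_filters):
        lo, mid, hi = centers[k], centers[k + 1], centers[k + 2]
        tri = np.maximum(0.0, np.minimum((freqs - lo) / (mid - lo),
                                         (hi - freqs) / (hi - mid)))
        total = tri.sum()
        bank[k] = tri / total if total > 0 else tri  # per-filter normalization
    return bank

fb = quarter_tone_filterbank()
print(fb.shape)  # (96, 2049): apply to |STFT| frames for a quarter-tone spectrogram
```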

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 209
180 Development of a Robot Assisted Centrifugal Casting Machine for Manufacturing Multi-Layer Journal Bearing and High-Tech Machine Components

Authors: Mohammad Syed Ali Molla, Mohammed Azim, Mohammad Esharuzzaman

Abstract:

The centrifugal casting machine is used to manufacture special machine components, such as the multi-layer journal bearings used in internal combustion engines, steam and gas turbines, and aircraft turbo engines, where isotropic properties and high precision are desired. Moreover, this machine can be used to manufacture thin-walled high-tech machine components such as cylinder liners and piston rings of IC engines, and other machine parts such as sleeves and bushes. Heavy-duty machine components like railway wheels can also be produced by centrifugal casting. Considerable technological development of the casting process is required to produce sound machine bodies and machine parts. Defects such as blowholes, surface roughness, and chilled surfaces are usually found in sand-cast machine parts, but these can be avoided in a centrifugal casting machine using a rotating metallic die. Moreover, die rotation, die temperature control, and good pouring practice contribute to the quality of the casting, because the soundness of a casting depends in large part on how the metal enters the mold or die and solidifies. Poor pouring practice leads to a variety of casting defects, such as temperature loss, low-quality castings, excessive turbulence, and overpouring. Besides this, handling molten metal is unsafe and dangerous for workers. To address all these problems, an automatic pouring device is needed. In this research work, a robot-assisted pouring device and a centrifugal casting machine were designed, developed, constructed, and tested experimentally, and both were found to work satisfactorily. The robot-assisted pouring device was further modified and developed for use in the actual metal casting process. Many adjustments and tests are required to control the system, which can ultimately be used to automate the centrifugal casting machine and produce high-tech machine parts with the desired precision.
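
For context on die rotation, horizontal centrifugal casting practice commonly sizes the spindle speed from the target gravity factor G and the mold bore D (in metres) via the textbook rule N ≈ 42.3·sqrt(G/D) rpm. The snippet below applies this general rule; it is not taken from the machine developed here.

```python
import math

def die_speed_rpm(g_factor: float, mold_diameter_m: float) -> float:
    """Spindle speed from the common rule N = 42.3 * sqrt(G / D)."""
    return 42.3 * math.sqrt(g_factor / mold_diameter_m)

# e.g., a journal-bearing die with a 120 mm bore cast at roughly 75 G
print(round(die_speed_rpm(75.0, 0.12)))  # ~1058 rpm
```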

Keywords: bearing, centrifugal casting, cylinder liners, robot

Procedia PDF Downloads 387
179 A Laser Instrument Rapid-E+ for Real-Time Measurements of Airborne Bioaerosols Such as Bacteria, Fungi, and Pollen

Authors: Minghui Zhang, Sirine Fkaier, Sabri Fernana, Svetlana Kiseleva, Denis Kiselev

Abstract:

The real-time identification of bacteria and fungi is difficult because they emit much weaker signals than pollen. In 2020, Plair developed Rapid-E+, which extends the ability of Rapid-E to detect smaller bioaerosols, such as bacteria and fungal spores with diameters down to 0.3 µm, while maintaining similar or even better capability for measuring large bioaerosols like pollen. Rapid-E+ enables simultaneous measurement of (1) time-resolved, polarization- and angle-dependent Mie scattering patterns, (2) fluorescence spectra resolved in 16 channels, (3) the fluorescence lifetime of individual particles, and (4) 2D Mie scattering images that give full information on particle morphology. The parameters of every single bioaerosol aspirated into the instrument are subsequently analysed by machine learning. First, pure species of microbes, e.g., Bacillus subtilis (a species of bacteria) and Penicillium chrysogenum (a species of fungal spores), were aerosolized in a bioaerosol chamber for Rapid-E+ training. Afterwards, we tested microbes at different concentrations. Several data analysis steps were used to classify and identify the microbes, with all single particles analysed through their light scattering and fluorescence parameters as follows. (1) Particles were passed through a smart filter block to remove non-microbes. (2) A classification algorithm verified, against the calibration data, that the filtered particles were microbes. (3) A user-defined probability threshold assigns each particle a probability of being a microbe, ranging from 0 to 100%. We demonstrate how Rapid-E+ simultaneously identified microbes based on the results for Bacillus subtilis (bacteria) and Penicillium chrysogenum (fungal spores). Using machine learning, Rapid-E+ achieved an identification precision of 99% against the background. Further classification yielded precisions of 87% and 89% for Bacillus subtilis and Penicillium chrysogenum, respectively. The developed algorithm was subsequently used to evaluate the performance of microbe classification and quantification in real time: the bacteria and fungi were aerosolized again in the chamber at different concentrations, and Rapid-E+ classified the different types of microbes and then quantified them in real time. Rapid-E+ can also identify pollen down to the species level, with similar or even better performance than the previous version (Rapid-E). Therefore, Rapid-E+ is an all-in-one instrument that classifies and quantifies not only pollen but also bacteria and fungi. Based on the machine learning platform, users can further develop proprietary algorithms for specific microbes (e.g., virus aerosols) and other aerosols (e.g., combustion-related particles containing polycyclic aromatic hydrocarbons).
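
The three analysis steps can be mocked up as a short pipeline: filter out non-microbes, classify against calibration data, and keep only predictions above the user-defined probability threshold. The features, classifier choice, and data below are assumptions for illustration, not Plair's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical calibration set: scattering + fluorescence features per particle.
X_cal = rng.normal(size=(400, 20))
y_cal = rng.integers(0, 2, size=400)  # 0: B. subtilis, 1: P. chrysogenum
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_cal, y_cal)

def classify_particles(X, is_microbe_mask, threshold=0.9):
    """Steps (1)-(3): smart filter, classification, probability threshold."""
    X = X[is_microbe_mask]                  # (1) discard non-microbe particles
    proba = clf.predict_proba(X)            # (2) classify against calibration
    confident = proba.max(axis=1) >= threshold
    return proba.argmax(axis=1)[confident]  # (3) keep confident labels only

X_new = rng.normal(size=(50, 20))
mask = rng.random(50) > 0.3  # pretend the smart filter flagged 70% as microbes
print(classify_particles(X_new, mask))
```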

Keywords: bioaerosols, laser-induced fluorescence, Mie-scattering, microorganisms

Procedia PDF Downloads 67