Search results for: GIS techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6516

5616 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better datasets. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist words, etc.), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual or contextual but agnostic of the conversation structure.
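
A minimal sketch of one way to make a classifier aware of the conversational hierarchy described above: each utterance in the thread is encoded separately and a recurrent layer summarizes the thread before the target tweet is classified. The encoder name, dimensions, and example texts are assumptions for illustration, not the authors' exact architecture.

```python
# Sketch of a hierarchy-aware toxicity classifier (illustrative, not the authors' exact model).
# Each utterance is encoded with a shared transformer; a GRU over utterance vectors models the
# conversational structure; the thread summary is concatenated with the target tweet encoding.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class HierarchicalToxicityClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", hidden=256, n_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)   # shared utterance encoder
        dim = self.encoder.config.hidden_size
        self.thread_rnn = nn.GRU(dim, hidden, batch_first=True)  # models the utterance hierarchy
        self.head = nn.Linear(hidden + dim, n_classes)           # context summary + target tweet

    def encode(self, texts, tokenizer, device):
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)
        return self.encoder(**batch).last_hidden_state[:, 0]     # [CLS] vector per utterance

    def forward(self, context_texts, target_text, tokenizer, device="cpu"):
        ctx = self.encode(context_texts, tokenizer, device).unsqueeze(0)  # (1, n_utterances, dim)
        _, h = self.thread_rnn(ctx)                                       # (1, 1, hidden)
        tgt = self.encode([target_text], tokenizer, device)               # (1, dim)
        return self.head(torch.cat([h.squeeze(0), tgt], dim=-1))          # class logits

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = HierarchicalToxicityClassifier()
logits = model(["previous tweet in the thread", "another reply"], "target tweet to classify", tokenizer)
print(logits.shape)  # torch.Size([1, 2])
```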

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 150
5615 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques

Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian

Abstract:

Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones require a very high investment of time and money, and the results are often not current. However, nowadays in many countries, crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and the urban congestion they cause. In this article, we identified the zones, roads and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, using data mining techniques with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as a grouping method. Finally, the results were visualized with the Geographic Information System QGIS.
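
A minimal sketch of the two-step clustering idea described above, with scikit-learn standing in for Weka: Gaussian-mixture EM models are compared by BIC to suggest a cluster count, then k-means groups the reports. The coordinates are synthetic placeholders, not the Waze data.

```python
# Illustrative sketch (scikit-learn stands in for Weka): EM/Gaussian mixtures suggest a cluster
# count via BIC, then k-means groups accident reports by location. Coordinates are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# synthetic (latitude, longitude) accident reports around three hypothetical hot spots
centers = [(19.43, -99.13), (19.36, -99.18), (19.50, -99.12)]
coords = np.vstack([rng.normal(c, 0.01, size=(200, 2)) for c in centers])

# 1) EM: fit Gaussian mixtures for several k and keep the one with the lowest BIC
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(coords).bic(coords)
        for k in range(2, 8)}
best_k = min(bics, key=bics.get)

# 2) k-means with the selected number of clusters
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(coords)
print(f"selected k = {best_k}; cluster sizes = {np.bincount(labels)}")
```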

Keywords: data mining, k-means, road traffic accidents, Waze, Weka

Procedia PDF Downloads 386
5614 A Review: Detection and Classification Defects on Banana and Apples by Computer Vision

Authors: Zahow Muoftah

Abstract:

Traditional manual visual grading of fruits has been one of the agricultural industry’s major challenges due to its laborious nature as well as inconsistency in the inspection and classification process. The main requirements for computer vision and visual processing are effective techniques for identifying defects and estimating defect areas. Automated defect detection using computer vision and machine learning has emerged as a promising area of research with a high and direct impact on the visual inspection domain. Grading, sorting, and disease detection are important factors in determining the quality of fruits after harvest. Many studies have used computer vision to evaluate the quality level of fruits during post-harvest, and many have been conducted to identify diseases and pests that affect the fruits of agricultural crops. However, most previous studies concentrated solely on the diagnosis of a lesion or disease. This study presents a comprehensive review of the detection and classification of pests, diseases, and defects of apple and banana fruits by computer vision; as a result, the current article includes research from these domains as well. Finally, various pattern recognition techniques for detecting apple and banana defects are discussed.

Keywords: computer vision, banana, apple, detection, classification

Procedia PDF Downloads 84
5613 Innovative Housing Construction Technologies in Slum Upgrading

Authors: Edmund M. Muthigani

Abstract:

Innovation in the construction industry has been characterized by new products and processes, especially in slum upgrading. The need for low-cost housing has motivated stakeholders to think outside the box in coming up with solutions. This paper explored innovative construction technologies that have been used in slum upgrading. The main objectives of the paper were to examine innovations in the housing construction sector and to show how incremental derived demand for decent housing has led to the adoption of innovative technologies and materials. A systematic literature review was used to review studies on innovative construction technologies in slum upgrading. The review revealed a slow process of innovation in the construction industry due to risk aversion and hesitance to adopt by firms and individuals. Low profit margins in low-cost housing and lack of sufficient political support remain the major hurdles to the adoption of innovative techniques that can actualize the right to decent housing. Conventional construction materials have remained unaffordable to many people, and this has denied them decent housing. This has necessitated the exploration of innovative materials to realize low-cost housing. Stabilized soil blocks and sisal-cement roofing blocks are some of the innovative construction materials that have been utilized in slum upgrading. These innovative materials have not only lowered the cost of production of building elements but also eased transport costs, as the raw materials to produce them are readily available in or near the slum sites. Despite their shortcomings in durability and compressive strength, they have proved worthwhile in slum upgrading. Production of innovative construction materials and the use of innovative techniques in slum upgrading have also provided employment to the locals.

Keywords: construction, housing, innovation, slum, technology

Procedia PDF Downloads 187
5612 Making of Alloy Steel by Direct Alloying with Mineral Oxides during Electro-Slag Remelting

Authors: Vishwas Goel, Kapil Surve, Somnath Basu

Abstract:

In-situ alloying of steel during the electro-slag remelting (ESR) process has already been achieved by the addition of the necessary ferroalloys into the electro-slag remelting mold. However, the use of commercially available ferroalloys during ESR processing is often found to be financially less favorable in comparison with conventional alloying techniques. Therefore, a process of alloying steel with elements like chromium and manganese using the electro-slag remelting route is under development without any ferrochrome addition. The process utilizes in-situ reduction of refined mineral chromite (Cr₂O₃) and the resultant enrichment of chromium in the steel ingot produced. It was established in the course of this work that this process can become more advantageous than conventional alloying techniques, both economically and environmentally, for applications which inherently demand the use of the electro-slag remelting process, such as the manufacturing of superalloys. A key advantage is the lower overall CO₂ footprint of this process relative to the conventional route of production, storage, and addition of ferrochrome. In addition to experimentally validating the feasibility of the envisaged reactions, a mathematical model to simulate the reduction of chromium (III) oxide and the transfer of chromium to the molten steel droplets was also developed as part of the current work. The developed model helps to correlate the amount of chromite input with the magnitude of chromium alloying that can be achieved through this process. Experiments are in progress to validate the predictions made by this model and to fine-tune its parameters.
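
As a very rough illustration of the input-to-alloying correlation mentioned above, the sketch below is a simple stoichiometric mass balance for Cr₂O₃ reduction with an assumed recovery efficiency. It is not the authors' kinetic model; the recovery value and tonnage are illustrative assumptions.

```python
# Simple stoichiometric mass balance (a sketch, not the authors' kinetic model): how much Cr2O3
# must be added to raise the chromium content of an ESR ingot, assuming a given recovery
# of reduced chromium to the steel rather than to the slag.
M_CR, M_O = 51.996, 15.999
CR_FRACTION_IN_CR2O3 = 2 * M_CR / (2 * M_CR + 3 * M_O)   # ~0.684 kg Cr per kg Cr2O3

def cr2o3_required(ingot_mass_kg, target_cr_wt_pct, recovery=0.8):
    """Mass of Cr2O3 (kg) needed to alloy the ingot to the target Cr content.

    `recovery` is the assumed fraction of reduced chromium that actually reports
    to the steel (illustrative value, to be replaced by model/experimental data).
    """
    cr_needed_kg = ingot_mass_kg * target_cr_wt_pct / 100.0
    return cr_needed_kg / (CR_FRACTION_IN_CR2O3 * recovery)

print(f"{cr2o3_required(1000, 1.5):.1f} kg Cr2O3 per tonne of steel for 1.5 wt% Cr at 80% recovery")
```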

Keywords: alloying element, chromite, electro-slag remelting, ferrochrome

Procedia PDF Downloads 205
5611 Optimum Design of Steel Space Frames by Hybrid Teaching-Learning Based Optimization and Harmony Search Algorithms

Authors: Alper Akin, Ibrahim Aydogdu

Abstract:

This study presents a hybrid metaheuristic algorithm to obtain optimum designs for steel space buildings. The optimum design problem of three-dimensional steel frames is mathematically formulated according to the provisions of LRFD-AISC (Load and Resistance Factor Design of the American Institute of Steel Construction). Design constraints such as the strength requirements of structural members, the displacement limitations, the inter-story drift and the other structural constraints are derived from the LRFD-AISC specification. In this study, a hybrid algorithm using the teaching-learning based optimization (TLBO) and harmony search (HS) algorithms is employed to solve the stated optimum design problem. These algorithms are two of the recent additions to the metaheuristic techniques of numerical optimization and have been efficient tools for solving discrete programming problems. Using these two algorithms in collaboration creates a more powerful tool in which each mitigates the other’s weaknesses. To demonstrate the powerful performance of the presented hybrid algorithm, the optimum design of a large-scale steel building is presented and the results are compared to previously obtained results available in the literature.
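
A minimal sketch of how a TLBO teacher phase and a Harmony Search improvisation step can be interleaved, shown on a toy unconstrained objective. The real design problem is discrete (AISC section lists) and heavily constrained; the objective, bounds, and HS parameters below are illustrative assumptions only.

```python
# Minimal TLBO + Harmony Search hybrid on a toy continuous objective (sum of squares).
# Only the interleaving of the two metaheuristics is illustrated, not the constrained
# discrete section-selection problem of the study.
import numpy as np

rng = np.random.default_rng(1)
objective = lambda x: np.sum(x**2)          # toy stand-in for structure weight
dim, pop_size, iters = 10, 20, 200
lo, hi = -5.0, 5.0
pop = rng.uniform(lo, hi, (pop_size, dim))

for _ in range(iters):
    fitness = np.array([objective(x) for x in pop])
    teacher, mean = pop[fitness.argmin()], pop.mean(axis=0)

    # TLBO teacher phase: move learners toward the current best design
    tf = rng.integers(1, 3)                                    # teaching factor (1 or 2)
    cand = np.clip(pop + rng.random((pop_size, dim)) * (teacher - tf * mean), lo, hi)
    improved = np.array([objective(c) for c in cand]) < fitness
    pop[improved] = cand[improved]

    # Harmony Search improvisation: build one new vector from the population "memory"
    hmcr, par, bw = 0.9, 0.3, 0.1
    new = np.where(rng.random(dim) < hmcr,
                   pop[rng.integers(pop_size, size=dim), np.arange(dim)],   # pick from memory
                   rng.uniform(lo, hi, dim))                                 # or a random value
    new += np.where(rng.random(dim) < par, rng.uniform(-bw, bw, dim), 0.0)   # pitch adjustment
    worst = np.array([objective(x) for x in pop]).argmax()
    if objective(new) < objective(pop[worst]):
        pop[worst] = np.clip(new, lo, hi)

print("best objective:", objective(min(pop, key=objective)))
```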

Keywords: optimum structural design, hybrid techniques, teaching-learning based optimization, harmony search algorithm, minimum weight, steel space frame

Procedia PDF Downloads 526
5610 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs

Authors: Taysir Soliman

Abstract:

One of the currently popular clustering techniques is Spectral Clustering (SC) because of its advantages over conventional approaches such as hierarchical clustering, k-means, and other techniques. However, one of the disadvantages of SC is its time-consuming process, because it requires computing the eigenvectors. In the past, to overcome this disadvantage, a number of attempts have been proposed, such as the Power Iteration Clustering (PIC) technique, which is a variant of SC; some of PIC's advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing the eigenvectors, and 3) computing a linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem, because it uses only one pseudo-eigenvector, which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome the inter-class collision problem of the PIC technique with the same efficiency as PIC. In this paper, we developed Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and lower time consumption than the other compared algorithms.
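
For orientation, a sketch of plain PIC: power iteration on the row-normalized affinity matrix is stopped early so that the resulting one-dimensional pseudo-eigenvector still separates clusters, and k-means is run on it. The deflation step of DPIC, the Spark parallelization, and sparse-matrix handling are not shown; the RBF affinity, gamma, and stopping tolerance are illustrative assumptions.

```python
# Sketch of basic Power Iteration Clustering (PIC): iterate v <- (D^-1 A) v with rescaling and
# stop when the per-iteration change is small (local convergence preserves cluster structure),
# then cluster the 1-D pseudo-eigenvector with k-means. Deflation (DPIC) is omitted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def pic_embedding(X, gamma=1.0, iters=100, eps=1e-9, tol=1e-7):
    A = rbf_kernel(X, gamma=gamma)                    # affinity matrix
    np.fill_diagonal(A, 0.0)
    W = A / (A.sum(axis=1, keepdims=True) + eps)      # row-normalized: D^-1 A
    v = np.full(A.shape[0], 1.0 / A.shape[0])         # initial vector
    for _ in range(iters):
        v_new = W @ v
        v_new /= np.abs(v_new).max() + eps            # rescale to avoid underflow
        if np.abs(v_new - v).max() < tol:             # early stop: keep the cluster-revealing embedding
            v = v_new
            break
        v = v_new
    return v.reshape(-1, 1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(pic_embedding(X, gamma=5.0))
print(np.bincount(labels))   # roughly 50 / 50 on this toy data
```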

Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache spark, large graph

Procedia PDF Downloads 173
5609 Rhythm-Reading Success Using Conversational Solfege

Authors: Kelly Jo Hollingsworth

Abstract:

Conversational Solfege, a research-based, 12-step music literacy instructional method using the sound-before-sight approach, was used to teach rhythm-reading to 128 second-grade students at a public school in the southeastern United States. For each step, multiple scripted techniques are supplied to teach each skill. Unit one was the focus of this study, which covers quarter note and barred eighth note rhythms. During regular weekly music instruction, students completed method steps one through five, which include aural discrimination, decoding familiar and unfamiliar rhythm patterns, and improvising rhythmic phrases using quarter notes and barred eighth notes. Intact classes were randomly assigned to two treatment groups for teaching steps six through eight, which were the visual presentation and identification of quarter notes and barred eighth notes, visually presenting and decoding familiar patterns, and visually presenting and decoding unfamiliar patterns using said notation. For three weeks, students practiced steps six through eight during regular weekly music class. One group spent five minutes of class time on technique work for steps six through eight, while the other group spent ten minutes of class time practicing the same techniques. A pretest and posttest were administered, and ANOVA results reveal that both the five-minute (p < .001) and ten-minute groups (p < .001) reached statistical significance, suggesting Conversational Solfege is an efficient, effective approach to teach rhythm-reading to second-grade students. After two weeks of no instruction, students were retested to measure retention. Using a repeated-measures ANOVA, both groups reached statistical significance (p < .001) on the second posttest, suggesting both the five-minute and ten-minute groups retained rhythm-reading skill after two weeks of no instruction. Statistical significance was not reached between groups (p = .252), suggesting five minutes is as effective as ten minutes of rhythm-reading practice using Conversational Solfege techniques. Future research includes replicating the study with other grades and units in the text.

Keywords: conversational solfege, length of instructional time, rhythm-reading, rhythm instruction

Procedia PDF Downloads 144
5608 Iterative Reconstruction Techniques as a Dose Reduction Tool in Pediatric Computed Tomography Imaging: A Phantom Study

Authors: Ajit Brindhaban

Abstract:

Background and Purpose: Computed Tomography (CT) scans have become the largest source of radiation in radiological imaging. The purpose of this study was to compare the quality of pediatric CT images reconstructed using Filtered Back Projection (FBP) with images reconstructed using different strengths of the Iterative Reconstruction (IR) technique, and to perform a feasibility study to assess the use of IR techniques as a dose reduction tool. Materials and Methods: An anthropomorphic phantom representing a 5-year-old child was scanned, in two stages, using a Siemens Somatom CT unit. In stage one, scans of the head, chest and abdomen were performed using standard protocols recommended by the scanner manufacturer. Images were reconstructed using FBP and 5 different strengths of IR. Contrast-to-Noise Ratios (CNR) were calculated from the average CT number and its standard deviation measured in regions of interest created in the lung, bone, and soft tissue regions of the phantom. The paired t-test and one-way ANOVA were used to compare the CNR from FBP images with IR images, at the p = 0.05 level. The lowest strength of IR that produced the highest CNR was identified. In the second stage, scans of the head were performed with decreased mA(s) values relative to the increase in CNR compared to the standard FBP protocol. CNR values in this stage were compared using the paired t-test at the p = 0.05 level. Results: Images reconstructed using the IR technique had higher CNR values (p < 0.01) in all regions compared to the FBP images, at all strengths of IR. The CNR increased with increasing IR strength up to strength 3 in the head and chest images; increases beyond this strength were insignificant. In abdomen images, CNR continued to increase up to strength 5. The results also indicated that IR techniques improve CNR by up to a factor of 1.5. Based on the CNR values at strength 3 of the IR images and the CNR values of the FBP images, a reduction in mA(s) of about 20% was identified. The images of the head acquired at 20% reduced mA(s) and reconstructed using IR at strength 3 had similar CNR to the FBP images at standard mA(s). In the head scans of the phantom used in this study, it was demonstrated that similar CNR can be achieved even when the mA(s) is reduced by about 20% if the IR technique with strength 3 is used for reconstruction. Conclusions: The IR technique produced better image quality at all strengths of IR in comparison to FBP. The IR technique can provide approximately 20% dose reduction in pediatric head CT while maintaining the same image quality as the FBP technique.
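
A small sketch of a CNR computation from ROI statistics. Several CNR definitions are in use; this one divides the difference of mean CT numbers by the noise of the reference region, which is a common choice but is an assumption here since the study's exact formula is not spelled out. The HU samples are synthetic.

```python
# Sketch of a contrast-to-noise ratio (CNR) computation from region-of-interest statistics.
# Definition used: |mean(tissue ROI) - mean(reference ROI)| / std(reference ROI).
import numpy as np

def cnr(roi_tissue, roi_reference):
    """CNR between a tissue ROI and a reference ROI (arrays of CT numbers in HU)."""
    roi_tissue, roi_reference = np.asarray(roi_tissue), np.asarray(roi_reference)
    return abs(roi_tissue.mean() - roi_reference.mean()) / roi_reference.std(ddof=1)

# Illustrative (not measured) HU samples: soft tissue vs. a uniform background region
soft_tissue = np.random.default_rng(0).normal(40, 8, 500)
background = np.random.default_rng(1).normal(0, 10, 500)
print(f"CNR = {cnr(soft_tissue, background):.2f}")
```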

Keywords: filtered back projection, image quality, iterative reconstruction, pediatric computed tomography imaging

Procedia PDF Downloads 130
5607 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatics techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) used for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes in the slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it has the greatest presence and changes constantly, both seasonally and daily, since it is affected by factors such as wind. One of the best-known indices to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher. Therefore, we have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) by using the GLI (Green Leaf Index) and ExG (Excess Green) indices, as well as the change to the Hue-Saturation-Value (HSV) color space, in which the H coordinate gives the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also tried to work with a Riegl VZ-400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, in the case of the HSV color space it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering, with some limitations.
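
A small sketch of the two RGB indices named above and a simple binary vegetation mask. The common forms ExG = 2g − r − b (on chromatic coordinates) and GLI = (2G − R − B)/(2G + R + B) are used; the thresholds and toy image are illustrative assumptions, and in practice they are tuned per site and lighting.

```python
# Sketch of RGB-based vegetation filtering with Excess Green (ExG) and Green Leaf Index (GLI).
# Thresholds are illustrative; real workflows tune them per site and lighting condition.
import numpy as np

def vegetation_mask(rgb, exg_thresh=0.05, gli_thresh=0.02):
    """rgb: float array (H, W, 3) with values in [0, 1]. Returns a boolean vegetation mask."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = R + G + B + 1e-9
    r, g, b = R / total, G / total, B / total          # chromatic coordinates
    exg = 2 * g - r - b                                 # Excess Green
    gli = (2 * G - R - B) / (2 * G + R + B + 1e-9)      # Green Leaf Index
    return (exg > exg_thresh) & (gli > gli_thresh)      # pixels flagged as vegetation

# toy 2x2 image: two green pixels, one grey and one brownish pixel
img = np.array([[[0.2, 0.6, 0.2], [0.5, 0.5, 0.5]],
                [[0.4, 0.35, 0.3], [0.3, 0.7, 0.25]]])
print(vegetation_mask(img))
```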

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 121
5606 Analytical Study and Conservation Processes of Scribe Box from Old Kingdom

Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr

Abstract:

The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written with a black pigment. The box was made using several panels assembled together by wooden dowels and secured with plant ropes. The entire box is covered with a red pigment. This study aims to use analytical techniques in order to identify and gain a deeper understanding of the box components. Moreover, the authors were particularly interested in using infrared reflectance transmission imaging (RTI-IR) to enhance the hidden inscriptions on the lid. The identification of wood species was also included in this study. Visual observation and assessment were carried out to understand the condition of the box, and 3D and 2D programs were used to illustrate the wood joint techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF) and Fourier Transform Infrared spectroscopy (FTIR) were used in this study in order to identify the wood species, the remains of insect bodies, the red pigment, the plant fibers and the previous conservation adhesives; the RTI-IR technique was also very effective in enhancing the hidden inscriptions. The analysis results proved that the wooden panels and dowels were Acacia nilotica and the wooden rail was Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was identified as cellulose nitrate. The historical study of the inscriptions proved that they are hieratic writings of a funerary text. After its transportation from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy in order to restore the object, including cleaning, consolidation of the friable pigments and writings, removal of the previous adhesive, and reassembly. The conservation processes applied were extremely effective, and the box is now ready for display or storage in the Grand Egyptian Museum.

Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation

Procedia PDF Downloads 258
5605 A Dynamic Solution Approach for Heart Disease Prediction

Authors: Walid Moudani

Abstract:

The healthcare environment is generally perceived as being information-rich yet knowledge-poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in data. In fact, valuable knowledge can be discovered from the application of data mining techniques in healthcare systems. In this study, a proficient methodology is presented for the extraction of significant patterns from coronary heart disease data warehouses for heart attack prediction, which unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we propose to dynamically enumerate the optimal subsets of reduced features of high interest by using the rough sets technique combined with dynamic programming. We then validate the classification using the Random Forest (RF) decision-tree ensemble to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions based on the medical profiles of patients. Moreover, the experts’ knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system was developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.
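
A minimal sketch of the overall pipeline shape (reduced feature subset feeding a Random Forest classifier). Scikit-learn's mutual-information selector stands in for the rough-sets/dynamic-programming reduct enumeration, which is not reproduced here, and the data are synthetic rather than the 525-patient medical profiles.

```python
# Illustrative sketch: a reduced feature subset feeding a Random Forest classifier.
# SelectKBest stands in for the rough-sets + dynamic-programming feature reduction.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# synthetic stand-in for a 525-patient medical-profile dataset
X, y = make_classification(n_samples=525, n_features=30, n_informative=8, random_state=0)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=8),                # keep a small "reduct" of features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```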

Keywords: multi-classifier decisions tree, features reduction, dynamic programming, rough sets

Procedia PDF Downloads 395
5604 Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

Neural network approaches are machine learning methods used in many domains, such as healthcare and cyber security, and are best known for dealing with image datasets. While training with images, several fundamental mathematical operations are carried out in the neural network. These operations include a number of algebraic and mathematical functions, including derivatives, convolutions, and matrix inversion and transposition. Such operations require higher processing power than is typically needed for ordinary computer usage. The Central Processing Unit (CPU) is not appropriate for datasets of large images, as it is built for serial processing, while the Graphics Processing Unit (GPU) has parallel processing capabilities and, therefore, higher speed. This paper uses advanced neural network techniques such as VGG16, ResNet50, DenseNet, InceptionV3, Xception, MobileNet, XGBOOST-VGG16, and our proposed models to compare CPU and GPU resources. A system for classifying autism using face images of autistic and non-autistic children was used to compare performance during testing. We used evaluation metrics such as accuracy, F1 score, precision, recall, and execution time. It was observed that the GPU ran faster than the CPU in all tests performed. Moreover, the performance of the neural network models in terms of accuracy increased on the GPU compared to the CPU.
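
A small sketch of a VGG16 transfer-learning classifier and a CPU-vs-GPU epoch timing comparison. The frozen-base head, batch size, and random stand-in images are assumptions for illustration; the study trained on face images of autistic and non-autistic children.

```python
# Sketch of VGG16 transfer learning plus a CPU vs. GPU training-time comparison.
import time
import numpy as np
import tensorflow as tf

def build_model():
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False                                 # freeze convolutional features
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),    # binary: autistic vs. non-autistic
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

x = np.random.rand(64, 224, 224, 3).astype("float32")      # stand-in images
y = np.random.randint(0, 2, 64)

def timed_fit(device):
    with tf.device(device):
        model = build_model()
        start = time.perf_counter()
        model.fit(x, y, epochs=1, batch_size=8, verbose=0)
        return time.perf_counter() - start

print("CPU epoch time (s):", timed_fit("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU epoch time (s):", timed_fit("/GPU:0"))
```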

Keywords: autism disease, neural network, CPU, GPU, transfer learning

Procedia PDF Downloads 96
5603 Effect of Plasma Treatment on UV Protection Properties of Fabrics

Authors: Sheila Shahidi

Abstract:

UV protection by fabrics has recently become a focus of great interest, particularly in connection with environmental degradation and ozone layer depletion. Fabrics provide simple and convenient protection against UV radiation (UVR), but not all fabrics offer sufficient UV protection. To describe the degree of UVR protection offered by clothing materials, the ultraviolet protection factor (UPF) is commonly used. UV-protective fabric can be generated by the application of a chemical finish using normal wet-processing methodologies. However, traditional wet-processing techniques are known to consume large quantities of water and energy and may lead to adverse alterations of the bulk properties of the substrate. Recently, the use of plasmas to generate physicochemical surface modifications of textile substrates has become an intriguing approach to replace or enhance conventional wet-processing techniques. In this research work, the effect of plasma treatment on the UV protection properties of fabrics was investigated. DC magnetron sputtering was used, and plasma parameters such as gas type, electrodes, time of exposure, power, etc. were studied. The morphological and chemical properties of the samples were analyzed using Scanning Electron Microscopy (SEM) and Fourier Transform Infrared Spectroscopy (FTIR), respectively. The transmittance and UPF values of the original and plasma-treated samples were measured using a Shimadzu UV3101 PC (UV–Vis–NIR scanning spectrophotometer, 190–2,100 nm range). It was concluded that plasma, being an eco-friendly, cost-effective and dry technique, is already used in different branches of industry and will conquer the textile industry in the near future. It is also a promising method for the preparation of UV-protective textiles.
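
For context, a sketch of how UPF is commonly calculated from a measured transmittance spectrum: UPF = Σ EλSλΔλ / Σ EλSλTλΔλ, with E the erythemal action spectrum, S the solar spectral irradiance and T the fabric transmittance. The solar-irradiance shape and the flat 5% transmittance below are placeholders; real calculations use tabulated spectra over 290-400 nm.

```python
# Sketch of the standard UPF calculation from a transmittance spectrum.
# UPF = sum(E*S*dλ) / sum(E*S*T*dλ); spectra here are placeholders, not measured data.
import numpy as np

def erythemal_weight(lam):
    """CIE erythemal action spectrum (piecewise form)."""
    if lam <= 298:
        return 1.0
    if lam <= 328:
        return 10 ** (0.094 * (298 - lam))
    return 10 ** (0.015 * (140 - lam))

wavelengths = np.arange(290, 401, 5)                        # nm, 5 nm steps
E = np.array([erythemal_weight(l) for l in wavelengths])    # erythemal weighting
S = np.linspace(0.1, 1.0, wavelengths.size)                 # placeholder solar irradiance shape
T = np.full(wavelengths.size, 0.05)                         # placeholder 5% fabric transmittance

def upf(E, S, T, d_lambda=5.0):
    return np.sum(E * S * d_lambda) / np.sum(E * S * T * d_lambda)

print(f"UPF ≈ {upf(E, S, T):.1f}")                          # = 1/0.05 = 20 for flat transmittance
```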

Keywords: fabric, plasma, textile, UV protection

Procedia PDF Downloads 505
5602 Dynamic Modeling of the Exchange Rate in Tunisia: Theoretical and Empirical Study

Authors: Chokri Slim

Abstract:

The relative failure of simultaneous equation models in the seventies led researchers to turn to other approaches that take into account the dynamics of economic and financial systems. In this paper, we use an approach based on vector autoregressive (VAR) models, which have been widely used in recent years. Their popularity is due to their flexible nature and ease of use in producing models with useful descriptive characteristics; it is also easy to use them to test economic hypotheses. Standard econometric techniques assume that the series studied are stable over time (the stationarity hypothesis). Most economic series do not satisfy this hypothesis, which requires, when one wishes to study the relationships that bind them, the implementation of specific techniques. Cointegration, which characterizes non-stationary (integrated) series whose linear combination is stationary, will also be presented in this paper. Since the work of Johansen, this approach has generally been presented as part of a multivariate analysis, used to specify stable long-term relationships while at the same time analyzing the short-term dynamics of the variables considered. In the empirical part, we have applied these concepts to study the dynamics of the exchange rate in Tunisia, which is one of the most important economic policy variables for a country open to the outside world. According to the results of the empirical study using the cointegration method, there is a cointegration relationship between the exchange rate and its determinants. This relationship shows that these variables have a significant influence in determining the exchange rate in Tunisia.
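
A minimal sketch of the Johansen cointegration test followed by a VECM fit, using statsmodels on synthetic cointegrated series (a random-walk "fundamental" and an exchange rate tied to it). The variable names, lag order, and deterministic specification are illustrative assumptions; the Tunisian data are not reproduced.

```python
# Sketch: Johansen trace test, then a VECM fit if a cointegration rank of 1 is accepted.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 300
fundamental = np.cumsum(rng.normal(size=n))                        # I(1) driver
exchange_rate = 0.8 * fundamental + rng.normal(scale=0.5, size=n)  # cointegrated with it
data = pd.DataFrame({"exchange_rate": exchange_rate, "fundamental": fundamental})

# Johansen trace test: compare each trace statistic with its 5% critical value
result = coint_johansen(data, det_order=0, k_ar_diff=1)
for r, (stat, crit) in enumerate(zip(result.lr1, result.cvt[:, 1])):
    print(f"rank <= {r}: trace stat {stat:.2f} vs 5% critical value {crit:.2f}")

# Fit the corresponding VECM and inspect the long-run (beta) relation
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("long-run cointegrating vector (beta):\n", vecm.beta)
```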

Keywords: stationarity, cointegration, dynamic models, causality, VECM models

Procedia PDF Downloads 341
5601 Modern Scotland Yard: Improving Surveillance Policies Using Adversarial Agent-Based Modelling and Reinforcement Learning

Authors: Olaf Visker, Arnout De Vries, Lambert Schomaker

Abstract:

Predictive policing refers to the usage of analytical techniques to identify potential criminal activity and has been widely implemented by various police departments. Being a relatively new area of research, there are, to the authors’ knowledge, no tried-and-true methods, and existing approaches still exhibit a variety of potential problems. One of those problems is closely related to the lack of understanding of how acting on these predictions influences crime itself. The goal of law enforcement is ultimately crime reduction; as such, a policy needs to be established that best facilitates this goal. This research aims to find such a policy by using adversarial agent-based modeling in combination with modern reinforcement learning techniques. We present baseline models for both law enforcement and criminal agents and compare their performance to that of their respective reinforcement learning models. The experiments show that our smart law enforcement model is capable of reducing crime by making more deliberate choices regarding the locations of potential criminal activity. Furthermore, it is shown that the smart criminal model presents behavior consistent with popular crime theories and outperforms the baseline model in terms of crimes committed and time to capture. It does, however, still suffer from the difficulties of capturing long-term rewards and learning how to handle multiple opposing goals.
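
To illustrate the reinforcement-learning side only, a minimal tabular Q-learning loop for a single patrol agent choosing among city zones with different crime intensities. The study itself uses a much richer adversarial agent-based model with a learning criminal agent; zone counts, crime rates, and the reward scheme below are invented for the sketch.

```python
# Minimal tabular Q-learning sketch: a patrol agent learns which zones to visit when crime
# intensity differs by zone. This only illustrates the RL loop, not the adversarial ABM.
import numpy as np

rng = np.random.default_rng(0)
n_zones = 5
crime_rate = np.array([0.05, 0.4, 0.1, 0.8, 0.2])    # chance of a crime in a zone per step

q = np.zeros((n_zones, n_zones))                      # state = current zone, action = next zone
alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for step in range(20_000):
    # epsilon-greedy action selection
    action = rng.integers(n_zones) if rng.random() < eps else int(q[state].argmax())
    crime_here = rng.random() < crime_rate[action]
    reward = 1.0 if crime_here else -0.05             # reward for being where crime happens
    q[state, action] += alpha * (reward + gamma * q[action].max() - q[state, action])
    state = action

print("preferred next zone from each zone:", q.argmax(axis=1))  # gravitates to high-crime zones
```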

Keywords: adversarial, agent based modelling, predictive policing, reinforcement learning

Procedia PDF Downloads 137
5600 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple’s Index, Myer’s Blended Index, and Age-Sex Accuracy Index

Authors: A. Sathiya Susuman, Hamisi F. Hamisi

Abstract:

Background: Many socio-economic and demographic data are age-sex attributed. However, a variety of irregularities and misstatements are noted with respect to age-related data, and less so for sex data, because of the clear biological differences between the genders. Noting the misstatement/misreporting of age data despite its significant importance in demographic and epidemiological studies, this study aims at assessing the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple’s index, Myers’ blended index, and the age-sex accuracy index. Results: The recorded Whipple’s index for both sexes was 154.43; males had the lowest index of about 152.65, while females had the highest index of about 156.07. For Myers’ blended index, the preferences were for digits ‘0’ and ‘5’, while the avoidances were of digits ‘1’ and ‘3’ for both sexes. Finally, the age-sex accuracy index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females, respectively. Conclusion: The evaluation of the 2012 PHC data using these demographic techniques has shown the data to be inaccurate as a result of systematic heaping and digit preference/avoidance. Thus, innovative methods in data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
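
A short sketch of Whipple's index, the first of the three measures used above: the reported population at ages 25, 30, ..., 60 is compared with one fifth of the total population aged 23-62, so an index of 100 means no heaping on terminal digits 0 and 5. The synthetic population and the 50% heaping factor are illustrative; Myers' and the age-sex accuracy indices are not shown.

```python
# Sketch of Whipple's index for digit preference on terminal digits 0 and 5 (ages 23-62).
import numpy as np

def whipples_index(pop_by_age):
    """pop_by_age: array-like indexed by single year of age (index 0 = age 0)."""
    pop = np.asarray(pop_by_age, dtype=float)
    ages = np.arange(23, 63)                                # ages 23-62 inclusive
    heaped = sum(pop[a] for a in ages if a % 5 == 0)        # ages ending in 0 or 5
    return 100.0 * heaped / (0.2 * pop[ages].sum())

# synthetic single-year population with artificial heaping on multiples of five
rng = np.random.default_rng(0)
pop = rng.integers(9_000, 11_000, size=100).astype(float)
pop[np.arange(25, 61, 5)] *= 1.5                            # exaggerate reporting at 25, 30, ...
print(f"Whipple's index = {whipples_index(pop):.1f}")       # well above 100 -> heaping present
```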

Keywords: age heaping, digit preference/avoidance, summary indices, Whipple’s index, Myer’s index, age-sex accuracy index

Procedia PDF Downloads 458
5599 Energy Conservation in Heat Exchangers

Authors: Nadia Allouache

Abstract:

Energy conservation is one of the major concerns in the modern high-tech era due to the limited amount of energy resources and the increasing cost of energy. Predicting an efficient use of energy in thermal systems like heat exchangers can only be achieved if the second law of thermodynamics is accounted for. The performance of heat exchangers can be substantially improved by many passive heat transfer augmentation techniques. These techniques improve the heat transfer rate and increase the exchange surface, but on the other side, they also increase the friction factor associated with the flow. This raises the question of how to employ these passive techniques in order to minimize the destruction of useful energy. The objective of the present study is to use a porous substrate attached to the walls as a passive enhancement technique in heat exchangers and to find the compromise between the hydrodynamic and thermal performances under turbulent flow conditions by using a second law approach. A modified k-ε model is used to simulate the turbulent flow in the porous medium, and the turbulent shear flow is accounted for in the entropy generation equation. A numerical model based on the finite volume method is employed for discretizing the governing equations. The effects of several parameters are investigated, such as the porous substrate properties and the flow conditions. Results show that under certain conditions of the porous layer thickness, its permeability, and its effective thermal conductivity, the minimum rate of entropy production is obtained.
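
For reference, a commonly used laminar-flow form of the local volumetric entropy generation rate in a fluid-saturated porous layer is sketched below (heat-transfer, viscous, and Darcy dissipation terms). This is a standard textbook form given here for orientation; the study's turbulent formulation adds contributions from the turbulence quantities of the modified k-ε model that are not shown.

```latex
% Local volumetric entropy generation rate in a fluid-saturated porous layer (standard form;
% the turbulent terms used in the study are not included here).
\begin{equation}
  \dot{S}'''_{\mathrm{gen}}
  = \underbrace{\frac{k_{\mathrm{eff}}}{T^{2}}\,(\nabla T)^{2}}_{\text{heat transfer irreversibility}}
  + \underbrace{\frac{\mu}{T}\,\Phi}_{\text{viscous dissipation}}
  + \underbrace{\frac{\mu}{K\,T}\,\lvert \mathbf{u} \rvert^{2}}_{\text{Darcy (porous) dissipation}}
\end{equation}
```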

Keywords: second law approach, annular heat exchanger, turbulent flow, porous medium, modified model, numerical analysis

Procedia PDF Downloads 267
5598 Development of a New Characterization Method to Analyse Cypermethrin Penetration in Wood Material by Immunolabelling

Authors: Sandra Tapin-Lingua, Katia Ruel, Jean-Paul Joseleau, Daouia Messaoudi, Olivier Fahy, Michel Petit-Conil

Abstract:

The preservative efficacy of organic biocides is strongly related to their capacity for penetration and retention within wood tissues. Specific detection of the pyrethroid insecticide is currently obtained after extraction followed by chemical analysis using chromatography techniques. However, visualizing the insecticide molecule within the wood structure requires specific probes together with microscopy techniques. Therefore, the aim of the present work was to apply a new methodology based on antibody-antigen recognition and electron microscopy to directly visualize pyrethroids in the wood material. A polyclonal antibody directed against cypermethrin was developed and applied to Pinus sylvestris wood samples coated with technical cypermethrin. The antibody was tested on impregnated wood, and the specific recognition of the insecticide was visualized by transmission electron microscopy (TEM). The immunogold-TEM assay evidenced the capacity of the synthetic biocide to penetrate the wood. The depth of penetration was measured on sections taken at increasing distances from the coated surface of the wood. Such results correlated with chemical analyses carried out by GC-ECD after extraction. In addition, the immuno-TEM investigation allowed visualizing, for the first time at the ultrastructural scale of resolution, that cypermethrin was able to diffuse within the secondary wood cell walls.

Keywords: cypermethrin, insecticide, wood penetration, wood retention, immuno-transmission electron microscopy, polyclonal antibody

Procedia PDF Downloads 395
5597 Redefining Urban Landfills – Transformation of a Sanitary Landfill in Indian Cities

Authors: N. L. Divya Gayatri

Abstract:

In India, over 377 million urban people generate 62 million tons of municipal solid waste per annum. Forty-three million tons are collected, 11.9 million tons are treated and 31 million tons are dumped in landfill sites. The study aims to develop an overall understanding of the working and functioning of a sanitary landfill from the siting to the closure stage, to identify various landscape design techniques that can be implemented in a landfill site, and to come up with a set of guidelines by analyzing the existing policies and guidelines pertaining to landfills. The constituents of municipal solid waste, methods of landfilling, issues, impacts, mitigation strategies, landscape design strategies, design approaches towards a landfill, infrastructure requirements and end-use opportunities are discussed. The objective is to study methods for preventing ecological and environmental degradation, compare various remediation techniques, study issues in landfill sites in India, analyze scope and opportunities, and explore various landscape design strategies. An understanding of the function of landfills with respect to municipal solid waste and landscaping is conveyed through this study. The study is limited to the landscape design factors in landfill design guidelines and the policies mentioned with regard to the issues and impacts specific to the Indian context.

Keywords: sanitary landfill landscaping, environmental impact, municipal solid waste, guidelines, landscape design strategies, landscape design approaches

Procedia PDF Downloads 142
5596 Quantifying the Methods of Monitoring Timers in Electric Water Heater for Grid Balancing on Demand-Side Management: A Systematic Mapping Review

Authors: Yamamah Abdulrazaq, Lahieb A. Abrahim, Samuel E. Davies, Iain Shewring

Abstract:

An electric water heater (EWH) is a powerful appliance that uses electricity in residential, commercial, and industrial settings, and the ability to control these heaters properly will result in cost savings and the prevention of blackouts on the national grid. This article discusses the usage of timers in EWH control strategies for demand-side management (DSM). To the authors' knowledge, no systematic mapping review focusing on the utilisation of EWH control strategies in DSM has yet been conducted. Consequently, the purpose of this research is to identify and examine the main papers exploring EWH procedures in DSM by quantifying and categorising information with regard to publication year and source, kind of methods, and source of data for monitoring control techniques. In order to answer the research questions, a total of 31 publications published between 1999 and 2023 were selected based on specific inclusion and exclusion criteria. The data indicate that direct load control (DLC) has been somewhat more prevalent than indirect load control (ILC). Additionally, the mixed approach is much less common than the other techniques, and the proportion of real-time data (RTD) to non-real-time data (NRTD) is about equal.

Keywords: demand side management, direct load control, electric water heater, indirect load control, non real-time data, real-time data

Procedia PDF Downloads 65
5595 Nondestructive Evaluation of Hidden Delamination in Glass Fiber Composite Using Terahertz Spectroscopy

Authors: Chung-Hyeon Ryu, Do-Hyoung Kim, Hak-Sung Kim

Abstract:

As the use of composites has increased, methods for detecting hidden damage that affects the performance of the composite have become important. Terahertz (THz) spectroscopy has been assessed as one of the new powerful nondestructive evaluation (NDE) techniques for fiber reinforced composite structures because it has many advantages which can overcome the limitations of conventional NDE techniques such as X-rays or ultrasound. The THz wave offers noninvasive, noncontact and nonionizing methods for evaluating composite damage, and it also gives a broad range of information about the material properties. In addition, it enables the detection of multiple delaminations in various nonmetallic materials. In this study, a pulsed THz spectroscopy imaging system was devised and used for detecting and evaluating hidden delamination in glass fiber reinforced plastic (GFRP) composite laminates. The interaction between THz waves and the GFRP composite was analyzed with respect to the type of delamination, including thickness, size and the number of overlaps among multiple delaminations in the through-thickness direction. Both transmission and reflection configurations were used for the evaluation of hidden delaminations, and THz wave propagation through the delaminations is also discussed. From these results, various hidden delaminations inside the GFRP composite were successfully detected using the time-domain THz spectroscopy imaging system and compared to the results of C-scan inspection. It is expected that the THz NDE technique will be widely used to evaluate the reliability of composite structures.

Keywords: terahertz, delamination, glass fiber reinforced plastic composites, terahertz spectroscopy

Procedia PDF Downloads 574
5594 Torsional Behavior of Reinforced Concrete (RC) Beams Strengthened by Fiber Reinforced Cementitious Materials– a Review

Authors: Sifatullah Bahij, Safiullah Omary, Francoise Feugeas, Amanullah Faqiri

Abstract:

Reinforced concrete (RC) is a commonly used material in the construction sector due to its low cost and durability, and it allows architects and designers to construct structural members with different shapes and finishes. Usually, RC members are designed to sustain service loads efficiently without any damage. However, because of faults in the design phase, overloading, material deficiencies, and environmental effects, most structural elements will require maintenance and repair over their lifetime. Therefore, strengthening and repair of deteriorated and/or existing RC structures are very important to extend their life cycle. Various techniques exist to retrofit and strengthen RC structural elements, such as steel plate bonding, external pre-stressing, section enlargement, fiber reinforced polymer (FRP) wrapping, etc. Although these configurations can successfully improve the load-bearing capacity of the beams, they are still prone to corrosion damage, which results in failure of the strengthened elements. Therefore, many researchers have used fiber reinforced cementitious materials due to their low cost, corrosion resistance, and the resulting improvement in tensile and fatigue behavior. Various types of cementitious materials have been used to strengthen or repair structural elements. This paper summarizes data from previously published research concerning the torsional behavior of RC beams strengthened with various types of cementitious materials.

Keywords: reinforced concrete beams, strengthening techniques, cementitious materials, torsional strength, twisting angle

Procedia PDF Downloads 106
5593 Fabrication of Textile-Based Radio Frequency Metasurfaces

Authors: Adria Kajenski, Guinevere Strack, Edward Kingsley, Shahriar Khushrushahi, Alkim Akyurtlu

Abstract:

Radio Frequency (RF) metasurfaces are arrangements of subwavelength elements interacting with electromagnetic radiation. These arrangements affect the polarization state, amplitude, and phase of impinging radio waves; for example, metasurface designs are used to produce functional passband and stopband filters. Recent advances in additive manufacturing techniques have enabled the low-cost, rapid fabrication of ultra-thin metasurface elements on flexible substrates such as plastic films, paper, and textiles. Furthermore, scalable manufacturing processes promote the integration of fabric-based RF metasurfaces into the market of sensors and devices within the Internet of Things (IoT). The design and fabrication of metasurfaces on textiles require a multidisciplinary team with expertise in i) textile and materials science, ii) metasurface design and simulation, and iii) metasurface fabrication and testing. In this presentation, we will discuss RF metasurfaces on fabric with an emphasis on how the materials, including the fabric and inks, along with the fabrication techniques, affect the RF performance. We printed metasurfaces using a direct-write approach onto various woven and non-woven fabrics, as well as on fabrics coated with either thermoplastic or thermoset coatings. Our team also performed a range of tests on the printed structures, covering different inks and their curing parameters, wash durability, abrasion resistance, and RF performance over time.

Keywords: electronic textiles, metasurface, printed electronics, flexible

Procedia PDF Downloads 179
5592 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.

Keywords: earthquake, out-of-sequence thrust, disaster, human life

Procedia PDF Downloads 59
5591 Isolation, Preparation and Biological Properties of Soybean-Flaxseed Protein Co-Precipitates

Authors: Muhammad H. Alu’datt, Inteaz Alli

Abstract:

This study was conducted to prepare and evaluate the biological properties of protein co-precipitates from flaxseed and soybean. Protein was prepared by NaOH extraction through the mixing of soybean flour (Sf) and flaxseed flour (Ff) or mixtures of soybean extract (Se) and flaxseed extract (Fe). The protein co-precipitates were precipitated by isoelectric (IEP) and isoelectric-heating (IEPH) co-precipitation techniques. The effects of the extraction and co-precipitation techniques on co-precipitate yield were investigated. Native-PAGE and SDS-PAGE were used to study the molecular characterization. The content and antioxidant activity of the extracted free and bound phenolic compounds were evaluated for the protein co-precipitates. Removal of free and bound phenolic compounds from the protein co-precipitates showed little effect on the electrophoretic behavior of the proteins or the protein subunits of the co-precipitates. Results showed that the highest protein content and yield were obtained for the Sf-Ff/IEP co-precipitate, with values of 53.28 and 25.58%, respectively, as compared to the protein isolates and the other co-precipitates. Results revealed that Sf-Ff/IEP showed a higher content of bound phenolic compounds (53.49% of the total phenolic content) as compared to free phenolic compounds (46.51% of the total phenolic content). The antioxidant activities of the bound phenolic compounds extracted, with and without heat treatment, from Sf-Ff/IEPH were higher than those of the free phenolic compounds extracted from the other protein co-precipitates (29.68 and 22.84%, respectively).

Keywords: antioxidant, phenol, protein co-precipitate, yield

Procedia PDF Downloads 212
5590 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk

Authors: Yilin Liao, Hewen Li, Paula McConvey

Abstract:

Speed skaters often face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research paper explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity and to develop a fast concussion prediction tool using artificial neural networks, in order to reduce the risk of treatment delays for injured skaters. The primary data is collected through virtual tests and physical experiments designed to simulate skater-mat impacts. It is then analyzed to identify patterns and correlations; finally, it is used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool by employing machine learning strategies contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.
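
A minimal sketch of a small feed-forward network predicting concussion risk from skater, mat, and impact-position features. The feature names (mass, impact speed, mat stiffness, head-first flag), the synthetic labels, and the network size are assumptions for illustration, not the study's data or trained model.

```python
# Sketch: a small MLP classifies impact cases into higher/lower concussion risk.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
# hypothetical features: skater mass (kg), impact speed (m/s), mat stiffness (kN/m), head-first flag
X = np.column_stack([
    rng.normal(70, 10, n),
    rng.uniform(5, 15, n),
    rng.uniform(20, 80, n),
    rng.integers(0, 2, n),
])
# synthetic label: higher speed, stiffer mat, and head-first impacts raise the assumed risk
risk_score = 0.08 * X[:, 1] + 0.01 * X[:, 2] + 0.6 * X[:, 3] + rng.normal(0, 0.3, n)
y = (risk_score > np.median(risk_score)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```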

Keywords: artificial neural networks, concussion, machine learning, impact, speed skater

Procedia PDF Downloads 76
5589 Electrodeposition and Selenization of Cuin Alloys for the Synthesis of Photoactive Cu2in1-X Gax Se2 (Cigs) Thin Films

Authors: Mohamed Benaicha, Mahdi Allam

Abstract:

A new two-stage electrochemical process, as a safe, large-area and low-cost technique for the production of semiconducting CuInSe2 (CIS) thin films, is studied. CuIn precursors were first potentiostatically electrodeposited onto molybdenum substrates from an acidic thiocyanate electrolyte. In a second stage, the prepared metallic CuIn layers were used as substrates in the selenium electrochemical deposition system and subjected to a thermal treatment in a vacuum atmosphere, to eliminate binary phase formation by reaction of the Cu2-xSe and InxSey selenides, leading to the formation of the CuInSe2 thin film. Electrochemical selenization from an aqueous electrolyte is introduced as an alternative to the toxic and hazardous H2Se or Se vapor phase selenization used in physical techniques. In this study, the influence of film deposition parameters such as bath composition, temperature and potential on film properties was examined. The electrochemical, morphological, structural and compositional properties of the electrodeposited thin films were characterized using various techniques. Results of Cyclic and Stripping-Cyclic Voltammetry (CV, SCV), Scanning Electron Microscopy (SEM) and Energy Dispersive X-Ray microanalysis (EDX) investigations revealed good reproducibility and homogeneity of the film composition. Thereby, optimal technological parameters for the electrochemical production of CuIn and Se as precursors for CuInSe2 thin layers are determined.

Keywords: photovoltaic, CIGS, copper alloys, electrodeposition, thin films

Procedia PDF Downloads 445
5588 Optimization of Hot Metal Charging Circuit in a Steel Melting Shop Using Industrial Engineering Techniques for Achieving Manufacturing Excellence

Authors: N. Singh, A. Khullar, R. Shrivastava, I. Singh, A. S. Kumar

Abstract:

Steel forms the basis of any modern society and is essential to economic growth. India’s annual crude steel production has seen a consistent increase over the past years and is poised to grow to 300 million tons per annum by 2030-31 from the current level of 110-120 million tons per annum. The steel industry is highly capital-intensive, and to remain competitive, it is imperative that it invests in operational excellence. Due to the inherent nature of the industry, there is a large amount of variability in its supply chain, both internally and externally. The production and productivity of a steel plant are greatly affected by the bottlenecks present in material flow logistics. The internal logistics, consisting of the transport of liquid metal within a steel melting shop (SMS), present an opportunity to increase throughput with marginal capital investment. The study was carried out at one of the three SMSs of an integrated steel plant located in the eastern part of India. The objective of this study was to identify means to optimize SMS hot metal logistics through the application of industrial engineering techniques. The study also covered the identification of non-value-added activities and proposed methods to eliminate delays and improve the throughput of the SMS.

Keywords: optimization, steel making, supply chain, throughput enhancement, workforce productivity

Procedia PDF Downloads 101
5587 Reverse Engineering of a Secondary Structure of a Helicopter: A Study Case

Authors: Jose Daniel Giraldo Arias, Camilo Rojas Gomez, David Villegas Delgado, Gullermo Idarraga Alarcon, Juan Meza Meza

Abstract:

Reverse engineering processes are widely used in industry with the main goal of determining the materials and manufacturing processes used to produce a component, and many characterization techniques and computational tools are available to obtain this information. A case study of reverse engineering applied to a secondary hybrid sandwich-type structure used in a helicopter is presented. The methodology used consists of five main steps, which can be applied to any other similar component: collecting information about the service conditions of the part, disassembly and dimensional characterization, functional characterization, material properties characterization, and manufacturing process characterization, allowing all the traceability records of the materials and processes of aeronautical products that ensure their airworthiness to be obtained. A detailed explanation of each step is covered. The criticality and functionality of each part, information on the state of the art, and information obtained from interviews with the technical groups of the helicopter’s operators were analyzed; 3D optical scanning, standard and advanced materials characterization techniques, and finite element simulation allowed all the characteristics of the materials used in the manufacture of the component to be obtained. It was found that most of the materials are quite common in the aeronautical industry, including Kevlar, carbon and glass fibers, aluminum honeycomb core, epoxy resin and epoxy adhesive. The stacking sequence and volumetric fiber fraction are critical for the mechanical behavior; an acid digestion method was used to determine them. This also helps in the determination of the manufacturing technique, which in this case was vacuum bagging. Samples of the material were manufactured and submitted to mechanical and environmental tests, and the results were compared with those obtained during reverse engineering, which allows concluding that the materials and manufacturing process were correctly determined. Tooling for the manufacture was designed and produced according to the geometry and manufacturing process requirements. The part was manufactured, and the required mechanical and environmental tests were also performed. Finally, geometric characterization and non-destructive techniques allowed the quality of the part to be verified.

Keywords: reverse engineering, sandwich-structured composite parts, helicopter, mechanical properties, prototype

Procedia PDF Downloads 394