Search results for: input processing
4692 Calculation of the Normalized Difference Vegetation Index and the Spectral Signature of Coffee Crops: Benefits of Image Filtering on Mixed Crops
Authors: Catalina Albornoz, Giacomo Barbieri
Abstract:
Crop monitoring has been shown to reduce vulnerability to spreading plagues and pathologies in crops. Remote sensing with Unmanned Aerial Vehicles (UAVs) has made crop monitoring more precise, cost-efficient, and accessible. Nowadays, remote monitoring involves calculating maps of vegetation indices using software that takes either true-color (RGB) or multispectral images as input. These maps are then used to segment the crop into management zones. Finally, knowing the spectral signature of a crop (the reflected radiation as a function of wavelength) can serve as an input for decision-making and crop characterization. The calculation of vegetation indices using software such as Pix4D has high precision for monoculture plantations. However, this paper shows that using such software on mixed crops may lead to errors resulting in an incorrect segmentation of the field. Within this work, the authors propose to filter out all the elements different from the main crop before the calculation of vegetation indices and the spectral signature. A filter based on the Sobel method for border detection is used for filtering a coffee crop. Results show that segmentation into management zones changes with respect to the traditional situation in which a filter is not applied. In particular, it is shown that the values of the spectral signature change by up to 17% per spectral band. Future work will quantify the benefits of filtering through the comparison between in situ measurements and the calculated vegetation indices obtained through remote sensing.
Keywords: coffee, filtering, mixed crop, precision agriculture, remote sensing, spectral signature
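As a concrete illustration of the two operations described above, the sketch below computes NDVI from NIR and red bands and builds a Sobel edge mask to exclude non-crop borders; the synthetic band arrays and the edge threshold are illustrative assumptions, not the authors' Pix4D workflow.

```python
import numpy as np
import cv2

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def sobel_edge_mask(gray: np.ndarray, threshold: float = 50.0) -> np.ndarray:
    """Binary mask of strong edges found by the Sobel operator."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# Illustrative usage with synthetic 100x100 band images
rng = np.random.default_rng(0)
nir_band = rng.uniform(0.2, 0.8, (100, 100))
red_band = rng.uniform(0.1, 0.4, (100, 100))
gray = rng.uniform(0, 255, (100, 100)).astype(np.uint8)

index_map = ndvi(nir_band, red_band)
edges = sobel_edge_mask(gray)
filtered_ndvi = np.where(edges, np.nan, index_map)  # drop pixels along detected borders
print(np.nanmean(filtered_ndvi))
```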
Procedia PDF Downloads 386
4691 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most prominently Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in its nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select the features which are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
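To illustrate the idea of pruning a code graph to vulnerability-relevant information, here is a minimal sketch that keeps nodes whose source tokens mention risk-indicative calls, plus a k-hop neighborhood; the keyword list and toy graph are assumptions for illustration, not the SCEVD feature selection itself.

```python
import networkx as nx

RISKY_TOKENS = {"strcpy", "malloc", "free", "memcpy", "gets"}  # illustrative only

def prune_to_relevant(graph: nx.Graph, hops: int = 2) -> nx.Graph:
    """Keep nodes whose 'code' attribute mentions a risky token, plus their k-hop neighborhood."""
    seeds = [n for n, d in graph.nodes(data=True)
             if any(tok in d.get("code", "") for tok in RISKY_TOKENS)]
    keep = set(seeds)
    for seed in seeds:
        # all nodes reachable within `hops` edges of a risky node
        keep.update(nx.single_source_shortest_path_length(graph, seed, cutoff=hops))
    return graph.subgraph(keep).copy()

# Tiny illustrative code graph with one vulnerable statement
g = nx.Graph()
g.add_node(0, code="char buf[8];")
g.add_node(1, code="strcpy(buf, input);")
g.add_node(2, code="return 0;")
g.add_edges_from([(0, 1), (1, 2)])
print(sorted(prune_to_relevant(g, hops=1).nodes()))
```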
Procedia PDF Downloads 155
4690 An Energy-Efficient Model of Integrating Telehealth IoT Devices with Fog and Cloud Computing-Based Platform
Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo
Abstract:
The rapid growth of telehealth Internet of Things (IoT) devices has raised concerns about energy consumption and efficient data processing. This paper introduces an energy-efficient model that integrates telehealth IoT devices with a fog and cloud computing-based platform, offering a sustainable and robust solution to overcome these challenges. Our model employs fog computing as a localized data processing layer while leveraging cloud computing for resource-intensive tasks, significantly reducing energy consumption. We also incorporate adaptive energy-saving strategies. Simulation analysis validates our approach's effectiveness in enhancing energy efficiency for telehealth IoT systems integrated with localized fog nodes and both private and public cloud infrastructures. Future research will focus on further optimization of the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability in other healthcare and industry sectors.
Keywords: energy-efficient, fog computing, IoT, telehealth
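A toy sketch of the offloading rule such a model implies, processing light tasks at the fog layer and pushing resource-intensive ones to the cloud with per-task energy bookkeeping; every energy constant below is invented for illustration.

```python
from dataclasses import dataclass

# Invented energy costs (joules); real values would come from device profiling
FOG_J_PER_MCYCLE = 0.4
CLOUD_J_PER_MB_TX = 1.2
COMPLEXITY_THRESHOLD = 50.0  # megacycles

@dataclass
class Task:
    megacycles: float
    payload_mb: float

def offload_energy(task: Task) -> tuple[str, float]:
    """Choose fog for light tasks, cloud for heavy ones; return (target, energy in J)."""
    if task.megacycles <= COMPLEXITY_THRESHOLD:
        return "fog", task.megacycles * FOG_J_PER_MCYCLE
    return "cloud", task.payload_mb * CLOUD_J_PER_MB_TX  # pay for transmission, not local compute

for t in [Task(megacycles=10, payload_mb=0.5), Task(megacycles=200, payload_mb=2.0)]:
    print(offload_energy(t))
```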
Procedia PDF Downloads 86
4689 Examination of Public Hospital Unions Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques
Authors: Songul Cinaroglu
Abstract:
Regional planning in health has gained speed in developing countries in recent years. In Turkey, 89 different Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, dividing them into two clusters according to similarities in their input and output indicators. The numbers of beds, physicians, and nurses were determined as input variables, and the numbers of outpatients, inpatients, and surgical operations as output indicators. Before performing DEA, the PHUs were grouped into two clusters. The first cluster represents PHUs which have higher population, demand, and service density than the others. The difference between clusters was statistically significant in terms of all study variables (p < 0.001). After clustering, DEA was performed in general and for the two clusters separately. It was found that 11% of PHUs were efficient in general; additionally, 21% and 17% of them were efficient in the first and second clusters, respectively. PHUs representing urban parts of the country, with higher population and service density, are thus more efficient than the others. A random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinative factor for the efficiency of PHUs. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
Keywords: public hospital unions, efficiency, data envelopment analysis, random forest
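For readers unfamiliar with DEA, a minimal input-oriented CCR efficiency computation (multiplier form) is sketched below with SciPy; the three-input/three-output layout mirrors the study's indicators, but the numbers are fabricated.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """CCR efficiency of DMU o in multiplier form.
    X: (n_dmu, m) inputs, Y: (n_dmu, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])           # maximize u.y_o (linprog minimizes)
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # normalization: v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Fabricated data: inputs = beds, physicians, nurses; outputs = outpatients, inpatients, surgeries
X = np.array([[100, 40, 120], [80, 30, 90], [150, 60, 200]], dtype=float)
Y = np.array([[5000, 900, 300], [4200, 700, 260], [6100, 1000, 310]], dtype=float)
for i in range(len(X)):
    print(f"PHU {i}: efficiency = {ccr_efficiency(X, Y, i):.3f}")  # 1.0 means efficient
```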
Procedia PDF Downloads 126
4688 Lexical-Semantic Processing by Chinese as a Second Language Learners
Authors: Yi-Hsiu Lai
Abstract:
The present study aimed to elucidate lexical-semantic processing in learners of Chinese as a second language (CSL). Twenty L1 speakers of Chinese and twenty CSL learners in Taiwan participated in a picture naming task and a category fluency task. Based on their Chinese proficiency levels, the CSL learners were further divided into two sub-groups: ten CSL learners at the elementary Chinese proficiency level and ten at the intermediate Chinese proficiency level. Instruments for the naming task were sixty black-and-white pictures: thirty-five object pictures and twenty-five action pictures. Object pictures were divided into two categories: living objects and non-living objects. Action pictures were composed of two categories: action verbs and process verbs. As in the naming task, the category fluency task consisted of two semantic categories – objects (i.e., living and non-living objects) and actions (i.e., action and process verbs). Participants were asked to report as many items within a category as possible in one minute. Oral productions were tape-recorded and transcribed for further analysis. Both error types and error frequencies were calculated, and statistical analysis was conducted to examine the error types and frequencies made by CSL learners. Additionally, category effects, pictorial effects, and L2 proficiency are discussed. Findings of the present study help characterize the lexical-semantic process of Chinese naming in CSL learners of different Chinese proficiency levels and contribute to Chinese vocabulary teaching and learning in the future.
Keywords: lexical-semantic processing, Mandarin Chinese, naming, category effects
Procedia PDF Downloads 460
4687 Composite Kernels for Public Emotion Recognition from Twitter
Authors: Chien-Hung Chen, Yan-Chun Hsing, Yung-Chun Chang
Abstract:
The Internet has grown into a powerful medium for information dispersion and social interaction, leading to a rapid growth of social media that allows users to easily post their emotions and perspectives on certain topics online. Our research aims to use natural language processing and text mining techniques to explore the public emotions expressed on Twitter by analyzing the sentiment behind tweets. In this paper, we propose a composite kernel method that integrates a tree kernel with a linear kernel to simultaneously exploit both the tree representation and the distributed emotion keyword representation, analyzing the syntactic and content information in tweets. The experimental results demonstrate that our method can effectively detect the public emotion of tweets while outperforming the other compared methods.
Keywords: emotion recognition, natural language processing, composite kernel, sentiment analysis, text mining
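A minimal sketch of a composite kernel: a weighted sum of a structural kernel and a linear content kernel fed to an SVM as a precomputed Gram matrix. The toy "tree" kernel here simply compares bags of syntactic tags and merely stands in for a proper parse-tree kernel; the mixing weight and the tiny dataset are assumed for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def linear_kernel(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    return A @ B.T

def toy_tree_kernel(tags_a: list, tags_b: list) -> np.ndarray:
    """Stand-in for a parse-tree kernel: Jaccard overlap of syntactic tag sets."""
    K = np.zeros((len(tags_a), len(tags_b)))
    for i, s1 in enumerate(tags_a):
        for j, s2 in enumerate(tags_b):
            K[i, j] = len(s1 & s2) / max(len(s1 | s2), 1)
    return K

# Fabricated features: 4 tweets as embedding vectors + POS-like tag sets, binary emotion labels
emb = np.array([[1.0, 0.2], [0.9, 0.1], [0.1, 1.0], [0.2, 0.8]])
tags = [{"NN", "VB"}, {"NN", "VB", "JJ"}, {"PRP", "VB"}, {"PRP", "RB"}]
y = np.array([0, 0, 1, 1])

lam = 0.5  # mixing weight between structure and content kernels
K_train = lam * toy_tree_kernel(tags, tags) + (1 - lam) * linear_kernel(emb, emb)
clf = SVC(kernel="precomputed").fit(K_train, y)
print(clf.predict(K_train))
```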
Procedia PDF Downloads 216
4686 Risk-Based Regulation as a Model of Control in the South African Meat Industry
Authors: R. Govender, T. C. Katsande, E. Madoroba, N. M. Thiebaut, D. Naidoo
Abstract:
South African control over meat safety is managed by the Department of Agriculture, Forestry and Fisheries (DAFF). The veterinary services department in each of the nine provinces in the country is tasked with overseeing the farm and abattoir segments of the meat supply chain. Abattoirs are privately owned, and their number has increased over the years. This increase has placed constraints on the government resources required to monitor these abattoirs. This paper presents empirical research results on the hygienic processing of meat in high- and low-throughput abattoirs. It presents a case for the adoption of risk-based regulation as a method of government control over hygiene and safe meat processing at abattoirs in South Africa. Recommendations are made to the DAFF regarding policy considerations on risk-based regulation as a model of control in South Africa.
Keywords: risk-based regulation, abattoir, food control, meat safety
Procedia PDF Downloads 311
4685 Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network
Authors: Hui Wei, Zheng Dong
Abstract:
Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some of the neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The results provide strong support for the potential of this model.
Keywords: biological model, feature extraction, multi-layer neural network, object recognition
Procedia PDF Downloads 540
4684 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually directly scale or crop the image in some common way. This results in the loss of information important to the geolocalization task, thus affecting the performance of the method. For example, excessive down-sampling can lead to blurred building contours, and inappropriate cropping can lead to the loss of key semantic elements, resulting in incorrect geolocalization results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel net) is used to approximate the best receptive field, keeping the geometric shapes consistent with the original image, and SENet (squeeze-and-excitation net) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, the triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by the first convolutional layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
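A simplified PyTorch sketch of such a resizer, bilinear interpolation plus a squeeze-and-excitation branch fused residually with the resized image, is shown below; the SKNet branch is omitted and all layer sizes are assumptions, so this illustrates the idea rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels carrying strong contour cues."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> channel weights
        return x * w[:, :, None, None]

class GeometricResizer(nn.Module):
    """Bilinear resize + SE-enhanced feature branch, fused residually."""
    def __init__(self, out_size=(224, 224), feat_ch: int = 16):
        super().__init__()
        self.out_size = out_size
        self.conv_in = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.se = SEBlock(feat_ch)
        self.conv_out = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, x):
        base = F.interpolate(x, size=self.out_size, mode="bilinear", align_corners=False)
        feat = F.relu(self.conv_in(base))
        feat = self.se(feat)                   # emphasize contour-rich channels
        return base + self.conv_out(feat)      # fuse enhanced features with the resized image

img = torch.randn(1, 3, 317, 473)              # arbitrary input size
print(GeometricResizer()(img).shape)           # torch.Size([1, 3, 224, 224])
```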
Procedia PDF Downloads 213
4683 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions
Authors: M. Tarik Boyraz, M. Bilge Imer
Abstract:
Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacturing of industrial gas turbine blades. With a carefully designed microstructure and the presence of alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate these mechanical properties of IN 738 LC alloy solely based on simulations, for projected heat treatment or service conditions. The microstructure of IN 738 LC (the size, fraction, and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and the grain size) needs to be optimized to improve the high temperature mechanical properties by heat treatment. This process can be performed at different soaking temperatures, times, and cooling rates. In this work, microstructural evolution studies were performed experimentally at various heat treatment process conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature, and cooling rate provided by the experimental heat treatment procedures were used as microstructural simulation input. The results of this simulation were compared with the size, fraction, and frequency of the γ′ and carbide phases, and the grain size, provided by SEM (EDS module and mapping), EPMA (WDS module), and optical microscopy before and after heat treatment. After iterative comparison of experimental findings and simulations, an offset was determined to fit the measured and theoretical findings. It was thereby possible to estimate the final microstructure without any necessity to carry out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution, and grain boundary strengthening contributions in the microstructure. Creep rate was calculated as a function of stress, temperature, and microstructural factors such as dislocation density, precipitate size, and inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the best heat treatment conditions that achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based completely on simulations.
Keywords: heat treatment, IN 738 LC, simulations, super-alloys
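The strengthening superposition mentioned above can be made concrete with textbook contributions: a Hall-Petch grain boundary term plus solid solution and precipitation terms. All coefficients below are placeholders, not calibrated IN 738 LC constants.

```python
import math

def yield_stress_mpa(grain_um: float, gamma_prime_frac: float,
                     gamma_prime_nm: float, sigma_0: float = 200.0,
                     k_hp: float = 700.0, k_ppt: float = 80000.0,
                     sigma_ss: float = 120.0) -> float:
    """Additive strengthening model (placeholder coefficients):
    sigma_y = lattice friction + solid solution + Hall-Petch + precipitation."""
    hall_petch = k_hp / math.sqrt(grain_um)        # MPa, grain size d in micrometers
    # Orowan-like scaling: finer and denser gamma-prime strengthens more
    precipitation = k_ppt * math.sqrt(gamma_prime_frac) / gamma_prime_nm
    return sigma_0 + sigma_ss + hall_petch + precipitation

# Illustrative microstructure: 120 um grains, 45% gamma-prime fraction, 400 nm particles
print(f"{yield_stress_mpa(grain_um=120, gamma_prime_frac=0.45, gamma_prime_nm=400):.0f} MPa")
```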
Procedia PDF Downloads 247
4682 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images
Authors: Sophia Shi
Abstract:
Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The AKI mortality rate in the ICU is high, and the condition is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database, along with de-identified kidney images and corresponding patient records from the Beijing Hospital of the Ministry of Health, were collected. Using data features including serum creatinine, among others, two numeric models using the MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNNs) were used: VGG and ResNet for the numeric data and ResNet for the image data. They were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after running through softmax and additional post-processing code. The hybrid model successfully predicted AKI: its highest AUROC was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient's admission to the ICU, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.
Keywords: acute kidney injury, convolutional neural network, hybrid deep learning, patient record data, ResNet, ultrasound kidney images, VGG
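A minimal PyTorch sketch of the two-branch fusion described above, concatenating image-branch and record-branch features before the classification head; the layer sizes and the tiny convolutional stack standing in for ResNet/VGG are assumptions.

```python
import torch
import torch.nn as nn

class HybridAKI(nn.Module):
    """Two-branch sketch: CNN image features + MLP record features,
    concatenated and classified; all sizes are illustrative."""
    def __init__(self, n_record_feats: int = 12):
        super().__init__()
        self.image_branch = nn.Sequential(             # stand-in for a ResNet backbone
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.record_branch = nn.Sequential(
            nn.Linear(n_record_feats, 32), nn.ReLU())
        self.head = nn.Sequential(                     # fused features -> two FC layers -> 2 logits
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, 2))

    def forward(self, image, record):
        fused = torch.cat([self.image_branch(image), self.record_branch(record)], dim=1)
        return self.head(fused)                        # softmax is applied below / via the loss

model = HybridAKI()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 12))
print(torch.softmax(logits, dim=1))                    # per-patient AKI probability
```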
Procedia PDF Downloads 131
4681 A Case Study on the Seismic Performance Assessment of the High-Rise Setback Tower Under Multiple Support Excitations on the Basis of TBI Guidelines
Authors: Kamyar Kildashti, Rasoul Mirghaderi
Abstract:
This paper describes the three-dimensional seismic performance assessment of a high-rise steel moment-frame setback tower, designed and detailed per the 2010 ASCE 7, under multiple support excitations. The vulnerability analyses are conducted through nonlinear time history analyses under a set of multi-directional strong ground motion records scaled to the design-based site-specific spectrum in accordance with ASCE 41-13. The spatial variation of input motions between the far-distant supports of each part of the tower is considered by defining a time lag. The monotonic and cyclic plastic hinge behavior of prequalified steel connections, panel zones, and steel columns is obtained from predefined values presented in the TBI Guidelines, PEER/ATC 72, and FEMA P440A to include stiffness and strength degradation. Inter-story drift ratios, residual drift ratios, and plastic hinge rotation demands under multiple support excitations are compared to those obtained from uniform support excitations. Performance objectives based on the acceptance criteria declared by the TBI Guidelines are compared between uniform and multiple support excitations. The results demonstrate that input motion discrepancy has detrimental effects on the local and global response of the tower.
Keywords: high-rise building, nonlinear time history analysis, multiple support excitation, performance-based design
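The time lag between distant supports can be generated simply by delaying the same record according to an assumed apparent wave velocity, as in this NumPy sketch (all values invented):

```python
import numpy as np

def lagged_record(accel: np.ndarray, dt: float, distance_m: float,
                  v_app_mps: float) -> np.ndarray:
    """Delay a ground motion record by distance / apparent wave velocity."""
    lag_steps = int(round(distance_m / v_app_mps / dt))
    return np.concatenate([np.zeros(lag_steps), accel])[: len(accel)]

dt = 0.01                                  # record time step (s)
t = np.arange(0, 20, dt)
accel = 0.3 * np.sin(2 * np.pi * 1.2 * t)  # toy ground acceleration history (g)

# Assumed: supports 150 m apart, apparent wave velocity 1000 m/s -> 0.15 s lag
support_a = accel
support_b = lagged_record(accel, dt, distance_m=150.0, v_app_mps=1000.0)
print(np.argmax(support_b > 0) * dt)       # first nonzero sample arrives ~0.15 s later
```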
Procedia PDF Downloads 284
4680 A Comparative Analysis of Hyper-Parameters Using Neural Networks for E-Mail Spam Detection
Authors: Syed Mahbubuz Zaman, A. B. M. Abrar Haque, Mehedi Hassan Nayeem, Misbah Uddin Sagor
Abstract:
Every day, e-mails are used by millions of people as an effective form of communication over the Internet. Although e-mails allow high-speed communication, there is a constant threat known as spam. Spam e-mails, often called junk e-mails, are unsolicited and sent in bulk. These unsolicited e-mails cause security concerns among internet users because they expose recipients to inappropriate content. There is no guaranteed way to stop spammers with static filters, as these are bypassed very easily. In this paper, a smart system is proposed that uses neural networks to approach spam in a different way, while also detecting the most relevant features that help to design the spam filter. A comparison of different hyper-parameters for different neural network models is also shown to determine which model works best within suitable parameters.
Keywords: long short-term memory, bidirectional long short-term memory, gated recurrent unit, natural language processing
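A sketch of how such a comparison might be wired up in Keras, building LSTM, BiLSTM, and GRU classifiers from one loop; the vocabulary size, dimensions, and training data are placeholders, not the paper's setup.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, MAXLEN = 5000, 100   # placeholder vocabulary size and sequence length

def build_model(kind: str) -> keras.Model:
    """One spam classifier per recurrent cell type."""
    cell = {"lstm": layers.LSTM(32),
            "bilstm": layers.Bidirectional(layers.LSTM(32)),
            "gru": layers.GRU(32)}[kind]
    return keras.Sequential([
        layers.Embedding(VOCAB, 64),
        cell,
        layers.Dense(1, activation="sigmoid")])

# Fabricated token-id data: 200 "e-mails" with binary spam labels
X = np.random.randint(0, VOCAB, size=(200, MAXLEN))
y = np.random.randint(0, 2, size=(200,))

for kind in ("lstm", "bilstm", "gru"):
    model = build_model(kind)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(X, y, epochs=1, batch_size=32, verbose=0)
    print(kind, f"train acc = {hist.history['accuracy'][0]:.2f}")
```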
Procedia PDF Downloads 204
4679 Assessment of the Soils Pollution Level of the Open Mine and Tailing Dump of Surrounding Territories of Akhtala Ore Processing Combine by Heavy Metals
Authors: K. A. Ghazaryan, T. H. Derdzyan
Abstract:
To assess the heavy metal pollution level of the soils of the open mine and tailing dump and the surrounding territories of the Akhtala ore processing combine, soil samples were collected in 2013 and analyzed for different heavy metals, such as Cu, Zn, Pb, Ni, and Cd. The main soil type at the study sites was the mountain cambisol. To classify the soil pollution level, contamination indices such as the contamination factor (Cf), degree of contamination (Cd), pollution load index (PLI), and geoaccumulation index (Igeo) were calculated. The distribution pattern of trace metals in the soil profile, according to the Igeo, Cf, and Cd values, shows that the soil is heavily polluted. Moreover, the PLI values for the 19 sites were >1, which indicates a deterioration of site quality.
Keywords: soils pollution, heavy metal, geoaccumulation index, pollution load index, contamination factor
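The four indices have standard definitions, sketched below in Python; the sample and background concentrations are placeholders, not the study's measurements.

```python
import math

def contamination_indices(sample: dict, background: dict) -> dict:
    """Cf per metal (sample/background), Cd (sum of Cf),
    PLI (geometric mean of Cf), and Igeo = log2(Cn / (1.5 * Bn)) per metal."""
    cf = {m: sample[m] / background[m] for m in sample}
    n = len(cf)
    pli = math.prod(cf.values()) ** (1 / n)
    igeo = {m: math.log2(sample[m] / (1.5 * background[m])) for m in sample}
    return {"Cf": cf, "Cd": sum(cf.values()), "PLI": pli, "Igeo": igeo}

# Placeholder concentrations (mg/kg); backgrounds are illustrative reference values
sample = {"Cu": 420.0, "Zn": 310.0, "Pb": 95.0, "Ni": 60.0, "Cd": 1.2}
background = {"Cu": 47.0, "Zn": 83.0, "Pb": 16.0, "Ni": 58.0, "Cd": 0.3}
result = contamination_indices(sample, background)
print(f"Cd = {result['Cd']:.1f}, PLI = {result['PLI']:.2f}")  # PLI > 1 indicates pollution
```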
Procedia PDF Downloads 432
4678 Simulations of High-Intensity, Thermionic Electron Guns for Electron Beam Thermal Processing Including Effects of Space Charge Compensation
Authors: O. Hinrichs, H. Franz, G. Reiter
Abstract:
Electron guns have a key function in a series of thermal processes, like EB (electron beam) melting, evaporation, or welding. These techniques need a high-intensity continuous electron beam that defocuses itself due to high space charge forces. A proper beam transport throughout the magnetic focusing system can be ensured by space charge compensation via residual gas ions. The different pressure stages in the EB gun cause various degrees of compensation. A numerical model was set up to simulate realistic charge distributions within the beam using the CST Particle Studio code. We present the current status of the beam dynamics simulations. This contribution focuses on the creation of space charge ions and their influence on beam and gun components. Furthermore, the beam transport in the gun is shown for different beam parameters. The electron source allows beams to be produced with currents of 3 A to 15 A and energies of 40 keV to 45 keV.
Keywords: beam dynamic simulation, space charge compensation, thermionic electron source, EB melting, EB thermal processing
Procedia PDF Downloads 333
4677 Grounding Chinese Language Vocabulary Teaching and Assessment in the Working Memory Research
Authors: Chan Kwong Tung
Abstract:
Since Baddeley and Hitch's seminal research in 1974 on working memory (WM), this topic has been of great interest to language educators. Although there are some variations in the definitions of WM, recent findings have contributed vastly to our understanding of language learning, especially its effects on second language acquisition (SLA). For example, the phonological component of WM (PWM) and the executive component of WM (EWM) have been found to be positively correlated with language learning. This paper discusses two general, yet highly relevant, WM findings that could directly affect the effectiveness of Chinese Language (CL) vocabulary teaching and learning, as well as the quality of its assessment. First, PWM is found to be critical for the long-term learning of the phonological forms of new words. Second, EWM is heavily involved in interpreting the semantic characteristics of new words, which consequently affects the quality of learners' reading comprehension. These two ideas are hardly discussed in the Chinese literature, whether conceptual or empirical. While past vocabulary acquisition studies have mainly focused on the cognitive-processing approach, active processing, 'elaborate processing' (or lexical elaboration), and other effective learning tasks and strategies, it is high time to shift some of the spotlight to WM (particularly PWM and EWM) to ensure optimal control of the teaching and learning effectiveness of such approaches, as well as the validity of language assessment. Given the unique phonological, orthographic, and morphological properties of CL, this discussion sheds some light on the vocabulary acquisition of this member of the Sino-Tibetan language family. Together, these two WM concepts could have crucial implications for the design, development, and planning of vocabulary and, ultimately, reading comprehension teaching and assessment in language education. Hopefully, this will raise awareness and trigger a dialogue about the meaning of these findings for future language teaching, learning, and assessment.
Keywords: Chinese Language, working memory, vocabulary assessment, vocabulary teaching
Procedia PDF Downloads 342
4676 Automatic Diagnosis of Electrical Equipment Using Infrared Thermography
Authors: Y. Laib Dit Leksir, S. Bouhouche
Abstract:
The analysis and processing of databases resulting from infrared thermal measurements made on electrical installations requires the development of new tools in order to obtain correct information additional to that of visual inspections. Consequently, methods based on the capture of infrared digital images show great potential and are employed increasingly in various fields. However, there is an enormous need for the development of effective techniques to analyse these databases in order to extract relevant information on the state of the equipment. Our goal consists in introducing recent modeling techniques based on new methods of image and signal processing to develop mathematical models in this field. The aim of this work is to capture the anomalies existing in electrical equipment during an inspection of some machines using a FLIR A40 camera. We then use binarisation techniques in order to select the region of interest, and we compare the binarisation methods on the thermal images obtained to choose the best one.
Keywords: infrared thermography, defect detection, troubleshooting, electrical equipment
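One common binarisation route, Otsu thresholding followed by connected-component extraction to isolate hotspots, can be sketched directly with OpenCV; the synthetic frame below stands in for a real FLIR A40 image.

```python
import numpy as np
import cv2

# Synthetic stand-in for a grayscale thermal frame with one hot region
frame = np.full((120, 160), 60, dtype=np.uint8)
cv2.circle(frame, (100, 50), 12, 220, -1)          # simulated overheated component
frame = cv2.GaussianBlur(frame, (7, 7), 0)

# Otsu picks the binarisation threshold automatically from the histogram
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Connected components give one region of interest per hotspot
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):                               # label 0 is the background
    x, y, w, h, area = stats[i]
    print(f"hotspot at ({x},{y}), size {w}x{h}, area {area} px")
```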
Procedia PDF Downloads 475
4675 Aggregating Buyers and Sellers for E-Commerce: How Demand and Supply Meet in Fairs
Authors: Pierluigi Gallo, Francesco Randazzo, Ignazio Gallo
Abstract:
In recent years, many new and interesting models of successful online business have been developed. Many of these are based on competition between users, such as online auctions, where the product price is not fixed and tends to rise. Other models, including group buying, are based on cooperation between users, characterized by a dynamic product price that tends to go down. There is not yet a business model in which both sellers and buyers are grouped in order to negotiate on a specific product or service. The present study investigates a new extension of the group-buying model, called a fair, which allows the aggregation of demand and supply for price optimization, in a cooperative manner. Additionally, our system also aggregates products and destinations for shipping optimization. We introduced the following new input parameters in order to implement a double-side aggregation: (a) price-quantity curves provided by the seller; (b) waiting time, that is, the longer buyers wait, the greater the discount they get; (c) payment time, which determines whether the buyer pays before, during, or after receiving the product; (d) the distance between the place where products are available and the place of shipment, provided in advance by the buyer or dynamically suggested by the system. To analyze the proposed model, we implemented a system prototype and a simulator that allows studying the effects of changing input parameters. We analyzed the dynamic price model in fairs having a single seller and in fairs with a combination of selected sellers. The results are very encouraging and motivate further investigation of this topic.
Keywords: auction, aggregation, fair, group buying, social buying
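A toy implementation of input parameters (a)-(c), a seller's price-quantity curve, a waiting-time discount, and a payment-time adjustment, is sketched below; all rates and caps are invented for illustration.

```python
def unit_price(quantity: int, curve: list) -> float:
    """Price-quantity curve: pick the price of the largest tier reached."""
    price = curve[0][1]
    for min_qty, tier_price in curve:
        if quantity >= min_qty:
            price = tier_price
    return price

def fair_price(quantity: int, wait_days: int, pays_upfront: bool,
               curve: list) -> float:
    base = unit_price(quantity, curve)
    wait_discount = min(0.02 * wait_days, 0.20)        # longer wait, bigger discount (20% cap)
    payment_discount = 0.03 if pays_upfront else 0.0   # paying before shipment earns 3%
    return base * (1 - wait_discount) * (1 - payment_discount)

# Illustrative seller curve: 1+ units at 100, 50+ at 90, 200+ at 80
curve = [(1, 100.0), (50, 90.0), (200, 80.0)]
print(fair_price(quantity=75, wait_days=5, pays_upfront=True, curve=curve))  # 78.57
```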
Procedia PDF Downloads 293
4674 Assessing the Benefits of Super Depo Sutorejo as a Model of integration of Waste Pickers in a Sustainable City Waste Management
Authors: Yohanes Kambaru Windi, Loetfia Dwi Rahariyani, Dyah Wijayanti, Eko Rustamaji
Abstract:
Surabaya, the second largest city in Indonesia, has been struggling for years with waste production and its management. Nearly 11,000 tons of waste are generated daily by domestic, commercial, and industrial areas. Approximately 1,300 tons of waste flowed into the Benowo Landfill daily in 2013, and it was projected that the landfill operation would become critical in 2015. The Super Depo Sutorejo (SDS) is a pilot project on waste management launched by the government of Surabaya in March 2013. The project aims to reduce the amount of waste dumped in the landfill by sorting recyclable and organic waste for composting, employing waste pickers to sort the waste before it is transported to the landfill. This study is intended to assess the capacity of SDS to process and reduce waste, as well as its complementary benefits. It also reviews the benefits of the project to the waste pickers in terms of job satisfaction. Waste processing data sheets were used to assess the difference between input and output waste. A survey was distributed to 30 waste pickers, and interviews were conducted for further insight on particular issues. The analysis showed that SDS is able to reduce waste by up to 50% before it is dumped in the final disposal area. The cost-benefit analysis, using a cost differential calculation, revealed that the economic benefit is considerably low, but composting may provide tangible benefits for maintaining the city's parks. Waste pickers are mostly satisfied with their jobs (i.e., salary, health coverage, job security) and the services and facilities available in SDS, and they enjoy a rewarding social life within the project. It is concluded that SDS is an effective and efficient model for sustainable waste management that can reliably be developed in developing countries. It is a strategic approach to empower and open up working opportunities for the urban poor and to prolong the operation of landfills.
Keywords: cost-benefits, integration, satisfaction, waste management
Procedia PDF Downloads 473
4673 Network Conditioning and Transfer Learning for Peripheral Nerve Segmentation in Ultrasound Images
Authors: Harold Mauricio Díaz-Vargas, Cristian Alfonso Jimenez-Castaño, David Augusto Cárdenas-Peña, Guillermo Alberto Ortiz-Gómez, Alvaro Angel Orozco-Gutierrez
Abstract:
Precise identification of nerves is a crucial task performed by anesthesiologists for effective Peripheral Nerve Blocking (PNB). Anesthesiologists now use ultrasound imaging equipment to guide the PNB and detect nervous structures. However, visual identification of nerves in ultrasound images is difficult, even for trained specialists, due to artifacts and low contrast. Recent advances in deep learning make neural networks a potential tool for accurate nerve segmentation systems that address the above issues from raw data. The widely used U-Net yields pixel-by-pixel segmentation by encoding the input image and decoding the attained feature vector into a semantic image. This work proposes a conditioning approach and encoder pre-training to enhance the nerve segmentation of traditional U-Nets. Conditioning is achieved by one-hot encoding the kind of target nerve at the network input, while pre-training considers five well-known deep networks for image classification. The proposed approach is tested on a collection of 619 ultrasound images, where the best C-UNet architecture yields an 81% Dice coefficient, outperforming the 74% of the best traditional U-Net. The results prove that pre-trained models with the conditional approach outperform their equivalent baselines by supporting the learning of new features and enriching the discriminant capability of the tested networks.
Keywords: nerve segmentation, U-Net, deep learning, ultrasound imaging, peripheral nerve blocking
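The conditioning idea, one-hot encoding of the target nerve type appended to the image as extra input channels, is easy to sketch; the channel counts and the two nerve classes are assumptions.

```python
import torch
import torch.nn as nn

def condition_input(image: torch.Tensor, nerve_id: int, n_classes: int) -> torch.Tensor:
    """Broadcast a one-hot nerve label to constant feature maps and
    concatenate them with the grayscale ultrasound image."""
    b, _, h, w = image.shape
    onehot = torch.zeros(b, n_classes, h, w)
    onehot[:, nerve_id] = 1.0
    return torch.cat([image, onehot], dim=1)

n_classes = 2                          # e.g., median vs. ulnar nerve (assumed)
x = torch.randn(1, 1, 128, 128)        # grayscale ultrasound patch
x_cond = condition_input(x, nerve_id=1, n_classes=n_classes)

# The first encoder convolution must then accept 1 + n_classes channels
enc_conv = nn.Conv2d(1 + n_classes, 16, kernel_size=3, padding=1)
print(enc_conv(x_cond).shape)          # torch.Size([1, 16, 128, 128])
```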
Procedia PDF Downloads 105
4672 A Background Subtraction Based Moving Object Detection Around the Host Vehicle
Authors: Hyojin Lim, Cuong Nguyen Khac, Ho-Youl Jung
Abstract:
In this paper, we propose a moving object detection method that helps drivers safely take their car out of a parking lot. When moving objects such as motorbikes, pedestrians, other cars, and obstacles are detected at the rear side of the host vehicle, the proposed algorithm can warn the driver. We assume that the host vehicle is just before departure. Gaussian Mixture Model (GMM)-based background subtraction is applied, with pre-processing such as smoothing and post-processing such as morphological filtering added. We examine which color space has better performance for the detection of moving objects. Three color spaces, RGB, YCbCr, and Y, are applied and compared in terms of detection rate. Through simulation, we show that the RGB space is more suitable for moving object detection based on background subtraction.
Keywords: Gaussian mixture model, background subtraction, moving object detection, color space, morphological filtering
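OpenCV ships a GMM-based subtractor (MOG2), so the pipeline of smoothing, subtraction, and morphological filtering can be sketched directly; the synthetic frames stand in for a real rear-view stream, and the sketch stays in the RGB/BGR space the paper found most suitable.

```python
import numpy as np
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def detect_moving(frame_bgr: np.ndarray) -> np.ndarray:
    frame = cv2.GaussianBlur(frame_bgr, (5, 5), 0)        # pre-processing: smoothing
    mask = subtractor.apply(frame)                         # GMM foreground mask
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # post-processing: remove speckle

# Synthetic stream: static background, then a bright "object" moving across
for step in range(30):
    frame = np.full((120, 160, 3), 90, dtype=np.uint8)
    cv2.rectangle(frame, (step * 4, 40), (step * 4 + 20, 70), (255, 255, 255), -1)
    mask = detect_moving(frame)
print(int((mask > 0).sum()), "foreground pixels in the last frame")
```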
Procedia PDF Downloads 614
4671 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake
Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama
Abstract:
The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage to wide areas of eastern Japan, and a large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of a building during an earthquake. This paper presents an earthquake response simulation analysis (hereafter, a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter, the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown, and the analysis result is compared with the observational records. Using the analysis result, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the 1st natural period and the 1st damping ratio) with the Auto-Regressive eXogenous (ARX) model, we compare the analysis result with the observational records so as to evaluate the accuracy of the response analysis. In this study, a lumped-mass SR model was used to conduct a seismic response analysis using observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and a level 2 earthquake. The result of the ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction occurring was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. This prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground seems to have been small. 3) As for the 4/11 aftershock, a continuous analysis was conducted in which the subject seismic wave was input after the 3/11 main shock. The analyzed values generally corresponded well with the observed values, which means that the nonlinearity effect of the main shock was retained by the building; it is important to consider this when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated with an ARX model. Our results show that the response analysis model in this study is generally good at estimating a change in the response of the building during a vibration.
Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake
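Identifying the first natural period and damping ratio with an ARX model reduces to a least-squares fit of the response on its own past values and past inputs, after which the fitted pole yields period and damping. The NumPy sketch below runs on a simulated single-degree-of-freedom response with invented structural values, not the building's records.

```python
import numpy as np

dt = 0.01
f_n, zeta = 1.5, 0.05                       # invented: 1.5 Hz natural frequency, 5% damping
wn = 2 * np.pi * f_n

# Simulate an SDOF response to random base excitation (semi-implicit Euler)
rng = np.random.default_rng(1)
u = rng.standard_normal(6000)
y = np.zeros_like(u)
v = 0.0
for k in range(1, len(u)):
    a = -2 * zeta * wn * v - wn**2 * y[k - 1] - u[k - 1]
    v += a * dt
    y[k] = y[k - 1] + v * dt

# ARX(2,1) fit: y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
a1, a2, _ = theta

# Discrete pole -> continuous s = ln(z)/dt -> period and damping
z = np.roots([1.0, -a1, -a2])[0]
s = np.log(z) / dt
wn_est = abs(s)
zeta_est = -s.real / wn_est
print(f"T1 = {2 * np.pi / wn_est:.2f} s (true {1 / f_n:.2f}), zeta = {zeta_est:.3f}")
```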
Procedia PDF Downloads 163
4670 E-Learning Platform for School Kids
Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.
Abstract:
E-learning is a crucial component of intelligent education, and even in the midst of a pandemic, it is becoming increasingly important in the educational system. Several e-learning programs are available to students. Here, we decided to create an e-learning framework for children. We have found a few issues that teachers are having with their online classes. When there are numerous students in an online classroom, how does a teacher recognize a student's focus on academics and below-the-surface behaviors? Some kids are not paying attention in class, and others are napping, and the teacher is unable to keep track of each and every student. A key challenge in e-learning is online exams, because students can cheat easily during them; hence, exam proctoring is needed. Here we propose an automated online exam cheating detection method using a web camera. A further purpose of this project is to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The games will be accessible via a web browser, with imagery drawn in a cartoonish style, helping students learn math through play. Everything in this day and age is moving towards automation; however, automatic answer evaluation is only available for MCQ-based questions, so the checker has a difficult time evaluating theory solutions. The current system requires more manpower and takes a long time to evaluate responses, and it is also possible for two identical responses to be marked differently and receive two different grades. As a result, this application employs machine learning techniques to provide an automatic evaluation of subjective responses based on the keywords provided to the computer as student input, resulting in a fair distribution of marks. In addition, it will save time and manpower. We used deep learning, machine learning, image processing, and natural language processing technologies to develop these research components.
Keywords: math, education games, e-learning platform, artificial intelligence
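One plausible route for the keyword-based evaluation of subjective responses is TF-IDF cosine similarity against a model answer, as sketched below with scikit-learn; the answers and scoring rule are invented and this is not the platform's actual grader.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def grade(model_answer: str, student_answer: str, max_marks: float = 10.0) -> float:
    """Score a free-text answer by TF-IDF cosine similarity to the model answer."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform([model_answer, student_answer])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    return round(similarity * max_marks, 1)

model = "A fraction represents a part of a whole, written as numerator over denominator."
print(grade(model, "A fraction shows part of a whole using a numerator and a denominator."))
print(grade(model, "Triangles have three sides."))  # off-topic answer scores near zero
```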
Procedia PDF Downloads 154
4669 Improvement in Properties of Ni-Cr-Mo-V Steel through Process Control
Authors: Arnab Majumdar, Sanjoy Sadhukhan
Abstract:
Although gun barrel steels are an important variety from a defense viewpoint, the available literature is very limited. In the present work, an IF-grade Ni-Cr-Mo-V high strength low alloy steel was produced via the electric arc furnace-ESR route. The ingot was hot forged to the desired dimensions with a reduction ratio of 70-75%, followed by homogenization, hardening, and tempering treatment. Sample chemistry, NMIR, and macro- and microstructural analyses were performed. Mechanical properties, including tensile strength, impact toughness, and fracture toughness, were studied, and ultrasonic testing was done to identify internal flaws. The existing high strength low alloy Ni-Cr-Mo-V steel shows improved properties with the modified processing route and heat treatment schedule in comparison to the properties noted earlier for the manufacture of gun barrels. The improved properties should allow a barrel to withstand higher explosive loads with the same amount of steel.
Keywords: gun barrel steels, IF grade, chemistry, physical properties, thermal and mechanical processing, mechanical properties, ultrasonic testing
Procedia PDF Downloads 375
4668 An Analysis of the Relations between Aggregates' Shape and Mechanical Properties throughout the Railway Ballast Service Life
Authors: Daianne Fernandes Diogenes
Abstract:
Railway ballast aggregates' shape properties and size distribution can be directly affected by several factors, such as traffic, fouling, and maintenance processes, which cause breakage and wear, leading to the accumulation of fine particles through the ballast layer. This research aims to analyze the influence of traffic, the tamping process, and sleeper stiffness on aggregates' shape and mechanical properties, using traditional and digital image processing (DIP) techniques and cyclic tests, such as resilient modulus (RM) and permanent deformation (PD) tests. Aggregates were collected in different phases of the railway service life: (i) right after the crushing process; (ii) after construction, for the aggregates positioned below the sleepers; and (iii) after 5 years of operation. An increase in the percentage of cubic particles was observed for materials (ii) and (iii), providing better interlocking, increasing stiffness, and reducing axial deformation after 5 years of service when compared to the initial conditions.
Keywords: digital image processing, mechanical behavior, railway ballast, shape properties
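Typical DIP shape descriptors for aggregates, such as the aspect ratio from a rotated bounding box and circularity from area and perimeter, can be computed from contours; the synthetic ellipse below stands in for a scanned aggregate silhouette.

```python
import numpy as np
import cv2

# Synthetic binary silhouette of one elongated particle
img = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(img, (100, 100), (70, 30), 25, 0, 360, 255, -1)

contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

area = cv2.contourArea(cnt)
perimeter = cv2.arcLength(cnt, closed=True)
(_, _), (w, h), _ = cv2.minAreaRect(cnt)          # rotated bounding box

aspect_ratio = max(w, h) / min(w, h)              # ~1 for cubic particles
circularity = 4 * np.pi * area / perimeter**2     # 1 for a perfect circle
print(f"aspect ratio = {aspect_ratio:.2f}, circularity = {circularity:.2f}")
```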
Procedia PDF Downloads 120
4667 The Effectiveness of Energy Index Technique in Bearing Condition Monitoring
Authors: Faisal Alshammari, Abdulmajid Addali, Mosab Alrashed, Taihiret Alhashan
Abstract:
The application of acoustic emission techniques is gaining popularity, as they can monitor the condition of gears and bearings and detect early symptoms of a defect in the form of pitting, wear, and flaking of surfaces. Early detection of these defects is essential, as it helps to avoid major failures and the associated catastrophic consequences. Signal processing techniques are required for early defect detection; in this article, a time-domain technique called the Energy Index (EI) is used. This article presents an investigation into the Energy Index's effectiveness in detecting early-stage defect initiation and deterioration, and compares it with the common r.m.s. index, kurtosis, and the Kolmogorov-Smirnov statistical test. It is concluded that EI is a more effective technique for monitoring defect initiation and development than the other statistical parameters.
Keywords: acoustic emission, signal processing, kurtosis, Kolmogorov-Smirnov test
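In the acoustic emission literature, the Energy Index is commonly defined as the squared ratio of a segment's r.m.s. to the overall r.m.s., so healthy segments hover near 1 and defect bursts spike above it; the NumPy sketch below uses that assumed definition on a simulated signal.

```python
import numpy as np

def energy_index(signal: np.ndarray, n_segments: int = 50) -> np.ndarray:
    """EI per segment: (segment RMS / overall RMS)^2 (assumed definition)."""
    overall_ms = np.mean(signal**2)                    # overall mean square = RMS^2
    segments = np.array_split(signal, n_segments)
    return np.array([np.mean(seg**2) for seg in segments]) / overall_ms

# Simulated AE stream: broadband noise with one defect-like burst
rng = np.random.default_rng(2)
sig = rng.standard_normal(50_000)
sig[30_000:30_500] += 8 * np.sin(2 * np.pi * 0.3 * np.arange(500))  # burst

ei = energy_index(sig)
print(f"max EI = {ei.max():.1f} at segment {ei.argmax()} (healthy segments are near 1.0)")
```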
Procedia PDF Downloads 364
4666 Digital Transformation and Environmental Disclosure in Industrial Firms: The Moderating Role of the Top Management Team
Authors: Yongxin Chen, Min Zhang
Abstract:
As industrial enterprises are the primary source of national pollution, environmental information disclosure is a crucial way for them to demonstrate to stakeholders the work they have done in fulfilling their environmental responsibilities and accepting social supervision. In the era of the digital economy, many companies, actively embracing the opportunities that come with digital transformation, have begun to apply digital technology to information collection and disclosure within the enterprise. However, less is known about the relationship between digital transformation and environmental disclosure. This study investigates how enterprise digital transformation affects environmental disclosure in 643 Chinese industrial companies, drawing on information processing theory. Intriguingly, the depth (size) and breadth (diversity) of environmental disclosure increase linearly with the rise in collection, processing, and analytical capabilities during digital transformation. However, the volume of data grows exponentially, leading to increasing marginal economic and environmental costs of utilizing, storing, and managing data. In our empirical findings, linearly increasing benefits and rising marginal costs create a unique inverted U-shaped relationship between the degree of digital transformation and environmental disclosure in the Chinese industrial sector. In addition, based on upper echelons theory, we propose that a top management team with high stability and managerial capability will invest more effort and expense in improving environmental disclosure quality, lowering the carbon footprint caused by digital technology, maintaining data security, and so on. In both of these contexts, the increasing marginal cost curves become steeper, weakening the inverted U-shaped slope between digital transformation and environmental disclosure.
Keywords: digital transformation, environmental disclosure, the top management team, information processing theory, upper echelon theory
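An inverted U is usually tested by adding a squared term to the regression and checking for a negative, significant coefficient; the statsmodels sketch below does this on simulated firm data, not the study's sample.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
dt_degree = rng.uniform(0, 10, 643)                    # degree of digital transformation
disclosure = 2 + 1.5 * dt_degree - 0.12 * dt_degree**2 \
             + rng.normal(0, 1.0, 643)                 # built-in inverted U plus noise

X = sm.add_constant(np.column_stack([dt_degree, dt_degree**2]))
fit = sm.OLS(disclosure, X).fit()
b1, b2 = fit.params[1], fit.params[2]
print(f"b1 = {b1:.2f}, b2 = {b2:.3f} (negative b2 => inverted U)")
print(f"turning point at DT = {-b1 / (2 * b2):.2f}")
```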
Procedia PDF Downloads 141
4665 Review on Wear Behavior of Magnesium Matrix Composites
Authors: Amandeep Singh, Niraj Bala
Abstract:
In recent decades, lightweight materials such as magnesium matrix composites have become a hot topic in materials research due to their excellent mechanical and physical properties. However, relatively little work has been done on the wear behavior of these composites, even though magnesium matrix composites have wide applications in the automobile and aerospace sectors. In this review, an attempt has been made to collect the literature related to the wear behavior of magnesium matrix composites fabricated through various processing techniques such as stir casting, powder metallurgy, and friction stir processing. The effects of different reinforcements, reinforcement content, reinforcement size, wear load, sliding speed, and time have been studied by different researchers in detail, and the wear mechanisms under different experimental conditions have been reviewed. The wear resistance of magnesium and its alloys can be enhanced with the addition of different reinforcements and can be further enhanced by increasing the percentage of added reinforcement. An increase in the applied load during wear tests leads to an increase in the wear rate of magnesium composites.
Keywords: hardness, magnesium matrix composites, reinforcement, wear
Procedia PDF Downloads 329
4664 Direct Conversion of Crude Oils into Petrochemicals under High Severity Conditions
Authors: Anaam H. Al-ShaikhAli, Mansour A. Al-Herz
Abstract:
The research leverages the proven HS-FCC technology to directly crack crude oils into petrochemical building blocks. Crude oils were subjected to an optimized hydroprocessing step in which metal contaminants and sulfur were reduced to a level acceptable for feeding the crudes into the HS-FCC process. The hydroprocessing is carried out in a fixed-bed reactor composed of three layers of catalysts: the crude oil passes through a demetallization catalyst, followed by a desulfurization catalyst, and finally a de-aromatization catalyst. The hydroprocessing was conducted at an optimized liquid hourly space velocity (LHSV), temperature, and pressure for an optimal reduction of metals and sulfur in the crudes. The hydroprocessed crudes were then fed into a micro activity testing (MAT) unit to simulate the HS-FCC technology. The catalytic cracking of the crude oils was conducted over tailored catalyst formulations at an optimized catalyst/oil ratio and cracking temperature for optimal production of total light olefins.
Keywords: petrochemical, catalytic cracking, catalyst synthesis, HS-FCC technology
Procedia PDF Downloads 90
4663 ZigBee Wireless Sensor Nodes with Hybrid Energy Storage System Based on Li-Ion Battery and Solar Energy Supply
Authors: Chia-Chi Chang, Chuan-Bi Lin, Chia-Min Chan
Abstract:
Most ZigBee sensor networks to date make use of nodes with limited processing, communication, and energy capabilities. Energy consumption is of great importance in wireless sensor applications, as their nodes are commonly battery-driven. Once ZigBee nodes are deployed outdoors, limited power may make a sensor network useless before its purpose is complete. At present, there are two strategies for long node and network lifetime. The first strategy is to save as much energy as possible: energy consumption is minimized by switching the node from active mode to sleep mode and by using routing protocols with ultra-low energy consumption. The second strategy is to evaluate the energy consumption of sensor applications as accurately as possible, since an erroneous energy model may render a ZigBee sensor network useless before its batteries are changed. In this paper, we present a ZigBee wireless sensor node with four key modules: a processing and radio unit, an energy harvesting unit, an energy storage unit, and a sensor unit. The processing unit uses a CC2530 for controlling the sensor, carrying out the routing protocol, and performing wireless communication with other nodes. The harvesting unit uses a 2 W solar panel to provide lasting energy for the node. The storage unit consists of a rechargeable 1200 mAh Li-ion battery and a battery charger using a constant-current/constant-voltage algorithm. Our solution to extend node lifetime is implemented, and finally, a long-term sensor network test is used to exhibit the functionality of the solar-powered system.
Keywords: ZigBee, Li-ion battery, solar panel, CC2530
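A back-of-the-envelope lifetime model for such a node, an average current from the sleep/active duty cycle checked against battery capacity and daily solar harvest, is sketched below; the current draws and harvest figure are invented, datasheet-style values.

```python
# Invented, datasheet-style figures for a CC2530-class node
I_ACTIVE_MA = 29.0           # radio on
I_SLEEP_MA = 0.001           # deep sleep
DUTY = 0.01                  # active 1% of the time
BATTERY_MAH = 1200.0
SOLAR_MAH_PER_DAY = 400.0    # assumed average daily harvest from the 2 W panel

avg_current_ma = DUTY * I_ACTIVE_MA + (1 - DUTY) * I_SLEEP_MA
draw_per_day_mah = avg_current_ma * 24

net_per_day = draw_per_day_mah - SOLAR_MAH_PER_DAY
if net_per_day <= 0:
    print(f"harvest ({SOLAR_MAH_PER_DAY} mAh/day) covers the {draw_per_day_mah:.1f} mAh/day draw")
else:
    print(f"battery alone lasts {BATTERY_MAH / net_per_day:.0f} days")
```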
Procedia PDF Downloads 374