Search results for: multi sensor image fusion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8059

6559 Development of Tools for Multi Vehicles Simulation with Robot Operating System and ArduPilot

Authors: Pierre Kancir, Jean-Philippe Diguet, Marc Sevaux

Abstract:

One of the main difficulties in developing multi-robot systems (MRS) is related to the simulation and testing tools available. Indeed, if the differences between simulations and real robots are too significant, the transition from simulation to the robot won’t be possible without another long development phase, and the simulation cannot be validated. Moreover, testing different algorithmic solutions or modifications of robots requires a strong knowledge of current tools and a significant development time. Therefore, the availability of tools for MRS, mainly with flying drones, is crucial to enable the industrial emergence of these systems. This research presents the most commonly used tools for MRS simulation and their main shortcomings, and proposes complementary tools to improve the productivity of designers in the development of multi-vehicle solutions, focused on a fast learning curve and a rapid transition from simulations to real usage. The proposed contributions are based on existing open-source tools such as the Gazebo simulator combined with ROS (Robot Operating System) and the open-source multi-platform autopilot ArduPilot, in order to bring them to a broad audience.
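
As a rough illustration of the kind of tooling described (not code from the paper), the sketch below launches several ArduPilot SITL instances from Python so that a ROS/Gazebo multi-vehicle setup can attach to them. The `sim_vehicle.py` flags shown are the commonly documented ones and should be checked against the installed ArduPilot version.

```python
import subprocess

def launch_fleet(n_vehicles, vehicle="ArduCopter"):
    """Start n_vehicles ArduPilot SITL instances, one per instance number.
    Each instance derives its own set of communication ports from -I."""
    procs = []
    for i in range(n_vehicles):
        cmd = ["sim_vehicle.py", "-v", vehicle, "-I", str(i)]
        procs.append(subprocess.Popen(cmd))
    return procs

if __name__ == "__main__":
    fleet = launch_fleet(3)   # three copters for a multi-vehicle simulation
    for p in fleet:
        p.wait()
```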

Keywords: ROS, ArduPilot, MRS, simulation, drones, Gazebo

Procedia PDF Downloads 196
6558 Multi-Walled Carbon Nanotube Based Water Filter for Virus Pathogen Removal

Authors: K. Domagala, D. Kata, T. Graule

Abstract:

Diseases caused by contaminated drinking water are a worldwide problem, leading to death and severe illness for hundreds of millions of people each year. There is an urgent need for efficient water treatment techniques for virus pathogen removal. The aim of this research was to develop a safe and economical solution for water treatment. In this study, the synthesis of copper-based multi-walled carbon nanotube composites is described. The proposed solution combines a low-cost material with a high active surface area and the antiviral properties of copper. Removal of viruses from water was possible by adsorption, based on electrostatic interactions of negatively charged viruses with the positively charged filter material.

Keywords: multi walled carbon nanotubes, water purification, virus removal, water treatment

Procedia PDF Downloads 121
6557 Discursivity and Creativity: Implementing Pigrum's Multi-Mode Transitional Practices in Upper Division Creative Production Courses

Authors: Michael Filimowicz, Veronika Tzankova

Abstract:

This paper discusses the practical implementation of Derek Pigrum’s multi-mode model of transitional practices in the context of upper division production courses in an interaction design curriculum. The notion of teaching creativity directly was connected to a general notion of “discursivity” by which is meant students’ overall ability to discuss, describe, and engage in dialogue about their creative work. We present a study of how Pigrum’s transitional modes can be mapped onto a variety of course activities, and discuss challenges and outcomes of directly engaging student discursivity in their creative output.

Keywords: teaching creativity, multi-mode transitional practices, discursivity, rich dialogue, art and design education, pedagogy

Procedia PDF Downloads 495
6556 Instant Fire Risk Assessment Using Artificial Neural Networks

Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan

Abstract:

Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fires are very effective in preventing a fire from becoming dangerous at its initial stage. These indices provide the opportunity to prevent or to intervene early by determining the stage of the fire, the hazard potential, and the type of combustion agent from the percentage values of the ambient air components. In this system, a multi-layer perceptron (supervised learning) artificial neural network is modeled on the determined input data and trained with the Levenberg-Marquardt algorithm, following the modeling methods in the literature. The actual values produced by the indices will be compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination will be investigated.
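
For orientation only, the snippet below computes two of the named indices from ambient-air percentages using their commonly quoted forms (Graham's ratio and Trickett's ratio); the exact formulas, thresholds, and the oxygen decrease percentage index used by the authors should be taken from the paper itself, and the sample values are invented.

```python
def oxygen_deficiency(o2, n2):
    """Approximate oxygen consumed (%), from measured O2 and N2 percentages."""
    return 0.265 * n2 - o2

def graham_index(co, o2, n2):
    """Graham's ratio: CO produced per unit of oxygen consumed, in percent."""
    return 100.0 * co / oxygen_deficiency(o2, n2)

def trickett_index(co2, co, h2, o2, n2):
    """Trickett's ratio, indicative of the type of combustion taking place."""
    return (co2 + 0.75 * co - 0.25 * h2) / oxygen_deficiency(o2, n2)

# invented ambient-air readings (volume %), for illustration only
sample = {"o2": 18.2, "n2": 79.0, "co": 0.08, "co2": 1.5, "h2": 0.01}
print(round(graham_index(sample["co"], sample["o2"], sample["n2"]), 2),
      round(trickett_index(sample["co2"], sample["co"], sample["h2"],
                           sample["o2"], sample["n2"]), 3))
```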

Keywords: artificial neural networks, fire, Graham Index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett Index

Procedia PDF Downloads 123
6555 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties

Authors: Hsyi-En Cheng, Ying-Yi Liou

Abstract:

Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower their working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of SnO₂ nanotube arrays incorporates the techniques of a barrier-free anodic aluminum oxide (AAO) template and atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in 5 wt% phosphoric acid solution at 5°C under a constant voltage of 100 V to form porous aluminum oxide. As the Al film was fully oxidized, a 15 min over-anodization and a 30 min post chemical dissolution were used to remove the barrier oxide at the bottom end of the pores to generate a barrier-free AAO template. ALD using reactants of TiCl₄ and H₂O followed, to grow a thin layer of SnO₂ on the template and form SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in 5 wt% phosphoric acid solution at 50°C, upright standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability. Flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films were related to the deposition temperature. The films grown at 350°C had a low electrical resistivity of 3.6×10⁻³ Ω·cm and were, therefore, used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films, characterized by an Ecopia HMS-3000 Hall-effect measurement system, were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂-95% N₂ mixture gas was monitored by a Picotest M3510A 6½-digit multimeter. It was found that, at 200°C, the 30-nm-wall SnO₂ nanotube-array sensor shows the highest response to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100°C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100°C.

Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide

Procedia PDF Downloads 237
6554 Evaluation of Cirata Reservoir Sustainability Using Multi-Dimensional Scaling (MDS)

Authors: Kholil Kholil, Aniwidayati

Abstract:

MDS (Multi-Dimensional Scaling) is a method that has been widely used to evaluate the use of natural resources. Using the Rapfish software tool, we are able to analyze the sustainability level of natural resource use. This paper discusses the level of sustainability of the reservoir using MDS based on five dimensions: (1) Ecology & Layout, (2) Economics, (3) Social & Culture, (4) Regulations & Institutional, and (5) Infrastructure & Technology. The MDS analysis results show that the ecology & layout and regulations & institutional dimensions lack sustainability, with low index scores of 45.76 and 42.24, respectively, while the economics, social & culture, and infrastructure & technology dimensions reach scores of 63.12, 64.42, and 68.64 (only the sufficiently sustainable category). This means that the sustainability performance of the Cirata Reservoir is seriously threatened.

Keywords: MDS, cirata reservoir, carrying capacity, water quality, sustainable development, sedimentation, sustainability index

Procedia PDF Downloads 371
6553 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being the generation of point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks involving industrial applications consist of more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.

Keywords: probabilistic classification vector machines, multi class classification, MCMC, support vector machines

Procedia PDF Downloads 216
6552 The Mediating Role of Bank Image in Customer Satisfaction Building

Authors: H. Emari, Z. Emari

Abstract:

The main objective of this research was to determine the dimensions of service quality in the banking industry of Iran. For this purpose, the study empirically examined the European perspective suggesting that service quality consists of three dimensions: technical, functional, and image. This is an applied research project, and its strategy is causal. A standard questionnaire was used for collecting the data. 287 customers of Melli Bank in the Northwest were selected through cluster sampling and studied. The results from the banking service sample revealed that overall service quality is influenced more by a consumer’s perception of technical quality than of functional quality. Accordingly, the Gronroos model is a more appropriate representation of service quality than the American perspective, with its limited concentration on the dimension of functional quality, in the banking industry of Iran. Knowing the key dimensions of service quality in this industry and planning for their improvement can therefore increase the satisfaction of customers and the productivity of this industry.

Keywords: technical quality, functional quality, banking, image, mediating role

Procedia PDF Downloads 358
6551 Narrating 1968: Felipe Cazals’ Canoa (1976) and Images of Massacre

Authors: Nancy Elizabeth Naranjo Garcia

Abstract:

Canoa (1976) by Felipe Cazals is a film that exposes the consequences of the power that the Mexican State exercised over the 1968 student movement. The film approaches the Tlatelolco Massacre from a point of view that takes into consideration the events that led up to it. Nonetheless, the reference to the political tension in Canoa remains ambiguous. Thus, the cinematographic representation refers to an event that leaves space for reflection and, as a consequence, leaves evidence of an image that signals the notion of survival, as Georges Didi-Huberman points out. In addition to denouncing the oppressive force of the Mexican State, the images in Canoa also emphasize what did not happen in Tlatelolco and its condensation with the student activists. To observe the images that Canoa offers in a new light, this work proposes further exploration through the following questions: How do the images in Canoa narrate? How are the images inserted in the film? In this fashion, a more profound comprehension of the objective and the essence of the images becomes feasible. As a result, it is possible to analyze the images of Canoa against the real killing at San Miguel Canoa as documented in literature. The film visualizes a testimony of an event that once seemed unimaginable, an image that anticipates and structures the proceeding event. Therefore, this study takes a second look at how Canoa considers not only the killing at San Miguel Canoa and the Tlatelolco Massacre, but goes further to contextualize an unimaginable image.

Keywords: cinematographic representation, student movement, Tlatelolco Massacre, unimaginable image

Procedia PDF Downloads 203
6550 Object Oriented Classification Based on Feature Extraction Approach for Change Detection in Coastal Ecosystem across Kochi Region

Authors: Mohit Modi, Rajiv Kumar, Manojraj Saxena, G. Ravi Shankar

Abstract:

Change detection of coastal ecosystems plays a vital role in monitoring and managing natural resources along coastal regions. The present study mainly focuses on the decadal change in the Kochi islands connecting the urban flatland areas and the coastal regions where sand deposits have taken place. With this in view, change detection has been carried out in the Kochi area to apprehend the urban growth and industrialization leading to a decrease in the wetland ecosystem. The region lies between 76°11'19.134"E to 76°25'42.193"E and 9°52'35.719"N to 10°5'51.575"N on the south-western coast of India. An IRS LISS-IV satellite image has been processed using a rule-based algorithm to classify the LULC and to interpret the changes between 2005 and 2015. The approach takes two steps, i.e., extracting features as a single GIS vector layer using different parametric values and then dissolving them. Multi-resolution segmentation has been carried out at scales ranging from 10 to 30. The different classes like aquaculture, agricultural land, built-up, wetlands, etc. were extracted using parameters like NDVI, mean layer values, and texture-based features with corresponding threshold values in a rule-set algorithm. The objects obtained in the segmentation process were visualized overlaid on the satellite image at a scale of 15. This layer was further segmented using the spectral difference segmentation rule between the objects. These individual class layers were dissolved into the basic segmented layer of the image and were interpreted in a vector-based GIS programme to achieve higher accuracy. The results show a rapid increase of 40% in industrial area relative to the 2005 industrial area statistics. There is a decrease in wetland area, which has been converted into built-up land. New roads have been constructed connecting the islands to urban areas as well as to highways. An increase in the coastal region due to sand deposition is also visible. The outcome is well supported by quantitative assessments, which will empower a rich understanding of land use land cover change for appropriate policy intervention and further monitoring.
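
As a simplified illustration of the rule-based feature extraction described above, the sketch below computes NDVI from near-infrared and red bands and applies toy thresholds; the threshold values and class names are assumptions, not those used in the study.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index from NIR and red band arrays."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def rule_based_classes(nir, red, veg_thresh=0.3, water_thresh=0.0):
    """Toy rule set: NDVI thresholds are illustrative assumptions only."""
    v = ndvi(nir, red)
    classes = np.full(v.shape, "built-up/other", dtype=object)
    classes[v >= veg_thresh] = "vegetation/agriculture"
    classes[v < water_thresh] = "water/wetland"
    return classes
```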

Keywords: land use land cover, multiresolution segmentation, NDVI, object based classification

Procedia PDF Downloads 175
6549 Supply Chain Network Design for Perishable Products in Developing Countries

Authors: Abhishek Jain, Kavish Kejriwal, V. Balaji Rao, Abhigna Chavda

Abstract:

Increasing environmental and social concerns are forcing companies to take a fresh view of the impact of supply chain operations on environment and society when designing a supply chain. A challenging task in today’s food industry is the distribution of high-quality food items throughout the food supply chain. Improper storage and unwanted transportation are the major hurdles in the food supply chain and can be tackled by making dynamic storage facility location decisions along with the distribution network. Since the food supply chain in India is one of the biggest supply chains in the world, companies should also consider the environmental impact caused by the supply chain. This project proposes a multi-objective optimization model for distribution in a food supply chain network (SCN) that integrates sustainability into decision-making. A Multi-Objective Mixed-Integer Linear Programming (MOMILP) model trading off overall cost against the environmental impact caused by the SCN is formulated for the problem. The goal of the MOMILP is to determine the Pareto solutions for overall cost and environmental impact caused by the supply chain. This is solved using GAMS with CPLEX as the third-party solver. The outcomes of the project are Pareto solutions for overall cost and environmental impact, the facilities to be operated, and the amounts to be transferred to each warehouse during the time horizon.
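
The study formulates the MOMILP in GAMS and solves it with CPLEX. Purely as an illustration of how such a bi-objective cost-versus-emissions distribution model can be scalarised to trace Pareto points, here is a hypothetical weighted-sum sketch using the open-source PuLP/CBC toolchain with invented data.

```python
import pulp

# toy data: 2 plants, 2 warehouses; invented unit costs and emissions per unit shipped
cost = {("P1", "W1"): 4, ("P1", "W2"): 6, ("P2", "W1"): 5, ("P2", "W2"): 3}
co2  = {("P1", "W1"): 2, ("P1", "W2"): 1, ("P2", "W1"): 3, ("P2", "W2"): 2}
supply = {"P1": 80, "P2": 70}
demand = {"W1": 60, "W2": 50}

def solve(weight_cost):
    """Weighted-sum scalarisation; sweeping the weight traces Pareto points."""
    prob = pulp.LpProblem("food_scn", pulp.LpMinimize)
    x = {k: pulp.LpVariable(f"x_{k[0]}_{k[1]}", lowBound=0) for k in cost}
    prob += (weight_cost * pulp.lpSum(cost[k] * x[k] for k in cost)
             + (1 - weight_cost) * pulp.lpSum(co2[k] * x[k] for k in cost))
    for p in supply:                       # plant capacity constraints
        prob += pulp.lpSum(x[(p, w)] for w in demand) <= supply[p]
    for w in demand:                       # warehouse demand constraints
        prob += pulp.lpSum(x[(p, w)] for p in supply) >= demand[w]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {k: x[k].value() for k in cost}

for w in (0.2, 0.5, 0.8):
    print(w, solve(w))
```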

Keywords: multi-objective mixed linear programming, food supply chain network, GAMS, multi-product, multi-period, environment

Procedia PDF Downloads 309
6548 Multi-Response Optimization of CNC Milling Parameters Using Taguchi Based Grey Relational Analysis for AA6061 T6 Aluminium Alloy

Authors: Varsha Singh, Kishan Fuse

Abstract:

This paper presents a study of the grey-Taguchi method to optimize CNC milling parameters of AA6061 T6 aluminium alloy. The grey-Taguchi method combines Taguchi-based design of experiments (DOE) with grey relational analysis (GRA). Multi-response optimization of different quality characteristics, such as surface roughness, material removal rate, and cutting forces, is done using grey relational analysis (GRA). The milling parameters considered for the experiments are cutting speed, feed per tooth, and depth of cut, each at three levels. A grey relational grade is used to estimate the overall quality characteristic performance. Taguchi’s L9 orthogonal array is used for the design of experiments. MINITAB 17 software is used for the optimization. Analysis of variance (ANOVA) is used to identify the most influential parameter. The experimental results show that grey relational analysis is an effective method for optimizing multi-response characteristics. The optimum results are finally validated by performing a confirmation test.
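
A compact sketch of the grey relational analysis step is given below, with invented response values for the nine L9 runs; the normalisation directions (smaller-the-better for roughness and force, larger-the-better for MRR) and the distinguishing coefficient ζ = 0.5 follow common practice rather than the paper's exact settings.

```python
import numpy as np

# invented responses per run: Ra roughness, material removal rate, cutting force
resp = np.array([
    [1.8,  95, 210], [1.5, 120, 230], [1.2, 150, 260],
    [1.6, 110, 215], [1.3, 140, 240], [1.1, 170, 270],
    [1.4, 130, 225], [1.2, 160, 250], [1.0, 190, 280],
])
larger_better = [False, True, False]

def grey_relational_grade(resp, larger_better, zeta=0.5):
    resp = resp.astype(float)
    norm = np.empty_like(resp)
    for j, lb in enumerate(larger_better):
        col = resp[:, j]
        rng = col.max() - col.min()
        norm[:, j] = (col - col.min()) / rng if lb else (col.max() - col) / rng
    delta = 1.0 - norm                          # deviation from the ideal sequence
    gcoef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return gcoef.mean(axis=1)                   # equal-weight grade for each run

grades = grey_relational_grade(resp, larger_better)
print("grades:", grades.round(3), "best run:", int(grades.argmax()) + 1)
```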

Keywords: ANOVA, CNC milling, grey relational analysis, multi-response optimization

Procedia PDF Downloads 301
6547 Integrated Approach of Quality Function Deployment, Sensitivity Analysis and Multi-Objective Linear Programming for Business and Supply Chain Programs Selection

Authors: T. T. Tham

Abstract:

The aim of this study is to propose an integrated approach to determine the most suitable programs, based on Quality Function Deployment (QFD), Sensitivity Analysis (SA) and a Multi-Objective Linear Programming model (MOLP). Firstly, QFD is used to determine business requirements and transform them into business and supply chain programs. From the QFD, technical scores of all programs are obtained. All programs are then evaluated through five criteria (productivity, quality, cost, technical score, and feasibility). Sets of weights for these criteria are built using Sensitivity Analysis. The Multi-Objective Linear Programming model is applied to select suitable programs according to multiple conflicting objectives under a budget constraint. A case study from the Sai Gon-Mien Tay Beer Company is given to illustrate the proposed methodology. The outcome of the study provides a comprehensive picture for companies to select suitable programs and obtain the optimal solution according to their preferences.

Keywords: business program, multi-objective linear programming model, quality function deployment, sensitivity analysis, supply chain management

Procedia PDF Downloads 110
6546 Iris Detection on RGB Image for Controlling Side Mirror

Authors: Norzalina Othman, Nurul Na’imy Wan, Azliza Mohd Rusli, Wan Noor Syahirah Meor Idris

Abstract:

Iris detection is a process in which the position of the eyes is extracted from face images. It is a current method used for many applications, such as security and drowsiness detection. This paper proposes the use of eye detection for controlling the side mirrors of motor vehicles. The eye detection method aims to make it easy for the driver to adjust the side mirrors automatically. The system determines the midpoint coordinate of the detected eyes on the RGB (color) image, and the y-coordinate signal is sent to a controller in order to rotate the angle of the side mirror on the vehicle. The eye region was cropped, and the midpoint coordinate was successfully detected from the iris circle using Viola-Jones detection and the circular Hough transform on the RGB image. The midpoint coordinates from the experiment are tested using the controller to determine the angle of rotation of the side mirrors.
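
A minimal OpenCV sketch of the described pipeline (Viola-Jones eye detection followed by a circular Hough transform and midpoint computation) is shown below; the file name and Hough parameters are assumptions for illustration, not the authors' settings.

```python
import cv2
import numpy as np

img = cv2.imread("driver_face.jpg")                     # hypothetical colour frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Viola-Jones eye detector shipped with OpenCV
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

centres = []
for (x, y, w, h) in eyes:
    roi = gray[y:y + h, x:x + w]
    # circular Hough transform to locate the iris inside each eye region
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, 1.2, w,
                               param1=80, param2=20,
                               minRadius=w // 8, maxRadius=w // 2)
    if circles is not None:
        cx, cy, _r = circles[0, 0]
        centres.append((x + cx, y + cy))

if len(centres) == 2:
    midpoint = np.mean(centres, axis=0)                 # point between the two irises
    print("y-coordinate sent to the mirror controller:", float(midpoint[1]))
```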

Keywords: iris detection, midpoint coordinates, RGB images, side mirror

Procedia PDF Downloads 413
6545 The Role of KontraS as Track-6 on Multi Track Diplomacy for Conflict Resolution: Case Study Human Rights Crisis in Myanmar in 2015

Authors: Hardi Alunaza, Mauidhotu Rofiq

Abstract:

This research attempts to describe the role of KontraS as track 6 of multi-track diplomacy for conflict resolution in Myanmar in 2015. The researchers take a specific interest in the concepts of multi-track diplomacy and transnational advocacy to analyze the phenomenon. Furthermore, this essay uses the descriptive method with a qualitative approach. The data collection technique is a literature study consisting of books, journals, and data from reliable websites supporting the explanation of this research. The result of this research is divided into two important points explaining the role of KontraS in the human rights crisis in Myanmar. First, KontraS, as a human rights NGO in Indonesia, was able to advocate against human rights violations that occurred in other countries by encouraging the Indonesian Government to take part in the resolution of human rights issues affecting the Rohingya people in Burma. Second, KontraS takes advantage of transnational advocacy networks as a form of political and accountability responsibility of a non-governmental organization in the face of human rights crises in other countries.

Keywords: conflict resolution, human rights crisis, multi track diplomacy, transnational advocacy

Procedia PDF Downloads 312
6544 Aspect-Level Sentiment Analysis with Multi-Channel and Graph Convolutional Networks

Authors: Jiajun Wang, Xiaoge Li

Abstract:

The purpose of the aspect-level sentiment analysis task is to identify the sentiment polarity of aspects in a sentence. Currently, most methods mainly focus on using neural networks and attention mechanisms to model the relationship between aspects and context, but they ignore the dependence of words over different ranges in the sentence, resulting in deviations when assigning relational weights to words other than the aspect words. To solve these problems, we propose a new aspect-level sentiment analysis model that combines a multi-channel convolutional network and a graph convolutional network (GCN). Firstly, the context and the degree of association between words are characterized by Long Short-Term Memory (LSTM) and a self-attention mechanism. In addition, a multi-channel convolutional network is used to extract the features of words over different ranges. Finally, a graph convolutional network is used to associate the node information of the dependency tree structure. We conduct experiments on four benchmark datasets. The experimental results are compared with those of other models, showing that our model is more effective.

Keywords: aspect-level sentiment analysis, attention, multi-channel convolution network, graph convolution network, dependency tree

Procedia PDF Downloads 199
6543 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter

Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri

Abstract:

Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in the estimation of heart rate (HR) and incidences of false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation. Cardiac activity, because of its relatively high electrical energy, may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on the detection of R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel Signal Quality Index (SQI) assessment technique. SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters based upon the individual SQIs. Data fusion of the HR estimates was then performed by weighting each estimate by the Kalman filter’s SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual HR estimates. This method was evaluated on the MIMIC II database of PhysioNet, collected from bedside monitors of ICU patients. The method provides an accurate HR estimate even in the presence of noise and artifacts.
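
The core SQI idea can be sketched as follows: the Teager-Kaiser energy of each auxiliary channel is correlated with that of the reference signal, and the per-channel HR estimates are fused using those qualities as weights. This is a simplified stand-in for the paper's Kalman-filter fusion with SQI-modified innovations, and all numbers are invented.

```python
import numpy as np

def teager_energy(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def sqi(aux, ref):
    """Quality of an auxiliary channel (EEG/EMG/EOG): correlation of its
    Teager energy with that of the reference (ECG or ABP), same length."""
    return abs(np.corrcoef(teager_energy(aux), teager_energy(ref))[0, 1])

def fuse_hr(estimates, qualities):
    """Quality-weighted fusion of per-channel heart-rate estimates (bpm)."""
    w = np.asarray(qualities, dtype=float)
    return float(np.dot(w / w.sum(), estimates))

# invented per-channel HR estimates and their quality indices
hr_est  = [78.0, 74.5, 80.2, 77.1]   # from ECG, ABP, EEG artifact, EMG artifact
quality = [0.95, 0.90, 0.40, 0.25]
print(round(fuse_hr(hr_est, quality), 1))
```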

Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion

Procedia PDF Downloads 686
6542 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

In many cases, there is a time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in evaluating the performance of organizations. Recently, a couple of DEA models were developed for considering the time lag effect in the efficiency evaluation of research activities. The Multi-periods Input (MpI) and Multi-periods Output (MpO) models are integrated models to calculate simple efficiency considering the time lag effect. However, these models cannot discriminate efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to ‘1’. That is, efficient DMUs cannot be discriminated because their efficiency scores are the same. Thus, this paper suggests a super-efficiency model for efficiency evaluation under consideration of the time lag effect, based on the MpO model. A case example using a long-term research project is given to compare the suggested model with the MpO model.
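
To make the super-efficiency idea concrete, the sketch below solves the standard input-oriented CCR super-efficiency LP: the DMU under evaluation is removed from its own reference set, so efficient DMUs obtain scores above 1 and become distinguishable. It deliberately ignores the multi-period/time-lag structure of the MpO model and uses toy data.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, k):
    """Input-oriented CCR super-efficiency score of DMU k.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    ref = [j for j in range(n) if j != k]        # exclude DMU k from reference set
    c = np.zeros(1 + len(ref))
    c[0] = 1.0                                   # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                           # sum lambda_j x_ij <= theta * x_ik
        A_ub.append(np.concatenate(([-X[k, i]], X[ref, i]))); b_ub.append(0.0)
    for r in range(s):                           # sum lambda_j y_rj >= y_rk
        A_ub.append(np.concatenate(([0.0], -Y[ref, r]))); b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + len(ref)), method="highs")
    return res.x[0] if res.success else np.nan

# toy data: 4 DMUs, 2 inputs, 1 output
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(super_efficiency(X, Y, k), 3) for k in range(4)])
```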

Keywords: DEA, super-efficiency, time lag, multi-periods input

Procedia PDF Downloads 460
6541 Multi-Scaled Non-Local Means Filter for Medical Images Denoising: Empirical Mode Decomposition vs. Wavelet Transform

Authors: Hana Rabbouch

Abstract:

In recent years, there has been considerable growth in denoising techniques devoted mainly to medical imaging. This important evolution is not only due to the progress of computing techniques, but also to the emergence of multi-resolution analysis (MRA) on both mathematical and algorithmic bases. In this paper, a comparative study is conducted between the two best-known MRA-based decomposition techniques: the Empirical Mode Decomposition (EMD) and the Discrete Wavelet Transform (DWT). The comparison is carried out in a framework of multi-scale denoising, where a Non-Local Means (NLM) filter is applied scale-by-scale to a sample of benchmark medical images. The results prove the effectiveness of the multiscale denoising, especially when the NLM filtering is coupled with the EMD.
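
A sketch of the DWT branch of the comparison is shown below: a noisy image is decomposed with a 2-D wavelet transform, an NLM filter is applied scale-by-scale to the detail sub-bands, and the image is reconstructed; the EMD branch would instead decompose the image into intrinsic mode functions. Wavelet choice and filter parameters are illustrative assumptions.

```python
import numpy as np
import pywt
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

img = img_as_float(data.camera())          # stand-in for a benchmark medical image
noisy = img + 0.08 * np.random.default_rng(0).standard_normal(img.shape)

# 2-level DWT decomposition of the noisy image
coeffs = pywt.wavedec2(noisy, "db4", level=2)
denoised = [coeffs[0]]                     # keep the coarse approximation unchanged
for level in coeffs[1:]:
    denoised.append(tuple(
        denoise_nl_means(band,
                         h=1.15 * float(estimate_sigma(band)),
                         patch_size=5, patch_distance=6, fast_mode=True)
        for band in level))                # NLM applied scale-by-scale
restored = pywt.waverec2(denoised, "db4")
```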

Keywords: medical imaging, non local means, denoising, multiscaled analysis, empirical mode decomposition, wavelets

Procedia PDF Downloads 130
6540 District Selection for Geotechnical Settlement Suitability Using GIS and Multi Criteria Decision Analysis: A Case Study in Denizli, Turkey

Authors: Erdal Akyol, Mutlu Alkan

Abstract:

Multi-criteria decision analysis (MCDA) covers both data and experience. It is very commonly used to solve problems with many parameters and uncertainties. GIS-supported solutions improve and speed up the decision process. Weighted grading, as an MCDA method, is employed for solving geotechnical problems. In this study, the geotechnical parameters, namely soil type, SPT (N) blow count, shear wave velocity (Vs), and depth of the underground water level (DUWL), have been combined in MCDA and GIS. In terms of geotechnical aspects, the settlement suitability of the municipal area was analyzed by the method. The MCDA results were compatible with the geotechnical observations and experience. The method can be employed in geotechnically oriented microzonation studies if the criteria are well evaluated.
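
Conceptually, the weighted grading step reduces to a weighted sum of criterion grades per mapping unit; a toy numeric sketch with assumed grades and weights is given below. In the actual study each criterion is a GIS layer and the sum is evaluated cell by cell.

```python
import numpy as np

# hypothetical 1-5 suitability grades per criterion for three candidate districts
#                  soil type, SPT(N), Vs, depth of groundwater
grades = np.array([[4, 3, 4, 2],
                   [2, 4, 3, 5],
                   [5, 5, 4, 4]], dtype=float)
weights = np.array([0.30, 0.25, 0.25, 0.20])   # assumed expert weights, sum to 1

suitability = grades @ weights                  # weighted grading score per district
ranking = np.argsort(-suitability)
print(suitability.round(2), "best district index:", int(ranking[0]))
```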

Keywords: GIS, spatial analysis, multi criteria decision analysis, geotechnics

Procedia PDF Downloads 446
6539 Enhancing Healthcare Delivery in Low-Income Markets: An Exploration of Wireless Sensor Network Applications

Authors: Innocent Uzougbo Onwuegbuzie

Abstract:

Healthcare delivery in low-income markets is fraught with numerous challenges, including limited access to essential medical resources, inadequate healthcare infrastructure, and a significant shortage of trained healthcare professionals. These constraints lead to suboptimal health outcomes and a higher incidence of preventable diseases. This paper explores the application of Wireless Sensor Networks (WSNs) as a transformative solution to enhance healthcare delivery in these underserved regions. WSNs, comprising spatially distributed sensor nodes that collect and transmit health-related data, present opportunities to address critical healthcare needs. Leveraging WSN technology facilitates real-time health monitoring and remote diagnostics, enabling continuous patient observation and early detection of medical issues, especially in areas with limited healthcare facilities and professionals. The implementation of WSNs can enhance the overall efficiency of healthcare systems by enabling timely interventions, reducing the strain on healthcare facilities, and optimizing resource allocation. This paper highlights the potential benefits of WSNs in low-income markets, such as cost-effectiveness, increased accessibility, and data-driven decision-making. However, deploying WSNs involves significant challenges, including technical barriers like limited internet connectivity and power supply, alongside concerns about data privacy and security. Moreover, robust infrastructure and adequate training for local healthcare providers are essential for successful implementation. It further examines future directions for WSNs, emphasizing innovation, scalable solutions, and public-private partnerships. By addressing these challenges and harnessing the potential of WSNs, it is possible to revolutionize healthcare delivery and improve health outcomes in low-income markets.

Keywords: wireless sensor networks (WSNs), healthcare delivery, low-income markets, remote patient monitoring, health data security

Procedia PDF Downloads 16
6538 Fabrication of a New Electrochemical Sensor Based on New Nanostructured Molecularly Imprinted Polypyrrole for Selective and Sensitive Determination of Morphine

Authors: Samaneh Nabavi, Hadi Shirzad, Arash Ghoorchian, Maryam Shanesaz, Reza Naderi

Abstract:

Morphine (MO), the most effective painkiller, is considered the reference by which analgesics are assessed. For biomedical applications, it is necessary to detect and maintain MO concentrations in blood and urine within safe ranges. To date, the available techniques for detecting MO are often expensive. Recently, many electrochemical sensors for the direct determination of MO have been constructed. A molecularly imprinted polymer (MIP) is a polymeric material which has built-in functionality for the recognition of a particular chemical substance through its complementary cavity. This paper reports a sensor for MO using a combination of a molecularly imprinted polymer (MIP) and differential-pulse voltammetry (DPV). Electropolymerization of MO-doped polypyrrole yielded poor quality, but a well-doped nanostructure with increased impregnation was obtained at pH 12. Above a pH of 11, MO is in its anionic form. The effect of various experimental parameters, including pH, scan rate, and accumulation time, on the voltammetric response of MO was investigated. Under the optimum conditions, the concentration of MO was determined using DPV in a linear range of 7.07 × 10⁻⁶ to 2.1 × 10⁻⁴ mol L⁻¹, with a correlation coefficient of 0.999 and a detection limit of 13.3 × 10⁻⁸ mol L⁻¹. The effect of common interferents, namely ascorbic acid (AA) and uric acid (UA), on the current response of MO was studied. The modified electrode can be used for the determination of MO spiked into urine samples, and excellent recovery results were obtained. The nanostructured polypyrrole films were characterized by field emission scanning electron microscopy (FESEM) and Fourier transform infrared (FTIR) spectroscopy.

Keywords: morphine detection, sensor, polypyrrole, nanostructure, molecularly imprinted polymer

Procedia PDF Downloads 412
6537 A Multigranular Linguistic ARAS Model in Group Decision Making

Authors: Wiem Daoud Ben Amor, Luis Martínez López, Hela Moalla Frikha

Abstract:

Most multi-criteria group decision making (MCGDM) problems dealing with qualitative criteria require consideration of a large background of expert information. It is common that experts have different degrees of knowledge when giving their assessments of alternatives against the criteria. So, it seems logical that they use different evaluation scales to express their judgments, i.e., multi-granular linguistic scales. In this context, we propose the extension of the classical additive ratio assessment (ARAS) method to the case of hierarchical linguistic terms for managing multi-granular linguistic scales in uncertain contexts, where uncertainty is modeled by means of linguistic information. The proposed approach is called the extended hierarchical linguistic ARAS method (ARAS-ELH). Within the ARAS-ELH approach, the DM can diagnose the results (the ranking of the alternatives) in a decomposed style, i.e., not only at one level of the hierarchy but also at the intermediate ones. Also, the developed approach allows a feedback transformation, i.e., the collective final results of all experts can be transformed to any level of the extended linguistic hierarchy that each expert has previously used. Therefore, the ARAS-ELH technique makes it easier for decision-makers to understand the results. Finally, an MCGDM case study is given to illustrate the proposed approach.

Keywords: additive ratio assessment, extended hierarchical linguistic, multi-criteria group decision making problems, multi granular linguistic contexts

Procedia PDF Downloads 198
6536 Fracture Crack Monitoring Using Digital Image Correlation Technique

Authors: B. G. Patel, A. K. Desai, S. G. Shah

Abstract:

The main objective of this paper is to develop a new measurement technique that does not require touching the object. DIC is an advanced measurement technique used to measure the displacement of particles with very high accuracy. This powerful, innovative technique correlates two image segments to determine the similarity between them. For this study, nine geometrically similar beam specimens of different sizes, with fibers (steel and glass) and without fibers, were tested under three-point bending in a closed-loop servo-controlled machine under crack mouth opening displacement control with a rate of opening of 0.0005 mm/sec. Digital images were captured before loading (undeformed state) and at different instances of loading and were analyzed using correlation techniques to compute the surface displacements, crack opening and sliding displacements, load-point displacement, crack length, and crack tip location. It was seen that the CMOD and vertical load-point displacement computed using DIC analysis match well with those measured experimentally.
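
The correlation step at the heart of DIC can be sketched as template matching of a reference subset inside a search window of the deformed image; the normalised cross-correlation peak gives the integer-pixel displacement (practical DIC codes add sub-pixel interpolation on top of this). Subset and search sizes below are illustrative.

```python
import numpy as np
from skimage.feature import match_template

def subset_displacement(ref_img, def_img, y, x, subset=21, search=15):
    """Track one square subset centred at (y, x) in the reference (undeformed)
    image into the deformed image; (y, x) must lie far enough from the border."""
    half = subset // 2
    tpl = ref_img[y - half:y + half + 1, x - half:x + half + 1]
    win = def_img[y - half - search:y + half + search + 1,
                  x - half - search:x + half + search + 1]
    cc = match_template(win, tpl)                 # normalised cross-correlation map
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    return dy - search, dx - search               # integer-pixel displacement (v, u)
```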

Keywords: Digital Image Correlation, fibres, self compacting concrete, size effect

Procedia PDF Downloads 377
6535 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective

Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli

Abstract:

In this paper, an energy-aware method is presented, integrating energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by utilizing data correlation patterns. The paper introduces a game theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing correlation-aware routing with energy limitations. By creating an algorithm that blends energy-aware route selection strategies with the best reaction dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing the best routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.

Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks

Procedia PDF Downloads 60
6534 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images is employed, encompassing diverse cases. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are employed to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain their practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation showcases a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
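
Of the quantitative metrics listed, the overlap measures are straightforward to compute from binary masks; a small sketch is given below (the Hausdorff distance, which requires boundary extraction, is omitted).

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between binary masks (1 = mass pixel)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard(pred, target, eps=1e-7):
    """Jaccard index (IoU) between binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def sensitivity_specificity(pred, target):
    """Pixel-wise sensitivity (recall of mass pixels) and specificity."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return tp / (tp + fn), tn / (tn + fp)
```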

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 68
6533 Effective Texture Features for Segmented Mammogram Images Based on Multi-Region of Interest Segmentation Method

Authors: Ramayanam Suresh, A. Nagaraja Rao, B. Eswara Reddy

Abstract:

Texture features of mammogram images are useful for finding masses or cancer cases in mammography and have long been used by radiologists. Texture features are far more successful on segmented images than on unsegmented images. It is necessary to perform segmentation so that cancerous and non-cancerous regions are specified separately. Region of interest (ROI) selection is the most commonly used technique for mammogram segmentation. A limitation of this method is that it cannot scale segmentation to large collections of mammogram images. Therefore, this paper proposes multi-ROI segmentation to address the above limitation. It greatly supports finding the best texture features of mammogram images. An experimental study demonstrates the effectiveness of the proposed work using benchmarked images.
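
As an illustration of extracting texture features from segmented ROIs, the sketch below computes GLCM-based descriptors with scikit-image for each ROI mask; the chosen distances, angles, and properties are assumptions rather than the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def roi_texture_features(image, roi_mask):
    """GLCM texture descriptors for one ROI of a uint8 grayscale mammogram."""
    ys, xs = np.nonzero(roi_mask)
    roi = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

def multi_roi_features(image, roi_masks):
    """Apply the extractor to every ROI produced by the multi-ROI segmentation."""
    return [roi_texture_features(image, m) for m in roi_masks]
```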

Keywords: texture features, region of interest, multi-ROI segmentation, benchmarked images

Procedia PDF Downloads 297
6532 Clothes Identification Using Inception ResNet V2 and MobileNet V2

Authors: Subodh Chandra Shakya, Badal Shrestha, Suni Thapa, Ashutosh Chauhan, Saugat Adhikari

Abstract:

To tackle the problem of clothes identification, we used different architectures of Convolutional Neural Networks. Among the different architectures, the outcomes from Inception ResNet V2 and MobileNet V2 seemed promising. On comparison of the metrics, we observed that Inception ResNet V2 slightly outperforms MobileNet V2 for this purpose. This paper therefore proposes a clothes identifier using Inception ResNet V2 and also contains a comparison between the outcomes of Inception ResNet V2 and MobileNet V2. The document contains the results and findings of the research that we performed on the DeepFashion dataset. To improve the dataset, we used different image preprocessing techniques like image shearing, image rotation, and denoising. The whole experiment was conducted with the intention of testing the efficiency of convolutional neural networks on clothes identification so that we could develop a reliable system that is good enough at identifying the clothes worn by users. The whole system can be integrated with some kind of recommendation system.
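
A hedged sketch of how the two backbones can be swapped for transfer learning on DeepFashion-style categories is shown below, using the standard Keras application classes; the classification head, input size, and the 46-category count are assumptions, not the authors' exact setup.

```python
import tensorflow as tf

def build_classifier(backbone_name, num_classes, img_size=(224, 224, 3)):
    """Build a classifier with either of the two compared backbones frozen."""
    backbones = {
        "inception_resnet_v2": tf.keras.applications.InceptionResNetV2,
        "mobilenet_v2": tf.keras.applications.MobileNetV2,
    }
    base = backbones[backbone_name](include_top=False, weights="imagenet",
                                    input_shape=img_size, pooling="avg")
    base.trainable = False                     # transfer learning: freeze the backbone
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, out)

model = build_classifier("mobilenet_v2", num_classes=46)   # assumed category count
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```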

Keywords: inception ResNet, convolutional neural net, deep learning, confusion matrix, data augmentation, data preprocessing

Procedia PDF Downloads 173
6531 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications

Authors: Jongbae Lee, Seongsoo Lee

Abstract:

Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy for price reduction. The SAE J2716 SENT (single edge nibble transmission) protocol transmits digital waveforms directly instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, configuration register, tick period generator, CRC (cyclic redundancy code) generator/checker, and TX/RX (transmission/reception) buffers. The frame encoder/decoder is implemented as a finite state machine, and it controls the whole SENT interface. The configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator generates tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffers store transmitted/received data. The designed SENT interface can send or receive digital data at 25-65 kbps with a 3 µs tick. Synthesized in a 0.18 µm fabrication technology, it is implemented in about 2,500 gates.
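
The designed interface itself is in Verilog, but the frame timing the encoder must produce can be illustrated in a few lines: a 56-tick sync/calibration pulse followed by nibble pulses of (12 + value) ticks. The sketch below is a simplified model only; CRC generation is left to the caller and the pause-pulse length, which is configurable in the standard, is a placeholder.

```python
def sent_frame_ticks(nibbles, status=0, pause=False):
    """Encode one SENT frame as a list of pulse lengths in tick periods.
    Sync pulse = 56 ticks; each nibble pulse = 12 + value ticks (12..27).
    The last element of `nibbles` is expected to be the CRC nibble."""
    assert all(0 <= n <= 15 for n in nibbles)
    frame = [56]                       # sync / calibration pulse
    frame.append(12 + (status & 0xF))  # status & communication nibble
    frame += [12 + n for n in nibbles]
    if pause:
        frame.append(77)               # placeholder: pause pulse length is configurable
    return frame

# two 12-bit sensor channels -> six data nibbles plus a CRC nibble (placeholder 0x5)
print(sent_frame_ticks([0x3, 0xA, 0x7, 0x1, 0xF, 0x2, 0x5]))
```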

Keywords: digital sensor interface, SAE J2716, SENT, verilog HDL

Procedia PDF Downloads 287
6530 A Multi-Agent Simulation of Serious Games to Predict Their Impact on E-Learning Processes

Authors: Ibtissem Daoudi, Raoudha Chebil, Wided Lejouad Chaari

Abstract:

Serious games constitute a recent and attractive approach intended to replace boring classical courses. However, the choice of the serious game adapted to a specific learning environment remains a challenging task that makes teachers unwilling to adopt this concept. To fill this gap, we present, in this paper, a multi-agent-based simulator that allows predicting the impact of integrating a serious game into a learning environment, given several game and player characteristics. As results, the presented tool gives the intensities of several emotional aspects characterizing learners’ reactions to the adoption of the serious game. The presented simulator is tested to predict the effect of basing a coding course on the serious game ”CodeCombat”. The obtained results are compared with feedback from using the same serious game in a real learning process.

Keywords: emotion, learning process, multi-agent simulation, serious games

Procedia PDF Downloads 390