Search results for: edge detection algorithm

4464 DNA-Based Gold Nanoprobe Biosensor to Detect Pork Contaminant

Authors: Rizka Ardhiyana, Liesbetini Haditjaroko, Sri Mulijani, Reki Ashadi Wicaksono, Raafqi Ranasasmita

Abstract:

Designing a sensitive, specific and easy-to-use method to detect pork contamination in the food industry remains a major challenge. In the current study, we developed a sensitive thiol-bond AuNP-probe biosensor that changes color when it detects pork DNA in the cytochrome B region. The interaction between the biosensor and the DNA sample is measured by spectrophotometer at 540 nm. The biosensor is made by reducing gold with trisodium citrate to produce gold nanoparticles 39.05 nm in diameter. The AuNP-probe biosensor (gold nanoprobe) achieved a limit of detection of 16.04 ng DNA/µl and a limit of quantification of 53.48 ng DNA/µl. The linearity (R2) between color absorbance change and DNA concentration is 0.9916. The biosensor has good specificity, as it does not cross-react with chicken or beef DNA. To verify specificity towards the target sequence, the target sequence was amplified by PCR and the PCR product was reacted with the biosensor. The PCR product yielded a 2.7-fold higher absorbance than the pork DNA isolate alone (without PCR). The sensitivity and specificity of the method show the promising application of the thiol-bond AuNP biosensor in pork detection.

Keywords: biosensor, DNA probe, gold nanoparticle (AuNP), pork meat, qPCR

Procedia PDF Downloads 351
4463 Object Tracking in Motion Blurred Images with Adaptive Mean Shift and Wavelet Feature

Authors: Iman Iraei, Mina Sharifi

Abstract:

A method for object tracking in motion-blurred images is proposed in this article, and we show that object tracking can be improved with this approach. We use the mean shift algorithm as the main tracker for different objects. The problem is that mean shift cannot track the selected object accurately in blurred scenes, so to improve the tracking result and increase its accuracy, the wavelet transform is used. We use a feature called blur extent, which helps us obtain better tracking results; it is computed with the Haar wavelet. This feature can be viewed from two angles: it determines whether an image is blurred at all and, if so, to what extent. In effect, the feature modifies the covariance matrix of the mean shift algorithm and leads to better tracking performance. The method concentrates mostly on the motion blur parameter. The results demonstrate the ability of our method to achieve more accurate tracking.

Keywords: mean shift, object tracking, blur extent, wavelet transform, motion blur

Procedia PDF Downloads 204
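
The blur-extent feature sketched in the abstract above can be approximated with a Haar wavelet decomposition. The snippet below is a loose, simplified sketch of that idea (in the spirit of Tong-style wavelet blur detection); the number of levels and the final ratio metric are illustrative assumptions, not the authors' exact formulation.

```python
# Simplified blur-extent estimate from a Haar wavelet decomposition.
# Levels and the final ratio are illustrative assumptions.
import numpy as np
import pywt

def blur_extent(gray: np.ndarray, levels: int = 3) -> float:
    """Return a value in [0, 1]; higher means more blur."""
    emaps = []
    approx = gray.astype(float)
    for _ in range(levels):
        approx, (ch, cv, cd) = pywt.dwt2(approx, "haar")
        # Edge map at this scale: magnitude of the detail coefficients.
        emaps.append(np.sqrt(ch**2 + cv**2 + cd**2))
    # Blurred images lose fine-scale edge energy relative to coarse-scale
    # energy, so compare the finest and coarsest edge maps.
    fine, coarse = emaps[0].mean(), emaps[-1].mean()
    return float(coarse / (fine + coarse + 1e-9))
```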
4462 Classification of IoT Traffic Security Attacks Using Deep Learning

Authors: Anum Ali, Kashaf ad Dooja, Asif Saleem

Abstract:

The future trend of smart cities is towards the Internet of Things (IoT); IoT creates dynamic connections in a ubiquitous manner, and smart cities offer ease and flexibility for daily life. With small devices connected to cloud servers, IoT network traffic between these devices is growing exponentially, and its security is a concern, since a rising rate of cyber attacks can make the network traffic vulnerable. This paper discusses the latest machine learning approaches in related work and, to tackle the increasing rate of cyber attacks, applies a machine learning algorithm to IoT-based network traffic data. The proposed algorithm trains itself on the data and identifies different classes of device interaction using supervised learning, acting as a classifier for each specific IoT device class. The simulation results clearly identify the attacks and produce fewer false detections.

Keywords: IoT, traffic security, deep learning, classification

Procedia PDF Downloads 140
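
As a rough illustration of the supervised traffic-classification step described above, the sketch below trains a small neural network classifier on synthetic flow features; the feature set, labels and architecture are invented stand-ins, since the abstract does not specify them.

```python
# Toy supervised classification of IoT flow records; all data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Invented flow features: [packets/s, mean packet size, duration, distinct ports]
X_benign = rng.normal([50, 300, 20, 3], [10, 50, 5, 1], size=(500, 4))
X_attack = rng.normal([900, 80, 2, 40], [200, 20, 1, 10], size=(500, 4))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 500 + [1] * 500)   # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```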
4461 Optimum Parameter of a Viscous Damper for Seismic and Wind Vibration

Authors: Soltani Amir, Hu Jiaxin

Abstract:

Determining the optimal parameters of a passive control device is the primary objective of this study. The expanding use of control devices in wind and earthquake hazard reduction has led to the development of various control systems. The advantage of non-linear characteristics in a passive control device and the optimal control method using the LQR algorithm are explained in this study. Finally, this paper introduces a simple approach to determine the optimum parameters of a nonlinear viscous damper for vibration control of structures. A MATLAB program is used to produce the dynamic motion of the structure, considering the stiffness matrix of the SDOF frame and the non-linear damping effect. This study concludes that the proposed system (a variable damping system) controls the system response better than a linear damping system. Also, according to the energy dissipation graph, the total energy loss is greater in the non-linear damping system than in the other systems.

Keywords: passive control system, damping devices, viscous dampers, control algorithm

Procedia PDF Downloads 460
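
The LQR design mentioned in the abstract reduces to solving a continuous-time algebraic Riccati equation for the SDOF frame in state-space form. A minimal sketch, with illustrative mass, stiffness, damping and weighting values (not the paper's):

```python
# LQR gain for an SDOF frame with a supplemental damper force u:
# m x'' + c x' + k x = u, state x = [displacement, velocity].
import numpy as np
from scipy.linalg import solve_continuous_are

m, c, k = 1000.0, 500.0, 2.0e5          # placeholder mass, damping, stiffness
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])        # control force enters as u/m

Q = np.diag([1e6, 1e3])                 # weights on displacement, velocity
R = np.array([[1e-3]])                  # weight on control effort

P = solve_continuous_are(A, B, Q, R)    # Riccati solution
K = np.linalg.solve(R, B.T @ P)         # optimal state feedback: u = -K x
print("LQR gain K =", K)
```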
4460 Node Pair Selection Scheme in Relay-Aided Communication Based on Stable Marriage Problem

Authors: Tetsuki Taniguchi, Yoshio Karasawa

Abstract:

This paper describes a node pair selection scheme, based on the stable marriage problem, for relay-aided multiple-source multiple-destination communication systems. A general case is assumed in which all source, relay and destination nodes are equipped with multiple antennas and carry out multistream transmission. Based on several metrics derived from inter-node channel conditions, preference orders are determined over all source-relay and relay-destination relations, and the node pairs are then determined using the Gale-Shapley algorithm. Computer simulations show that the effectiveness of node pair selection is larger in multihop communication. Some additional aspects that differ from the relay-less case are also investigated.

Keywords: relay, multiple input multiple output (MIMO), multiuser, amplify and forward, stable marriage problem, Gale-Shapley algorithm

Procedia PDF Downloads 392
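
The pairing step named in the abstract is the classical Gale-Shapley algorithm. A minimal sketch follows; the toy preference lists stand in for the rankings the paper derives from inter-node MIMO channel metrics:

```python
# Gale-Shapley stable matching: sources "propose" to relays in preference order.
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Return a stable matching as {acceptor: proposer}."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)          # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                           # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in match:
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])        # current partner becomes free
            match[a] = p
        else:
            free.append(p)               # rejected, tries next preference
    return match

src_prefs = {"S1": ["R1", "R2"], "S2": ["R1", "R2"]}
relay_prefs = {"R1": ["S2", "S1"], "R2": ["S1", "S2"]}
print(gale_shapley(src_prefs, relay_prefs))   # {'R1': 'S2', 'R2': 'S1'}
```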
4459 Intelligent Path Tracking Hybrid Fuzzy Controller for a Unicycle-Type Differential Drive Robot

Authors: Abdullah M. Almeshal, Mohammad R. Alenezi, Muhammad Moaz

Abstract:

In this paper, we discuss the performance of applying the hybrid spiral dynamic bacterial chemotaxis (HSDBC) optimisation algorithm to an intelligent controller for a differential drive robot. A unicycle class of differential drive robot is utilised as the basis application to evaluate the performance of the HSDBC algorithm. A hybrid fuzzy logic controller is developed and implemented for the unicycle robot to follow a predefined trajectory. Trajectories of various frictional profiles and levels were simulated to evaluate the performance of the robot under different operating conditions. Controller gains and scaling factors were optimised using HSDBC, and the performance is evaluated in comparison to previously adopted optimisation algorithms. HSDBC has proven its feasibility by achieving faster convergence toward the optimal gains, resulting in superior performance.

Keywords: differential drive robot, hybrid fuzzy controller, optimization, path tracking, unicycle robot

Procedia PDF Downloads 455
4458 Algorithms Inspired from Human Behavior Applied to Optimization of a Complex Process

Authors: S. Curteanu, F. Leon, M. Gavrilescu, S. A. Floria

Abstract:

Optimization algorithms inspired from human behavior were applied in this approach, associated with neural network models. The algorithms belong to two classes of human behavior: learning and cooperation, and competition. For the first class, the main strategies include random learning, individual learning, and social learning, and the selected algorithms are: simplified human learning optimization (SHLO), social learning optimization (SLO), and teaching-learning based optimization (TLBO). For the second class, the concept of learning is associated with competitiveness, and the selected algorithms are sports-inspired algorithms (the Football Game Algorithm, FGA, and Volleyball Premier League, VPL) and the Imperialist Competitive Algorithm (ICA). A real process, the synthesis of polyacrylamide-based multicomponent hydrogels, where some parameters are difficult to obtain experimentally, is considered as a case study. Reaction yield and swelling degree are predicted as a function of reaction conditions (acrylamide concentration, initiator concentration, crosslinking agent concentration, temperature, reaction time, and amount of inclusion polymer, which could be starch, poly(vinyl alcohol) or gelatin). The experimental results contain 175 data points. Artificial neural networks are obtained in optimal form with a biologically inspired algorithm, the optimization being performed at two levels: structural and parametric. Feedforward neural networks with one or two hidden layers and no more than 25 neurons in the intermediate layers were obtained, with correlation coefficients in the validation phase over 0.90. The best results were obtained with the TLBO algorithm, the correlation coefficient being 0.94 for an MLP(6:9:20:2), a feedforward neural network with two hidden layers of 9 and 20 intermediate neurons, respectively. The good results obtained prove the efficiency of the optimization algorithms. Beyond the good results, what matters in this approach is the simulation methodology, combining neural networks and biologically inspired optimization algorithms, which provides satisfactory results. In addition, the methodology developed in this approach is general and flexible, so that it can easily be adapted to other processes in association with different types of models.

Keywords: artificial neural networks, human behaviors of learning and cooperation, human competitive behavior, optimization algorithms

Procedia PDF Downloads 98
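
Of the algorithms compared above, the best-performing one (TLBO) is simple to sketch. Below, a sphere objective stands in for the neural network validation error actually optimized in the paper; population size and iteration counts are illustrative:

```python
# Teaching-learning-based optimization (TLBO): teacher phase + learner phase.
import numpy as np

def tlbo(f, bounds, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    F = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # Teacher phase: pull the class toward the best learner.
        teacher = X[F.argmin()]
        Tf = rng.integers(1, 3)                       # teaching factor in {1, 2}
        Xn = np.clip(X + rng.random(X.shape) * (teacher - Tf * X.mean(axis=0)),
                     lo, hi)
        Fn = np.apply_along_axis(f, 1, Xn)
        better = Fn < F
        X[better], F[better] = Xn[better], Fn[better]
        # Learner phase: each learner moves relative to a random peer.
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
            xi = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
            fi = f(xi)
            if fi < F[i]:
                X[i], F[i] = xi, fi
    return X[F.argmin()], F.min()

best, val = tlbo(lambda x: np.sum(x**2), (np.full(5, -5.0), np.full(5, 5.0)))
print(best, val)
```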
4457 Feasibility Study for Implementation of Geothermal Energy Technology as a Means of Thermal Energy Supply for Medium Size Community Building

Authors: Sreto Boljevic

Abstract:

Heating systems based on geothermal energy sources are becoming increasingly popular among commercial/community buildings as the management of these buildings looks for a more efficient and environmentally friendly way to run the heating system. The thermal energy supply of most European commercial/community buildings at present is provided mainly by energy extracted from natural gas. In order to reduce greenhouse gas emissions and achieve the climate change targets set by the EU, restructuring in the area of thermal energy supply is essential. At present, heating and cooling account for approximately 50% of the EU primary energy supply. Due to its physical characteristics, thermal energy cannot be distributed or exchanged over long distances, contrary to the electricity and gas energy carriers. Compared to the electricity and gas sectors, heating remains largely a black box, with large unknowns to researchers and policymakers. In the literature, a number of documents address policies for promoting renewable energy technology to facilitate heating for residential/community/commercial buildings and assess the balance between heat supply and heat savings. Ground source heat pump (GSHP) technology has been an extremely attractive alternative to the traditional electric and fossil fuel space heating equipment used to supply thermal energy for residential/community/commercial buildings. The main purpose of this paper is to create an algorithm, using an analytical approach, that enables a feasibility study for the implementation of GSHP technology in a community building with an existing fossil-fueled heating system. The main results obtained by the algorithm will enable building management and GSHP system designers to define the optimal size of the system with regard to the technical, environmental, and economic impacts of the system implementation, including the payback period. In addition, the algorithm is designed to be usable for feasibility studies of many different types of buildings. The algorithm is tested on a building that was built in 1930 and is used as a church located in Cork city. The heating of the building is currently provided by a 105 kW gas boiler.

Keywords: GSHP, greenhouse gas emission, low-enthalpy, renewable energy

Procedia PDF Downloads 207
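
The payback-period output of such a feasibility algorithm amounts to comparing annual fuel bills. A back-of-envelope sketch, where every figure (capital cost, tariffs, demand, seasonal COP, boiler efficiency taken as roughly 100% for simplicity) is a placeholder assumption rather than a result from the Cork case study:

```python
# Simple payback of a GSHP retrofit versus an existing gas boiler.
# All numbers below are illustrative placeholders.
def simple_payback(capex, gas_cost_kwh, elec_cost_kwh, heat_kwh_yr, scop):
    """Years to recover the GSHP investment from fuel-bill savings."""
    gas_bill = heat_kwh_yr * gas_cost_kwh            # boiler ~ heat demand
    hp_bill = (heat_kwh_yr / scop) * elec_cost_kwh   # heat pump electricity
    annual_saving = gas_bill - hp_bill
    return capex / annual_saving if annual_saving > 0 else float("inf")

print(simple_payback(capex=90_000, gas_cost_kwh=0.10,
                     elec_cost_kwh=0.20, heat_kwh_yr=150_000, scop=4.0))
# -> 12.0 years under these assumed tariffs and demand
```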
4456 Public Wi-Fi Security Threat Evil Twin Attack Detection Based on Signal Variant and Hop Count

Authors: Said Abdul Ahad Ahadi, Elyas Baray, Nitin Rakesh, Sudeep Varshney

Abstract:

Wi-Fi is a widely used internet source that provides internet access in many areas such as stores, cafes, university campuses and restaurants. This technology has brought more facilities in communication and networking. On the other hand, data is transmitted over the air, which makes the network vulnerable and prone to various threats such as the Evil Twin attack. The Evil Twin is a kind of adversary that impersonates a legitimate access point (LAP) by spoofing its network name (SSID) and MAC address (BSSID). This attack can enable many threats such as man-in-the-middle (MITM) attacks, service interruption, and access point service blocking. Various Evil Twin attack detection techniques have been proposed, but they require additional hardware or protocol modification. In this paper, we propose a new technique based on two access point fingerprints, the Received Signal Strength Indicator (RSSI) and hop count, which are hard for an adversary to copy. We implemented the technique in a system called “ETDetector,” which can detect and prevent the attack.

Keywords: evil twin, LAP, SSID, Wi-Fi security, signal variation, ETAD, kali linux, scapy, python

Procedia PDF Downloads 135
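
The two-fingerprint test described above can be caricatured in a few lines: an AP advertising a known SSID/BSSID must also match its recorded RSSI behaviour and hop count. The stored profile and thresholds below are illustrative assumptions, not ETDetector's actual values:

```python
# Toy evil-twin check against a stored fingerprint (all values invented).
import statistics

TRUSTED = {("CampusWiFi", "aa:bb:cc:dd:ee:ff"): {"rssi_mean": -48,
                                                 "rssi_tol": 12,
                                                 "hops": 3}}

def looks_like_evil_twin(ssid, bssid, rssi_samples, hop_count):
    profile = TRUSTED.get((ssid, bssid))
    if profile is None:
        return True                       # unknown AP claiming this SSID
    rssi = statistics.mean(rssi_samples)
    if abs(rssi - profile["rssi_mean"]) > profile["rssi_tol"]:
        return True                       # signal strength is off-profile
    if hop_count != profile["hops"]:
        return True                       # traffic path differs from the LAP
    return False

print(looks_like_evil_twin("CampusWiFi", "aa:bb:cc:dd:ee:ff",
                           [-50, -47, -49], hop_count=3))   # False
print(looks_like_evil_twin("CampusWiFi", "aa:bb:cc:dd:ee:ff",
                           [-20, -22, -19], hop_count=1))   # True
```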
4455 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3 properties) are extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 26
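
A compressed sketch of the described feature pipeline: MACCS keys plus a few RDKit descriptors feeding a random forest. The paper uses 189 features, the full AqSolDB labels and an accuracy metric; the molecules, labels and regressor settings below are only a demo:

```python
# MACCS keys + a few RDKit descriptors -> random forest (toy data).
import numpy as np
from rdkit import Chem
from rdkit.Chem import MACCSkeys, Descriptors
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles: str) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    maccs = np.asarray(MACCSkeys.GenMACCSKeys(mol), dtype=float)  # 167 bits (bit 0 unused)
    props = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
             Descriptors.TPSA(mol)]            # 3 of the 20 RDKit properties
    return np.concatenate([maccs, props])

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "O"]
logS = [0.5, -1.6, -1.7, 1.5]                  # placeholder solubility labels
X = np.vstack([featurize(s) for s in smiles])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, logS)
print(rf.predict(featurize("CCN").reshape(1, -1)))
```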
4454 Adaptive Optimal Controller for Uncertain Inverted Pendulum System: A Dynamic Programming Approach for Continuous Time System

Authors: Dao Phuong Nam, Tran Van Tuyen, Do Trong Tan, Bui Minh Dinh, Nguyen Van Huong

Abstract:

In this paper, we investigate the adaptive optimal control law for continuous-time systems with input disturbances and unknown parameters. This paper extends previous works to obtain the robust control law for uncertain systems. Through theoretical analysis, an adaptive dynamic programming (ADP) based optimal control is proposed to stabilize the closed-loop system and to ensure the convergence properties of the proposed iterative algorithm. Moreover, the global asymptotic stability (GAS) of the closed-loop system is also analyzed. The theoretical analysis for continuous-time systems and simulation results demonstrate the performance of the proposed algorithm on an inverted pendulum system.

Keywords: approximate/adaptive dynamic programming, ADP, adaptive optimal control law, input state stability, ISS, inverted pendulum

Procedia PDF Downloads 185
4453 Exploring Bidirectional Encoder Representations from the Transformers’ Capabilities to Detect English Preposition Errors

Authors: Dylan Elliott, Katya Pertsova

Abstract:

Preposition errors are some of the most common errors made by L2 speakers. In addition, improving error correction and detection methods remains an open issue in the realm of Natural Language Processing (NLP). This research investigates whether the bidirectional encoder representations from transformers model (BERT) has the potential to correct preposition errors accurately enough to be useful in error correction software. This research finds that BERT performs strongly when the scope of its error correction is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, where only a preposition edit had occurred. To test BERT’s ability to detect errors, a technique known as multi-level masking was used to generate suggestions based on sentence context for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT’s performance. The suggestions were further analyzed to determine whether BERT more often agreed with the judgments of the Wikipedia editors. Both the untrained and fine-tuned models were compared. Fine-tuning led to a greater rate of error detection, which significantly improved recall but lowered precision due to an increase in false positives, or falsely flagged errors. However, in most cases, these false positives were not errors in preposition usage but merely cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT’s ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT’s false positives were grammatical suggestions, we plan a further crowd-sourcing study to test the grammaticality of BERT’s suggested sentence corrections against native speakers’ judgments.

Keywords: BERT, grammatical error correction, preposition error detection, prepositions

Procedia PDF Downloads 135
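
The masking idea is easy to demonstrate with an off-the-shelf fill-mask pipeline: mask a preposition slot and let a pretrained BERT rank replacements. The paper's multi-level masking and fine-tuned model are not reproduced in this minimal sketch:

```python
# Rank candidate prepositions for a masked slot with pretrained BERT.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
sentence = "She arrived [MASK] the station ten minutes late."
for cand in fill(sentence, top_k=3):
    print(f"{cand['token_str']:>6}  {cand['score']:.3f}")
# If the writer's original preposition ranks far below the top suggestions,
# the slot would be flagged as a possible preposition error.
```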
4452 Automatic Post Stroke Detection from Computed Tomography Images

Authors: C. Gopi Jinimole, A. Harsha

Abstract:

For detecting strokes, a Computed Tomography (CT) scan is preferred for imaging the abnormalities or infarction in the brain. Because of the problems in the window settings used to evaluate brain CT images, they perform poorly in early-stage infarction detection. This paper presents an automatic estimation method for the window settings of CT images to achieve proper contrast of the hyper infarction present in the brain. In the proposed work, the window width is estimated automatically for each slice, and the window centre is changed to a new value of 31 HU, which is the average of the HU values of grey matter and white matter in the brain. The automatic window width estimation is based on the average of the median of statistical central moments. Thus, with the new suggested window centre and the estimated window width, hyper infarction or post-stroke regions in CT brain images are properly detected. The proposed approach assists radiologists in CT evaluation of early quantitative signs of delayed stroke, so that the severe hemorrhage it can lead to may be prevented by providing timely medication to the patients.

Keywords: computed tomography (CT), hyper infarction or post stroke region, Hounsfield Unit (HU), window centre (WC), window width (WW)

Procedia PDF Downloads 193
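
CT windowing itself is a one-line mapping. The sketch below fixes the window centre at 31 HU as the abstract does; the width and the synthetic slice are placeholders for the per-slice estimate the paper computes:

```python
# Window a CT slice given in Hounsfield units for display.
import numpy as np

def apply_window(hu: np.ndarray, centre: float, width: float) -> np.ndarray:
    """Map HU values inside [centre - width/2, centre + width/2] to 0-255."""
    lo, hi = centre - width / 2, centre + width / 2
    out = np.clip(hu, lo, hi)
    return ((out - lo) / (hi - lo) * 255).astype(np.uint8)

slice_hu = np.random.default_rng(0).normal(30, 20, size=(512, 512))  # fake slice
display = apply_window(slice_hu, centre=31, width=60)  # width is illustrative
print(display.min(), display.max())
```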
4451 Design of Non-uniform Circular Antenna Arrays Using Firefly Algorithm for Side Lobe Level Reduction

Authors: Gopi Ram, Durbadal Mandal, Rajib Kar, Sakti Prasad Ghoshal

Abstract:

A design problem of non-uniform circular antenna arrays for maximum reduction of both the side lobe level (SLL) and the first null beam width (FNBW) is dealt with, modeled as a simple optimization problem. The firefly algorithm (FFA) is used to determine an optimal set of current excitation weights and antenna inter-element separations that provide a radiation pattern with maximum SLL reduction and considerable improvement in FNBW as well. A circular array antenna laid on the x-y plane is assumed. FFA is applied to circular arrays of 8, 10, and 12 elements. Various simulation results are presented, and the side lobe and FNBW performances are analyzed. Experimental results show considerable reductions in both SLL and FNBW with respect to the uniform case and to some standard algorithms (GA, PSO, and SA) applied to the same problem.

Keywords: circular arrays, first null beam width, side lobe level, FFA

Procedia PDF Downloads 244
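
A generic firefly algorithm (FFA) loop, for readers unfamiliar with the optimizer: minimizing a sphere function stands in for the paper's SLL/FNBW cost over excitation weights and element spacings, and alpha, beta0 and gamma are common textbook values:

```python
# Generic firefly algorithm: fireflies move toward brighter (lower-cost) peers.
import numpy as np

def firefly(f, dim, n=25, iters=200, alpha=0.2, beta0=1.0, gamma=1.0,
            lo=-1.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    I = np.apply_along_axis(f, 1, X)          # lower cost = brighter firefly
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:               # j is brighter, so i moves to j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decay
                    X[i] += beta * (X[j] - X[i]) \
                            + alpha * (rng.random(dim) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    I[i] = f(X[i])
    best = I.argmin()
    return X[best], I[best]

x, fx = firefly(lambda v: np.sum(v ** 2), dim=8)
print(fx)
```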
4450 A Novel Algorithm for Parsing IFC Models

Authors: Raninder Kaur Dhillon, Mayur Jethwa, Hardeep Singh Rai

Abstract:

Information technology has made pivotal progress across disparate disciplines, one of which is the AEC (Architecture, Engineering and Construction) industry. CAD (computer-aided design) software lets architects, engineers and contractors create and view two- and three-dimensional building models. The AEC industry also uses building information modeling (BIM), a newer computerized modeling approach that can create four-dimensional models; this software can greatly increase productivity in the AEC industry. BIM models can be exported as IFC (Industry Foundation Classes) files, an open standard aiming at interoperability for exchanging information throughout the project lifecycle among the various disciplines. The methods developed in previous studies require either an IFC schema or an MVD, together with software applications such as an IFC model server or a BIM authoring tool, to extract a partial or complete IFC instance model. This paper proposes an efficient algorithm for extracting partial and complete models from an IFC instance model without an IFC schema or a complete model view definition (MVD).

Keywords: BIM, CAD, IFC, MVD

Procedia PDF Downloads 285
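
In the schema-free spirit of the proposal, entity instances can be pulled from an IFC (STEP) file with a regular expression. The sketch below assumes one entity per line and is only a starting point, not the authors' parser:

```python
# Schema-free scan of an IFC (STEP) file: extract ids, types and raw attributes.
# Assumes one entity per line; real IFC files may wrap entities across lines.
import re
from collections import defaultdict

LINE = re.compile(r"#(\d+)\s*=\s*(IFC\w+)\s*\((.*)\);", re.IGNORECASE)

def parse_ifc(path):
    entities = defaultdict(list)        # type -> [(id, raw attribute string)]
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            m = LINE.search(line)
            if m:
                eid, etype, attrs = m.groups()
                entities[etype.upper()].append((int(eid), attrs))
    return entities

# e = parse_ifc("model.ifc")            # hypothetical file name
# print(len(e["IFCWALL"]), "walls found")
```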
4449 Text Analysis to Support Structuring and Modelling a Public Policy Problem-Outline of an Algorithm to Extract Inferences from Textual Data

Authors: Claudia Ehrentraut, Osama Ibrahim, Hercules Dalianis

Abstract:

Policy making situations are real-world problems that exhibit complexity in that they are composed of many interrelated problems and issues. To be effective, policies must holistically address the complexity of the situation rather than propose solutions to single problems. Formulating and understanding the situation and its complex dynamics, therefore, is key to finding holistic solutions. Analysis of text-based information on the policy problem, using Natural Language Processing (NLP) and text analysis techniques, can support modelling of public policy problem situations in a more objective way, based on domain experts' knowledge and scientific evidence. The objective of this study is to support modelling of public policy problem situations using text analysis of verbal descriptions of the problem. We propose a formal methodology for the analysis of qualitative data from multiple information sources on a policy problem to construct a causal diagram of the problem. The analysis process aims at identifying key variables, linking them by cause-effect relationships, and mapping that structure into a graphical representation that is adequate for designing action alternatives, i.e., policy options. This study outlines an algorithm used to automate the initial step of a larger methodological approach, which has so far been done manually. In this initial step, inferences about key variables and their interrelationships are extracted from textual data to support better problem structuring. A small prototype for this step is also presented.

Keywords: public policy, problem structuring, qualitative analysis, natural language processing, algorithm, inference extraction

Procedia PDF Downloads 579
4448 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada

Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone

Abstract:

Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now comprises both greater effort and retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and having low predictive power as it is often limited to sampling fragments of fluid and temporally dynamic fisheries. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. This study involved continuous monitoring and high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, as well as the proportion of the fishery management area represented by cameras. Then, this study demonstrated how it could substantially enhance the temporal resolution of a fishery through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also highlighted considerable off-season fishing detection, currently unaccounted for in the existing monitoring framework. These results demonstrate several distinct applications of autonomous cameras for providing enhanced detail currently unavailable in the current monitoring framework, each of which has important considerations for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.

Keywords: cameras, monitoring, recreational fishing, stock assessment

Procedia PDF Downloads 112
4447 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient enough even for large networks with up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm that achieves invertibility with a minimum set of measured states. This greedy algorithm is very fast and is also guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 140
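
The path-counting check at the heart of the structural invertibility test can be reproduced with networkx: add a super-source and super-sink and count vertex-disjoint paths. The toy graph below is illustrative:

```python
# Structural invertibility check: |inputs| vertex-disjoint input-output paths.
import networkx as nx

def invertible(G, inputs, outputs):
    """True if there are at least |inputs| vertex-disjoint input-output paths."""
    H = G.copy()
    H.add_edges_from(("SRC", u) for u in inputs)    # super-source
    H.add_edges_from((v, "SNK") for v in outputs)   # super-sink
    # node_disjoint_paths does the node splitting + max-flow internally.
    paths = list(nx.node_disjoint_paths(H, "SRC", "SNK"))
    return len(paths) >= len(inputs)

G = nx.DiGraph([(1, 2), (1, 3), (2, 4), (3, 5)])
print(invertible(G, inputs=[1], outputs=[4]))        # True
print(invertible(G, inputs=[2, 3], outputs=[4, 5]))  # True
```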
4446 Health Monitoring and Failure Detection of Electronic and Structural Components in Small Unmanned Aerial Vehicles

Authors: Gopi Kandaswamy, P. Balamuralidhar

Abstract:

Fully autonomous small Unmanned Aerial Vehicles (UAVs) are increasingly being used in many commercial applications. Although a lot of research has been done to develop safe, reliable and durable UAVs, accidents due to electronic and structural failures are not uncommon and pose a huge safety risk to UAV operators and the public. Hence there is a strong need for an automated health monitoring system for UAVs, with a view to minimizing mission failures and thereby increasing safety. This paper describes our approach to monitoring the electronic and structural components in a small UAV without the need for additional sensors. Our system monitors data from four sources: sensors, navigation algorithms, control inputs from the operator, and flight controller outputs. It then performs statistical analysis on the data and applies a rule-based engine to detect failures. This information can be fed back into the UAV, and a decision to continue or abort the mission can be taken automatically by the UAV, independent of the operator. Our system has been verified using data obtained over the past year from real flights of UAVs of various sizes that we have designed and deployed for various applications.

Keywords: fault detection, health monitoring, unmanned aerial vehicles, vibration analysis

Procedia PDF Downloads 246
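
A toy flavour of the described monitor: statistical checks over existing telemetry streams feeding a rule-based engine, with no extra sensors. All thresholds below are invented for illustration:

```python
# Rule-based health checks over recent telemetry samples (invented thresholds).
import statistics

def health_rules(telemetry):
    """telemetry: dict of recent sample lists taken from the flight logs."""
    alerts = []
    if statistics.pstdev(telemetry["vibration_g"]) > 0.5:
        alerts.append("excess vibration: possible prop/motor damage")
    err = [abs(c - a) for c, a in zip(telemetry["cmd_roll"],
                                      telemetry["actual_roll"])]
    if statistics.mean(err) > 5.0:        # persistent tracking error, degrees
        alerts.append("control tracking error: actuator or airframe fault")
    if min(telemetry["battery_v"]) < 13.2:
        alerts.append("battery sag: abort mission")
    return alerts

sample = {"vibration_g": [0.1, 0.9, -0.8, 1.1],
          "cmd_roll": [10, 12, 15], "actual_roll": [2, 3, 4],
          "battery_v": [15.1, 14.8, 14.7]}
print(health_rules(sample))   # flags vibration and tracking-error rules
```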
4445 Application of Computer Aided Engineering Tools in Performance Prediction and Fault Detection of Mechanical Equipment of Mining Process Line

Authors: K. Jahani, J. Razavi

Abstract:

Nowadays, predictive maintenance is crucial to decrease the number of downtimes in industries such as metal mining and the petroleum and chemical industries. For efficient predictive maintenance, it is very important to know the performance of critical production line equipment, such as pumps and hydro-cyclones, under variable operating parameters, to select the best indicators of this equipment's health, to choose the best locations for instrumentation, and to measure these indicators. In this paper, computer aided engineering (CAE) tools are used to study some important elements of a copper process line, namely slurry pumps and a cyclone, to predict the performance of these components under different working conditions. These models and simulations can be used to predict, for example, the damage tolerance of the main shaft of the slurry pump, or the wear rate and its location on the cyclone wall or the pump case and impeller. The simulations can also suggest the best measuring parameters, measuring intervals, and their locations.

Keywords: computer aided engineering, predictive maintenance, fault detection, mining process line, slurry pump, hydrocyclone

Procedia PDF Downloads 394
4444 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. Too many features in machine learning are known not only to slow algorithms down, but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (r-square) and root mean square error (RMSE).

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 310
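
A minimal sketch of the Boruta wrapper around a random forest, using the community boruta package; synthetic regression data stands in for the 79-feature housing set:

```python
# Boruta feature selection wrapped around a random forest (toy data).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from boruta import BorutaPy

X, y = make_regression(n_samples=300, n_features=20, n_informative=6,
                       noise=5.0, random_state=0)
rf = RandomForestRegressor(n_estimators=200, max_depth=5, random_state=0)
selector = BorutaPy(rf, n_estimators="auto", random_state=0)
selector.fit(X, y)                        # expects plain numpy arrays
print("confirmed features:", np.where(selector.support_)[0])

# The confirmed subset would then train the final predictive model,
# evaluated with r-square and RMSE as in the study.
X_sel = selector.transform(X)
```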
4443 Domain Adaptation Save Lives - Drowning Detection in Swimming Pool Scene Based on YOLOV8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation

Authors: Simiao Ren, En Wei

Abstract:

Drowning is a significant safety issue worldwide, and a robust computer vision-based alert system can easily prevent such tragedies in swimming pools. However, due to the domain shift caused by the visual gap between the training swimming pool and the test swimming pool (potentially due to lighting, indoor scene change, pool floor color, etc.), the robustness of such algorithms has been questionable. The annotation cost of labeling each new swimming pool is too expensive for mass adoption of such a technique. To address this issue, we propose a domain-aware data augmentation pipeline based on the Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that such a domain adaptation technique can significantly improve model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method only requires background imagery from the new domain (no annotation needed), we believe this is a promising, practical route for preventing swimming pool drowning.

Keywords: computer vision, deep learning, YOLOv8, detection, swimming pool, drowning, domain adaptation, generative adversarial network, GAN, GP-GAN

Procedia PDF Downloads 80
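
Assuming the GP-GAN composites have already been generated offline, the YOLOv8 side of the pipeline is a standard ultralytics fine-tune; the dataset YAML names below are placeholders:

```python
# Fine-tune YOLOv8 on GP-GAN-augmented pool imagery, then evaluate on a
# held-out target-domain pool. YAML paths are hypothetical.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # pretrained checkpoint
model.train(data="pool_gpgan_aug.yaml",        # augmented training set (placeholder)
            epochs=100, imgsz=640)
metrics = model.val(data="new_pool_test.yaml") # unseen pool scene (placeholder)
print(metrics.box.map)                         # mAP50-95 on the new scene
```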
4442 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. The watermarking of images and video is well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in different areas of scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the spatial or the transform domain. Unlike images and video, where frames have regular structures in both the spatial and temporal domains, 3D objects are represented as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations that may be hard to tackle. This makes the watermarking process more challenging. While transform domain watermarking is preferable for images and videos, it is still difficult to implement on 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial domain watermarking has attracted significant attention in the past years; it can act on either the topology or the geometry of the model. Exploiting the statistical characteristics of 3D mesh models, from both geometrical and topological aspects, has been useful for hiding data. However, doing so with minimal surface distortion to the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations by modifying the variances of the vertices' norms. Statistical analyses were performed to establish the proper distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizing approaches were introduced in the realms of mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities, and against both geometry and connectivity attacks. Moreover, the probability of true positive detection versus the probability of false positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they have shown robustness from this aspect as well. 3D watermarking is still a new field, but a promising one.

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 150
4441 Design of Parity-Preserving Reversible Logic Signed Array Multipliers

Authors: Mojtaba Valinataj

Abstract:

Reversible logic, as a new and favorable design domain, can be used in various fields, especially in building quantum computers, because of its speed and negligible power consumption. However, its susceptibility to a variety of environmental effects may lead it to yield incorrect results. In this paper, because of the importance of the multiplication operation in various computing systems, some novel reversible logic array multipliers are proposed with error detection capability by incorporating parity-preserving gates. The new designs are presented for the two main parts of array multipliers, partial product generation and multi-operand addition, by exploiting new arrangements of existing gates, which results in two signed parity-preserving array multipliers. The experimental results reveal that the best proposed 4×4 multiplier in this paper achieves 12%, 24%, and 26% improvements in the number of constant inputs, the number of required gates, and the quantum cost, respectively, compared to the previous design. Moreover, the best proposed design is generalized for n×n multipliers, with general formulations to estimate the main reversible logic criteria as functions of the multiplier size.

Keywords: array multipliers, Baugh-Wooley method, error detection, parity-preserving gates, quantum computers, reversible logic

Procedia PDF Downloads 249
4440 Mobile Crowdsensing Scheme by Predicting Vehicle Mobility Using Deep Learning Algorithm

Authors: Monojit Manna, Arpan Adhikary

Abstract:

In mobile crowdsensing, an emerging paradigm, users' devices are selected to carry out sensing tasks. In today's urban cities, mobile vehicles are well suited to data sensing and collection because of their ubiquity and mobility. In this work, we focus on optimally selecting the mobile nodes that can collect the maximum amount of data from urban areas and fulfill the required data demand for a future period within a couple of minutes. We formulate the vehicle selection requirement as a data maximization problem under a budget constraint. The implementation is set up to emulate a realistic online platform in which real-time vehicles move in a continuous manner and the data center has the authority to select a set of vehicles immediately. A deep learning-based scheme with the help of mobile vehicles (DLMV) is proposed to collect sensing data from the urban environment. For the future time horizon, this work proposes a deep learning-based offline algorithm to predict mobility. We then propose a greedy online algorithm that selects a subset of vehicles for this NP-complete problem under a limited budget. Extensive experimental evaluations are conducted on a real mobility dataset from Rome. The results not only confirm the efficiency of our proposed solution but also prove the validity of DLMV, improving the quantity of collected sensing data compared with other algorithms.

Keywords: mobile crowdsensing, deep learning, vehicle recruitment, sensing coverage, data collection

Procedia PDF Downloads 65
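
A greedy stand-in for the recruitment step: repeatedly pick the vehicle with the best predicted-coverage gain per unit cost until the budget is spent. Coverage sets and costs are toy values; in DLMV the predicted cells would come from the deep mobility model:

```python
# Greedy budgeted vehicle recruitment maximizing marginal coverage per cost.
def greedy_recruit(vehicles, budget):
    """vehicles: {id: (set_of_covered_cells, cost)} -> (chosen, covered)."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for v, (cells, cost) in vehicles.items():
            if v in chosen or spent + cost > budget:
                continue
            gain = len(cells - covered)          # marginal coverage gain
            if cost > 0 and gain / cost > best_ratio:
                best, best_ratio = v, gain / cost
        if best is None:                         # nothing affordable or useful
            return chosen, covered
        chosen.append(best)
        covered |= vehicles[best][0]
        spent += vehicles[best][1]

fleet = {"v1": ({1, 2, 3}, 2.0), "v2": ({3, 4}, 1.0), "v3": ({5}, 3.0)}
print(greedy_recruit(fleet, budget=3.0))   # picks v2 then v1 within budget
```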
4439 AI Applications in Accounting: Transforming Finance with Technology

Authors: Alireza Karimi

Abstract:

Artificial Intelligence (AI) is reshaping various industries, and accounting is no exception. With the ability to process vast amounts of data quickly and accurately, AI is revolutionizing how financial professionals manage, analyze, and report financial information. In this article, we will explore the diverse applications of AI in accounting and its profound impact on the field. Automation of Repetitive Tasks: One of the most significant contributions of AI in accounting is automating repetitive tasks. AI-powered software can handle data entry, invoice processing, and reconciliation with minimal human intervention. This not only saves time but also reduces the risk of errors, leading to more accurate financial records. Pattern Recognition and Anomaly Detection: AI algorithms excel at pattern recognition. In accounting, this capability is leveraged to identify unusual patterns in financial data that might indicate fraud or errors. AI can swiftly detect discrepancies, enabling auditors and accountants to focus on resolving issues rather than hunting for them. Real-Time Financial Insights: AI-driven tools, using natural language processing and computer vision, can process documents faster than ever. This enables organizations to have real-time insights into their financial status, empowering decision-makers with up-to-date information for strategic planning. Fraud Detection and Prevention: AI is a powerful tool in the fight against financial fraud. It can analyze vast transaction datasets, flagging suspicious activities and reducing the likelihood of financial misconduct going unnoticed. This proactive approach safeguards a company's financial integrity. Enhanced Data Analysis and Forecasting: Machine learning, a subset of AI, is used for data analysis and forecasting. By examining historical financial data, AI models can provide forecasts and insights, aiding businesses in making informed financial decisions and optimizing their financial strategies. Artificial Intelligence is fundamentally transforming the accounting profession. From automating mundane tasks to enhancing data analysis and fraud detection, AI is making financial processes more efficient, accurate, and insightful. As AI continues to evolve, its role in accounting will only become more significant, offering accountants and finance professionals powerful tools to navigate the complexities of modern finance. Embracing AI in accounting is not just a trend; it's a necessity for staying competitive in the evolving financial landscape.

Keywords: artificial intelligence, accounting automation, financial analysis, fraud detection, machine learning in finance

Procedia PDF Downloads 52
4438 Rapid and Cheap Test for Detection of Streptococcus pyogenes and Streptococcus pneumoniae with Antibiotic Resistance Identification

Authors: Marta Skwarecka, Patrycja Bloch, Rafal Walkusz, Oliwia Urbanowicz, Grzegorz Zielinski, Sabina Zoledowska, Dawid Nidzworski

Abstract:

Upper respiratory tract infections are one of the most common reasons for visiting a general practitioner. Streptococci are the most common bacterial etiological factors in these infections. There are many different types of Streptococci, and infections vary in severity from mild throat infections to pneumonia. For example, S. pyogenes mainly contributes to acute pharyngitis, infections of the palatine tonsils and scarlet fever, whereas Streptococcus pneumoniae is responsible for several invasive diseases like sepsis, meningitis or pneumonia, with high mortality and dangerous complications. There are only a few diagnostic tests designed to detect Streptococci from a patient's infected throat. However, they are mostly based on lateral flow techniques, and they are not used as a standard due to their low sensitivity. The diagnostic standard is to culture a patient's throat swab on semi-selective media in order to multiply the pure etiological agent of infection and subsequently perform an antibiogram, which takes several days from the patient's visit to the clinic. Therefore, the aim of our studies is to develop and bring to market a point-of-care device for the rapid identification of Streptococcus pyogenes and Streptococcus pneumoniae with simultaneous identification of antibiotic resistance genes. In the course of our research, we successfully selected genes for species-level identification of Streptococci and genes encoding antibiotic resistance proteins. We have developed an amplification reaction for these genes, which allows detecting the presence of S. pyogenes or S. pneumoniae, followed by testing their resistance to erythromycin, chloramphenicol and tetracycline. Furthermore, detection of β-lactamase-encoding genes, which could protect Streptococci against antibiotics of the ampicillin group that are widely used in the treatment of this type of infection, is also being developed. The test is carried out directly on the patient's swab, and the results are available 20 to 30 minutes after the sample is applied, i.e., during the medical visit.

Keywords: antibiotic resistance, Streptococci, respiratory infections, diagnostic test

Procedia PDF Downloads 117
4437 Pefloxacin as a Surrogate Marker for Ciprofloxacin Resistance in Salmonella: Study from North India

Authors: Varsha Gupta, Priya Datta, Gursimran Mohi, Jagdish Chander

Abstract:

Fluoroquinolones form the mainstay of therapy for the treatment of infections due to Salmonella enterica subsp. enterica. There is a complex interplay between several resistance mechanisms for quinolones and the various fluoroquinolone discs, which give varying results, making the detection and interpretation of fluoroquinolone resistance difficult. For the detection of fluoroquinolone resistance in Salmonella spp., we compared the use of pefloxacin and nalidixic acid discs as surrogate markers. Using the MIC for ciprofloxacin as the gold standard, 43.5% of strains showed an MIC of ≥1 μg/ml and were thus resistant to fluoroquinolones. Based on the performance of the nalidixic acid and pefloxacin discs as surrogate markers for ciprofloxacin resistance, both discs could correctly detect all the resistant phenotypes; however, the nalidixic acid disc showed false resistance in the majority of the sensitive phenotypes. We also tested newer antimicrobial agents like cefixime, imipenem, tigecycline and azithromycin against Salmonella spp. Moreover, there was a comeback of susceptibility to older antimicrobials like ampicillin, chloramphenicol, and cotrimoxazole. Cefixime, imipenem, tigecycline and azithromycin can also be used in the treatment of multidrug-resistant S. Typhi owing to their high susceptibility.

Keywords: salmonella, pefloxacin, surrogate marker, chloramphenicol

Procedia PDF Downloads 972
4436 Performance Evaluation of Various Segmentation Techniques on MRI of Brain Tissue

Authors: U.V. Suryawanshi, S.S. Chowhan, U.V Kulkarni

Abstract:

Accuracy of segmentation methods is of great importance in brain image analysis. Tissue classification in magnetic resonance brain images (MRI) is an important issue in the analysis of several brain dementias. This paper portrays the performance of segmentation techniques used on brain MRI. A large variety of algorithms for segmentation of brain MRI has been developed. The objective of this paper is to perform a segmentation process on MR images of the human brain using Fuzzy c-means (FCM), kernel-based Fuzzy c-means (KFCM), spatial Fuzzy c-means (SFCM) and improved Fuzzy c-means (IFCM). The review covers imaging modalities, MRI, and methods for noise reduction and segmentation. All methods are applied to MRI brain images degraded by salt-and-pepper noise, demonstrating that the IFCM algorithm is more robust to noise than the standard FCM algorithm. We conclude with a discussion of trends in future brain segmentation research and of changing norms in IFCM for better results.

Keywords: image segmentation, preprocessing, MRI, FCM, KFCM, SFCM, IFCM

Procedia PDF Downloads 317
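
For reference, the baseline that all four compared methods extend is plain fuzzy c-means. A bare-bones sketch on synthetic 1-D "tissue" intensities (m = 2 is the usual fuzzifier):

```python
# Bare-bones fuzzy c-means clustering (the FCM baseline).
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))      # memberships, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # Standard update: U_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)
    return centers, U

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(mu, 0.05, (100, 1)) for mu in (0.2, 0.5, 0.8)])
centers, U = fcm(X)
print(np.sort(centers.ravel()))                     # ~ [0.2, 0.5, 0.8]
```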
4435 Improvement a Lower Bound of Energy for Some Family of Graphs, Related to Determinant of Adjacency Matrix

Authors: Saieed Akbari, Yousef Bagheri, Amir Hossein Ghodrati, Sima Saadat Akhtar

Abstract:

Let G be a simple graph with vertex set V(G) and adjacency matrix A(G). The energy E(G) of G is defined to be the sum of the absolute values of all eigenvalues of A(G). Also, let n and m denote the number of vertices and edges of the graph, respectively. A regular graph is a graph where each vertex has the same number of neighbours. Given a graph G, its line graph L(G) is a graph such that each vertex of L(G) represents an edge of G, and two vertices of L(G) are adjacent if and only if their corresponding edges share a common endpoint in G. In this paper we show that for every regular graph and also for every line graph with δ(G) ≥ 3, we have E(G) ≥ √(2m + n − 1). In the other part of the paper we prove that 2µ(G) ≤ E(G) for an arbitrary graph G, where µ(G) denotes the matching number of G.

Keywords: eigenvalues, energy, line graphs, matching number

Procedia PDF Downloads 218
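
Both quantities in the abstract above are easy to probe numerically: graph energy as the sum of absolute adjacency eigenvalues, and the matching number via networkx. A small check on standard graphs (the bound 2µ(G) ≤ E(G) holds in each case):

```python
# Compare graph energy E(G) with twice the matching number on small graphs.
import networkx as nx
import numpy as np

def energy(G):
    A = nx.to_numpy_array(G)                 # symmetric adjacency matrix
    return float(np.abs(np.linalg.eigvalsh(A)).sum())

def matching_number(G):
    return len(nx.max_weight_matching(G, maxcardinality=True))

for G in (nx.petersen_graph(), nx.cycle_graph(6), nx.complete_graph(5)):
    n, m = G.number_of_nodes(), G.number_of_edges()
    print(f"n={n} m={m}  E(G)={energy(G):.3f}  2*mu(G)={2 * matching_number(G)}")
# Petersen: E = 16, 2*mu = 10; C6: E = 8, 2*mu = 6; K5: E = 8, 2*mu = 4.
```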