Search results for: computational intelligence
1242 Computational Aspects of Regression Analysis of Interval Data
Authors: Michal Cerny
Abstract:
We consider linear regression models where both input data (the values of independent variables) and output data (the observations of the dependent variable) are interval-censored. We introduce a possibilistic generalization of the least squares estimator, the so-called OLS-set for the interval model. This set captures the impact of the loss of information on the OLS estimator caused by interval censoring and provides a tool for quantifying this effect. We study complexity-theoretic properties of the OLS-set. We also deal with restricted versions of the general interval linear regression model, in particular the crisp input – interval output model. We give an argument that natural descriptions of the OLS-set in the crisp input – interval output case cannot be computed in polynomial time. We then derive easily computable approximations of the OLS-set which can be used instead of the exact description, and illustrate the approach by an example.
Keywords: Linear regression, interval-censored data, computational complexity.
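As a minimal illustration of the crisp input – interval output setting, the sketch below enumerates the endpoint combinations of the output intervals and computes the OLS estimate for each scenario; since the OLS map is linear in the outputs, the componentwise extremes over the interval box are attained at these corners. The tiny dataset is invented, and the exponential number of corners hints at why exact descriptions resist polynomial-time computation.

```python
import itertools
import numpy as np

# Hedged sketch (crisp input - interval output): the OLS estimator is
# linear in y, so its componentwise extremes over the interval box are
# attained at endpoint combinations ("corners") of the output intervals.
# The tiny dataset below is invented for illustration.
X = np.array([[1.0, 0.5], [1.0, 1.5], [1.0, 2.5], [1.0, 3.5]])  # crisp inputs
y_lo = np.array([1.0, 1.8, 3.1, 3.9])  # lower interval bounds of y
y_hi = np.array([1.4, 2.2, 3.5, 4.5])  # upper interval bounds of y

estimates = []
for corner in itertools.product(*zip(y_lo, y_hi)):  # 2**n scenarios
    beta, *_ = np.linalg.lstsq(X, np.array(corner), rcond=None)
    estimates.append(beta)

estimates = np.array(estimates)
print("componentwise lower bounds:", estimates.min(axis=0))
print("componentwise upper bounds:", estimates.max(axis=0))
```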
1241 Design and Control Strategy of Diffused Air Aeration System
Authors: Doaa M. Atia, Faten H. Fahmy, Ninet M. Ahmed, Hassen T. Dorrah
Abstract:
During the past decade, pond aeration systems have been developed which can sustain large quantities of fish and invertebrate biomass. Dissolved oxygen (DO) is considered to be among the most important water quality parameters in fish culture. Fishponds in aquaculture farms are usually located in remote areas far from grid lines. Aeration of ponds is required to prevent mortality and to intensify production, especially when feeding is practiced and in warm regions. To increase pond production it is necessary to control dissolved oxygen. Artificial intelligence (AI) techniques are becoming useful as alternatives to conventional techniques or as components of integrated systems. They have been used to solve complicated practical problems in various areas and are becoming more and more popular. This paper presents a new design of a diffused aeration system using a fuel cell as the power source. A fuzzy logic control (FLC) technique is used to regulate the air flow rate from the blower into the pond piping by adjusting the blower speed. MATLAB SIMULINK results show the high performance of the fuzzy logic controller.
Keywords: Aeration system, fuel cell, artificial intelligence (AI) techniques, fuzzy logic control.
1240 A Novel Tracking Method Using Filtering and Geometry
Authors: Sang Hoon Lee, Jong Sue Bae, Taewan Kim, Jin Mo Song, Jong Ju Kim
Abstract:
Image target detection and tracking methods based on target information such as intensity, shape model, histogram, and target dynamics have been proven robust to target model variations and background clutter, as shown by recent research. However, no definitive answer has been given for targets occluded by countermeasures or by a limited field of view (FOV). In this paper, we present a novel tracking method using filtering and computational geometry. This paper has two central goals: 1) to deal with vulnerable target measurements; and 2) to maintain target tracking outside the FOV using non-target-originated information. The experimental results, obtained with airborne images, show robust tracking ability with respect to existing approaches. In exploring the questions of target tracking, this paper is limited to the consideration of airborne imagery.
Keywords: Tracking, computational geometry, homography, filter.
1239 Improvement over DV-Hop Localization Algorithm for Wireless Sensor Networks
Authors: Shrawan Kumar, D. K. Lobiyal
Abstract:
In this paper, we propose improved versions of the DV-Hop algorithm, the QDV-Hop and UDV-Hop algorithms, for better localization without the need for additional range-measurement hardware. The proposed algorithms focus on the third step of DV-Hop: the error terms in the estimated distances between the unknown node and the anchor nodes are first separated and then minimized. In the QDV-Hop algorithm, quadratic programming is used to minimize the error and obtain better localization. However, quadratic programming requires a special optimization toolbox, which increases computational complexity. On the other hand, the UDV-Hop algorithm achieves localization accuracy similar to that of QDV-Hop by solving an unconstrained optimization problem that reduces to a system of linear equations, without much increase in computational complexity. Simulation results show that the performance of our proposed schemes (QDV-Hop and UDV-Hop) is superior to DV-Hop and DV-Hop-based algorithms in all considered scenarios.
Keywords: Wireless sensor networks, error term, DV-Hop algorithm, localization.
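As context for the third step that QDV-Hop and UDV-Hop refine, here is a hedged sketch of hop-count-based localization reduced to linear equations: subtracting one anchor's range equation from the others removes the quadratic terms, so an unconstrained least-squares solve suffices, in the spirit of the UDV-Hop formulation. The anchor layout and hop-estimated distances are invented.

```python
import numpy as np

# Sketch: linearize |p - a_i|^2 = d_i^2 by subtracting the last anchor's
# equation, then solve A p = b by unconstrained least squares.
# Anchor positions and hop-estimated distances below are made up.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
d = np.array([7.2, 7.8, 7.5, 3.9])   # distances estimated from hop counts

ref = anchors[-1]                    # reference anchor
A = 2.0 * (anchors[:-1] - ref)
b = (d[-1] ** 2 - d[:-1] ** 2
     + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(ref ** 2))

position, *_ = np.linalg.lstsq(A, b, rcond=None)  # only linear equations
print("estimated node position:", position)
```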
1238 A Novel SVM-Based OOK Detector in Low SNR Infrared Channels
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques that plays an increasing role in detection problems across various engineering fields, notably statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by on-off keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially in the very low signal-to-noise ratio (SNR) range. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems, which notoriously suffer from low SNR, at the cost of increased computational complexity.
Keywords: Least squares support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.
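A small, self-contained sketch of the comparison the abstract describes, reduced to an AWGN-only channel: an SVM trained on noisy OOK waveforms versus the classical matched-filter threshold detector. The pulse model, noise level, and sample counts are invented, and the paper's Ricean fading and partially developed scattering channels are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

# Hedged sketch: SVM detection of OOK symbols in AWGN vs. the classical
# matched-filter threshold detector. Pulse model and noise are invented.
rng = np.random.default_rng(0)
spb = 8                 # samples per bit (rectangular OOK pulse)
amp = 1.0               # "on" amplitude
noise_sigma = 1.4       # assumed per-sample noise level

def make_symbols(n):
    bits = rng.integers(0, 2, n)
    x = amp * bits[:, None] * np.ones((n, spb))        # on/off pulses
    x += noise_sigma * rng.standard_normal((n, spb))   # AWGN
    return x, bits

X_train, y_train = make_symbols(2000)
X_test, y_test = make_symbols(2000)

svm = SVC(kernel="rbf").fit(X_train, y_train)
ber_svm = np.mean(svm.predict(X_test) != y_test)

# Matched filter for a rectangular pulse = sum over the bit, then threshold.
decisions = (X_test.sum(axis=1) > amp * spb / 2).astype(int)
ber_mf = np.mean(decisions != y_test)
print(f"BER  SVM: {ber_svm:.3f}   matched filter: {ber_mf:.3f}")
```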
1237 FPGA Based Implementation of Simplified Space Vector PWM Algorithm for Multilevel Inverter Fed Induction Motor Drives
Authors: Tapan Trivedi, Pramod Agarwal, Rajendrasinh Jadeja, Pragnesh Bhatt
Abstract:
Space vector pulse width modulation (SVPWM) is popular for variable frequency drives. The method has several advantages over carrier-based PWM but is computation intensive. Its implementation for a multilevel inverter requires special attention and at the same time consumes considerable resources. Due to their faster processing power and reduced overall computational burden, FPGAs are being investigated as an alternative to other controllers. In this paper, a space vector PWM algorithm is implemented on an FPGA; it requires less computational area and is modular in structure. The algorithm is verified experimentally for a neutral point clamped inverter using the FPGA development board xc3s5000-4fg900.
Keywords: Modular structure, multilevel inverter, space vector PWM, switching states.
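For orientation, here is a hedged sketch of the core SVPWM computation in the basic two-level case (the paper's simplified multilevel variant and its FPGA mapping are not detailed in the abstract): locate the 60° sector of the reference voltage vector and compute the dwell times of the two adjacent active vectors and the zero vectors.

```python
import math

# Two-level SVPWM sketch (illustrative only; the paper targets a
# multilevel inverter): sector identification plus dwell-time computation.
def svpwm_dwell_times(m: float, theta: float, Ts: float):
    """m: modulation index, theta: reference angle [rad], Ts: switching period [s]."""
    sector = int(theta // (math.pi / 3)) % 6
    alpha = theta - sector * math.pi / 3           # angle within the sector
    t1 = Ts * m * math.sin(math.pi / 3 - alpha) / math.sin(math.pi / 3)
    t2 = Ts * m * math.sin(alpha) / math.sin(math.pi / 3)
    t0 = Ts - t1 - t2                              # shared by the zero vectors
    return sector, t1, t2, t0

print(svpwm_dwell_times(m=0.8, theta=math.radians(50), Ts=1e-4))
```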
1236 CBCTL: A Reasoning System of Temporal Epistemic Logic with Communication Channel
Authors: Suguru Yoshioka, Satoshi Tojo
Abstract:
This paper introduces a temporal epistemic logic, CBCTL, that updates agents' belief states through communication among them, based on computational tree logic (CTL). In practical environments, communication channels between agents may not be secure, and in bad cases agents might suffer blackouts. In this study, we provide an inform* protocol based on the FIPA ACL and declare the presence of secure channels between two agents, dependent on time. Thus, the belief state of each agent is updated along with the progress of time. We present a prover, that is, a reasoning system for a given formula in a given situation of an agent; if the formula is directly provable or can be validated through chains of communication, the system returns the proof.
Keywords: Communication channel, computational tree logic, reasoning system, temporal epistemic logic.
1235 Fundamental Theory of the Evolution Force: Gene Engineering utilizing Synthetic Evolution Artificial Intelligence
Authors: L. K. Davis
Abstract:
The effects of the evolution force are observable in nature at all structural levels, ranging from small molecular systems to conversely enormous biospheric systems. However, the evolution force and the work associated with the formation of biological structures have yet to be described mathematically or theoretically. In addressing this conundrum, we consider evolution from a unique perspective and in doing so introduce the “Fundamental Theory of the Evolution Force: FTEF”. We utilized synthetic evolution artificial intelligence (SYN-AI) to identify genomic building blocks and to engineer 14-3-3 ζ docking proteins by transforming gene sequences into time-based DNA codes derived from protein hierarchical structural levels. These codes served as templates for random DNA hybridizations and genetic assembly. The application of hierarchical DNA codes allowed us to fast-forward evolution while dampening the effect of point mutations. Natural selection was performed at each hierarchical structural level and mutations were screened using Blosum 80 mutation frequency-based algorithms. Notably, SYN-AI engineered a set of three architecturally conserved docking proteins that retained the motion and vibrational dynamics of native Bos taurus 14-3-3 ζ.
Keywords: 14-3-3 docking genes, synthetic protein design, time-based DNA codes, writing DNA code from scratch.
1234 Investigation of Flow Characteristics on Upstream and Downstream of Orifice Using Computational Fluid Dynamics
Authors: War War Min Swe, Aung Myat Thu, Khin Cho Thet, Zaw Moe Htet, Thuzar Mon
Abstract:
The orifice hole diameter was designed according to the range of throttle diameter ratios that gives the required discharge coefficient. The discharge coefficient was determined for different diameter ratios; a value of 0.958 occurred at a throttle diameter ratio of 0.5, with a throttle hole diameter of 80 mm. The flow analysis was carried out numerically using ANSYS 17.0 computational fluid dynamics. The flow velocity was analyzed upstream and downstream of the orifice meter. The downstream velocity of the non-standard orifice meter is 2.5% greater than that of the standard orifice meter, and the differential pressure in the standard orifice is 515.379 Pa.
Keywords: CFD-CFX, discharge coefficients, flow characteristics, inclined.
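A worked example with the reported figures, using the standard orifice flow equation: with Cd = 0.958, diameter ratio 0.5, an 80 mm bore, and a 515.379 Pa differential pressure, the volumetric flow rate follows directly. The fluid density is an assumption (the abstract does not state the working fluid); water at about 998 kg/m³ is used here.

```python
import math

# Standard orifice equation with the abstract's reported figures.
# The density is assumed (working fluid not stated in the abstract).
Cd, beta = 0.958, 0.5
d = 0.080                      # orifice hole diameter [m]
dp = 515.379                   # differential pressure [Pa]
rho = 998.0                    # assumed fluid density [kg/m^3]

area = math.pi * d ** 2 / 4.0  # orifice bore area [m^2]
Q = Cd * area * math.sqrt(2.0 * dp / (rho * (1.0 - beta ** 4)))
print(f"volumetric flow rate: {Q * 1000:.2f} L/s")  # ~5.05 L/s
```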
1233 Memristor: A Promising Candidate for Neural Circuits in Neuromorphic Computing Systems
Authors: Juhi Faridi, Mohd. Ajmal Kafeel
Abstract:
Advancements in the field of artificial intelligence (AI) and technology have led to the evolution of an intelligent era. Neural networks, with computational power and learning ability similar to the brain's, are one of the key AI technologies. A neuromorphic computing system (NCS) consists of synaptic devices, neuronal circuits, and a neuromorphic architecture. Memristors are a promising candidate for neuromorphic computing systems, but the conductance behavior of synaptic and neuronal memristors needs to be studied thoroughly from both the neuroscience and the computer science perspectives. Furthermore, more simulation work is needed to exploit existing device properties and to guide the development of future devices for different performance requirements. This work aims to provide insight into building neuronal circuits using memristors to achieve a memristor-based NCS. We review the research conducted on memristors for building analog and digital circuits in order to motivate research on NCS built from memristor-based neural circuits for advanced AI applications. This literature review is a step in that direction: we describe the key findings about memristors and their analog and digital circuit implementations over the years, which can be further utilized in implementing the neuronal circuits of an NCS, and we aim to help electronic circuit designers understand how memristor research has progressed and how these findings can be applied to the neuronal circuits behind recent progress in NCS.
Keywords: Analog circuits, digital circuits, memristors, neuromorphic computing systems.
1232 An AI-Generated Semantic Communication Platform in Human-Computer Interaction Course
Authors: Yi Yang, Jiasong Sun
Abstract:
Almost every aspect of our daily lives is now intertwined with some degree of human-computer interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology, and more. The HCI course in the Department of Electronics at Tsinghua University, known as the Media and Cognition course, is constantly updated to reflect the most advanced technological developments, such as virtual reality, augmented reality, and artificial intelligence-based interaction. For more than a decade, this course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. The advancements in AI generation technology, which has gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogues from given prompts. The latest version of the HCI course features a semantic communication platform based on AI generation techniques. We explored a student-centered model and proposed an interest-based teaching method: students are no longer just recipients of knowledge but become active participants in a learning process driven by personal interests, which encourages them to take responsibility for their own education. One of the latest results of this teaching approach in the Media and Cognition course is a student proposal to develop a semantic communication platform rooted in AI generation technologies. The platform addresses a key challenge in communications technology: the ability to preserve visual signals. The interest-based approach emphasizes personal curiosity and active participation, and the proposed platform is an example of the greater creativity students can exert when they have the power to control their own learning.
Keywords: Human-computer interaction, media and cognition course, semantic communication, retain ability, prompts.
1231 A Numerical Simulation of the Indoor Air Flow
Authors: Karel Frana, Jianshun S. Zhang, Milos Muller
Abstract:
The indoor airflow with mixed natural/forced convection was numerically calculated using laminar and turbulent approaches. The Boussinesq approximation was adopted to simplify the mathematical model and the calculations. The results obtained, such as mean velocity fields, were successfully compared with experimental PIV flow visualizations. The effect of the distance between the cooled wall and the heat exchanger on the temperature and velocity distributions was calculated. In a room with a simple shape, the computational code OpenFOAM demonstrated an ability to numerically predict flow patterns. Furthermore, numerical techniques, boundary condition types, and the computational grid quality were examined. The choice of the k-omega turbulence model had a significant effect on the computed temperature and velocity distributions.
Keywords: Natural and forced convection, numerical simulations, indoor airflows.
1230 Numerical Simulation of Fluid Structure Interaction Using Two-Way Method
Authors: Samira Laidaoui, Mohammed Djermane, Nazihe Terfaya
Abstract:
Fluid-structure coupling is a natural phenomenon which reflects the effects of two continua, fluid and structure of different types, in reciprocal action on each other, involving knowledge of elasticity and fluid mechanics. The solution of such problems is based on the relations of continuum mechanics and is mostly obtained with numerical methods. Solving these problems is a computational challenge because of the complex geometries, the intricate physics of fluids, and the complicated fluid-structure interactions. The way in which the interaction between fluid and solid is described offers the largest opportunity for reducing the computational effort. In this paper, a fluid-structure interaction problem is investigated with a two-way coupling method. The Arbitrary Lagrangian-Eulerian (ALE) formulation was used with a dynamic grid, where the solid is described by a Lagrangian formulation and the fluid by an Eulerian one. The simulation was performed in ANSYS.
Keywords: ALE, coupling, FEM, fluid-structure interaction, one-way method, two-way method.
1229 Using Divergent Nozzle with Aerodynamic Lens to Focus Nanoparticles
Authors: Hasan Jumaah Mrayeh, Fue-Sang Lien
Abstract:
ANSYS Fluent is used to run the computational fluid dynamics (CFD) simulations for the efficient lens and nozzle design explained in this paper. We have designed and characterized an aerodynamic lens and a divergent nozzle for a focusing flow that transmits sub-25 nm particles through the aerodynamic lens. The design of the lens and nozzle has been improved using CFD analysis of particle trajectories. We set up a case for calculating 25 nm nanoparticles flowing through the aerodynamic lens and the divergent nozzle. The nanoparticles are transported by air, which is pumped into the aerodynamic lens through the nozzle at 1 atm. We have also developed a computational methodology that can determine the exact focusing characteristics of aerodynamic lens systems. Particle trajectories were traced using the Lagrangian approach. The simulation shows the ability of the aerodynamic lens to focus 25 nm particles after using a divergent nozzle.
Keywords: Aerodynamic lens (AL), divergent nozzle (DN), ANSYS Fluent, Lagrangian approach.
1228 A Signal Driven Adaptive Resolution Short-Time Fourier Transform
Authors: Saeed Mian Qaisar, Laurent Fesquet, Marc Renaudin
Abstract:
The frequency content of non-stationary signals varies with time. For proper characterization of such signals, a smart time-frequency representation is necessary. Classically, the short-time Fourier transform (STFT) is employed for this purpose, but its limitation is a fixed time-frequency resolution. To overcome this drawback, an enhanced STFT version is devised. It is based on a signal-driven sampling scheme named cross-level sampling, which can adapt the sampling frequency and the window function (length plus shape) by following the local variations of the input signal. This adaptation gives the proposed technique its appealing features: adaptive time-frequency resolution and computational efficiency.
Keywords: Level-crossing sampling, activity selection, adaptive resolution analysis, computational complexity.
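A minimal sketch of the signal-driven sampling idea underlying the method: samples are taken only where the signal crosses one of a set of amplitude levels, so the local sampling rate follows the signal's activity. The level grid and test signal are invented, and the adaptive window selection of the full technique is not reproduced.

```python
import numpy as np

# Level-crossing sampling sketch: record a sample whenever the signal
# crosses one of the uniformly spaced amplitude levels. Everything
# below (levels, signal, time grid) is invented for illustration.
t = np.linspace(0.0, 1.0, 10000)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 40 * t**2)
levels = np.arange(-1.5, 1.6, 0.25)            # quantizer levels

samples_t, samples_x = [], []
for i in range(1, len(t)):
    lo, hi = sorted((x[i - 1], x[i]))
    for lev in levels[(levels > lo) & (levels <= hi)]:
        samples_t.append(t[i])                 # crossing in this step
        samples_x.append(lev)

print(f"{len(samples_t)} level-crossing samples vs {len(t)} uniform samples")
```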
1227 A Method of Representing Knowledge of Toolkits in a Pervasive Toolroom Maintenance System
Authors: A. Mohamed Mydeen, Pallapa Venkataram
Abstract:
The learning process needs to be pervasive in order to impart quality in acquiring knowledge about a subject, making use of advances in information and communication systems. However, the pervasive learning paradigms designed so far are of the system-automation type and lack a factual pervasive realm. Providing a factual pervasive realm requires subtle ways of teaching and learning with system intelligence. Augmenting pervasive learning with intelligence necessitates an efficient way of representing knowledge for the system, so that the right learning material is given to the learner. This paper presents a method of representing knowledge for a Pervasive Toolroom Maintenance System (PTMS), in which a learner acquires sublime knowledge about the various kinds of tools kept in the toolroom; the method also helps with effective maintenance of the toolroom. First, we explicate the generic model of knowledge representation for PTMS. Second, we expound the knowledge representation for specific cases of toolkits in PTMS. We also present the conceptual view of knowledge representation using an ontology for both the generic and the specific cases. Third, we devise the relations for pervasive knowledge in PTMS. Finally, events are identified in PTMS and linked with the pervasive data of toolkits based on the relations formulated. The experimental environment and case studies show accurate and efficient knowledge representation of toolkits in PTMS.
Keywords: Generic knowledge representation, toolkit, toolroom, pervasive computing.
1226 Balancing Neural Trees to Improve Classification Performance
Authors: Asha Rani, Christian Micheloni, Gian Luca Foresti
Abstract:
In this paper, a neural tree (NT) classifier having a simple perceptron at each node is considered. A new concept for making a balanced tree is applied in the learning algorithm of the tree. At each node, if the perceptron's classification is inaccurate and unbalanced, the perceptron is replaced by a new one that separates the training set in such a way that an almost equal number of patterns falls into each class. Moreover, each perceptron is trained only for the classes present at the respective node, ignoring the other classes. Splitting nodes are employed in the neural tree architecture to divide the training set when the current perceptron node repeats the classification of its parent node. A new error function based on the depth of the tree is introduced to reduce the training time of a perceptron. Experiments are performed to check the efficiency, and encouraging results are obtained in terms of accuracy and computational cost.
Keywords: Neural tree, pattern classification, perceptron, splitting nodes.
1225 Computational Modeling of Combustion Wave in Nanoscale Thermite Reaction
Authors: Kyoungjin Kim
Abstract:
Nanoscale thermites, such as composite mixtures of nano-sized aluminum and molybdenum trioxide powders, possess several technical advantages, such as a much higher reaction rate and shorter ignition delay, compared to conventional energetic formulations made of micron-sized metal and oxidizer particles. In this study, the self-propagation of the combustion wave in compacted pellets of nanoscale thermite composites is modeled and computationally investigated by utilizing the activation-energy reduction of aluminum particles due to their nanoscale size. The present computational model predicts a combustion wave propagation speed in good agreement with the corresponding thermite reaction experiments. Several characteristics of the thermite reaction in nanoscale composites are also discussed, including the ignition delay and combustion wave structures.
Keywords: Nanoparticles, Thermite reaction, Combustion wave, Numerical modeling.
1224 A Computational Study of Very High Turbulent Flow and Heat Transfer Characteristics in Circular Duct with Hemispherical Inline Baffles
Authors: Dipak Sen, Rajdeep Ghosh
Abstract:
This paper presents a computational study of steady-state, three-dimensional, very high turbulent flow and heat transfer characteristics in a constant-temperature-surfaced circular duct fitted with 90° hemispherical inline baffles. The computations are based on the realizable k-ɛ model with the standard wall function, using the finite volume method and the SIMPLE algorithm. The study covers Reynolds numbers Re from 80000 to 120000, a Prandtl number Pr of 0.73, and pitch ratios PR of 1, 2, 3, 4, and 5 based on the hydraulic diameter of the channel, the hydrodynamic entry length, the thermal entry length, and the test section. ANSYS Fluent 15.0 has been used to solve the flow field. The study reveals that the baffled circular pipe has a higher Nusselt number and friction factor than the smooth circular pipe without baffles. The maximum Nusselt number and friction factor are obtained at PR = 5 and PR = 1, respectively. The Nusselt number increases with increasing pitch ratio over the range of study, whereas the friction factor decreases up to PR = 3, after which it remains almost constant up to PR = 5. The thermal enhancement factor increases with increasing pitch ratio, decreases slightly with Reynolds number over the range of study, and becomes almost constant at higher Reynolds numbers. The computational results reveal an optimum thermal enhancement factor of about 1.23 for the 90° inline hemispherical baffles at pitch ratio 5 and Reynolds number 120000, indicating that the optimum pitch ratio at which the baffles should be installed in such very high turbulent flows is 5. The results show that pitch ratio and Reynolds number play an important role in both the fluid flow and the heat transfer characteristics.
Keywords: Friction factor, heat transfer, turbulent flow, circular duct, baffle, pitch ratio.
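For reference, the thermal enhancement factor reported in the abstract is commonly defined at constant pumping power as (Nu/Nu0)/(f/f0)^(1/3), where Nu0 and f0 refer to the smooth duct. The sketch below evaluates this definition; the ratio values are invented but chosen to reproduce the reported optimum of about 1.23.

```python
# Thermal enhancement factor at constant pumping power (common definition;
# the paper's exact definition is not stated in the abstract). The ratio
# values below are invented, chosen to land near the reported TEF ~ 1.23.
def thermal_enhancement_factor(nu_ratio: float, f_ratio: float) -> float:
    return nu_ratio / f_ratio ** (1.0 / 3.0)

print(thermal_enhancement_factor(nu_ratio=2.15, f_ratio=5.4))  # ~1.23
```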
1223 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the artificial intelligence explosion. AI technologies have matured to the point that most well-known tech giants are making large investments to increase their AI capabilities. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning enables many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network and analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify overheads which could be further optimized. The main contribution is that the evaluation results provide further optimization directions in both performance tuning and algorithmic design.
Keywords: Artificial Intelligence, machine learning, deep learning, convolutional neural networks.
1222 Consistent Modeling of Functional Dependencies along with World Knowledge
Authors: Sven Rebhan, Nils Einecke, Julian Eggert
Abstract:
In this paper we propose a method for vision systems to consistently represent functional dependencies between different visual routines along with relational short- and long-term knowledge about the world. Here the visual routines are bound to visual properties of objects stored in the memory of the system. Furthermore, the functional dependencies between the visual routines are seen as a graph that also belongs to the object's structure. This graph is parsed in the course of acquiring a visual property of an object to automatically resolve the dependencies of the bound visual routines. Using this representation, the system is able to dynamically rearrange the processing order while keeping its functionality. Additionally, the system is able to estimate the overall computational cost of a certain action. We also show that the system can efficiently use this structure to incorporate already acquired knowledge and thus reduce the computational demand.
Keywords: Adaptive systems, knowledge representation, machine vision, systems engineering.
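A hedged sketch of the dependency-parsing idea: if each visual routine lists the routines it depends on, a topological order of the dependency graph yields a valid processing order that the system may rearrange while keeping its functionality. The routine names and dependencies are invented for illustration; the standard-library topological sort stands in for the paper's graph parsing.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Invented example of visual-routine dependencies: acquiring one property
# triggers the routines it depends on, in an order that respects the graph.
dependencies = {
    "object_color":   {"segmentation"},
    "object_size":    {"segmentation", "depth_estimate"},
    "segmentation":   {"image_capture"},
    "depth_estimate": {"image_capture"},
    "image_capture":  set(),
}

# Any topological order is a valid processing order; the system may
# rearrange it dynamically as long as every dependency still comes first.
order = list(TopologicalSorter(dependencies).static_order())
print("processing order:", order)
```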
1221 Concept for Determining the Focus of Technology Monitoring Activities
Authors: Guenther Schuh, Christina Koenig, Nico Schoen, Markus Wellensiek
Abstract:
Identification and selection of appropriate product and manufacturing technologies are key factors for the competitiveness and market success of technology-based companies. Therefore, many companies perform technology intelligence (TI) activities to ensure the identification of evolving technologies at the right time. Technology monitoring is one of the three base activities of TI, besides scanning and scouting. As technological progress accelerates, more and more technologies are being developed. Against the background of limited resources, it is therefore necessary to focus TI activities. In this paper we propose a concept for defining appropriate search fields for technology monitoring. This limitation of the search space leads to more concentrated monitoring activities. The concept is introduced and demonstrated through an anonymized case study conducted within an industry project at the Fraunhofer Institute for Production Technology IPT. The described concept provides a customized monitoring approach suitable for use in technology-oriented companies. This paper shows that the definition of search fields and search tasks is a suitable method for defining topics of interest and thus for aligning monitoring activities. Current and planned product, production, and material technologies, together with existing skills, capabilities, and resources, form the basis for the derivation of relevant search areas. To further improve the concept of technology monitoring, the proposed concept should be extended in future research, e.g., by the definition of relevant monitoring parameters.
Keywords: Monitoring radar, search field, technology intelligence, technology monitoring.
1220 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information; it has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which assists in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets, with experiments conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.
1219 Warning about the Risk of Blood Flow Stagnation after Transcatheter Aortic Valve Implantation
Authors: Aymen Laadhari, Gábor Székely
Abstract:
In this work, the hemodynamics in the sinuses of Valsalva after transcatheter aortic valve implantation is numerically examined. We focus on the physical results in the two-dimensional case. We use a finite element methodology based on a Lagrange multiplier technique that enables coupling the dynamics of blood flow with the leaflets' movement. A massively parallel implementation of a monolithic and fully implicit solver allows higher accuracy and significant computational savings. The elastic properties of the aortic valve are disregarded, and the numerical computations are performed under physiologically correct pressure loads. The computational results show that blood flow may be subject to stagnation in the lower part of the sinuses of Valsalva after transcatheter aortic valve implantation.
Keywords: Hemodynamics, Transcatheter Aortic Valve Implantation, blood flow stagnation, numerical simulations.
1218 Effect of Sand Particle Transportation in Oil and Gas Pipeline Erosion
Authors: Christopher Deekia Nwimae, Nigel Simms, Liyun Lao
Abstract:
Erosion in pipe bends caused by particles is a major concern in oil and gas fields and might cause breakdown of production equipment. This work investigates the effect of sand particle transport in an elbow using a computational fluid dynamics (CFD) approach. A two-way coupled Euler-Lagrange discrete phase model is employed to calculate the air/solid particle flow in the elbow. The generic erosion model in ANSYS Fluent and three particle rebound models are used to predict the erosion rate on 90° elbows. The model results are compared with experimental data from the open literature, validating the CFD-based predictions. The predictions reveal that, due to sand particles impinging on the elbow wall at high velocity, a point on the elbow shows a marked velocity increase (turning red in the contour plots), and the maximum erosion occurs at 48°.
Keywords: Erosion, prediction, elbow, computational fluid dynamics, CFD.
1217 Artificial Intelligence-Based Chest X-Ray Test of COVID-19 Patients
Authors: Dhurgham Al-Karawi, Nisreen Polus, Shakir Al-Zaidi, Sabah Jassim
Abstract:
The management of COVID-19 patients based on chest imaging is emerging as an essential tool for evaluating the spread of the pandemic that has gripped the global community. It has already been used to monitor the respiratory status of COVID-19 patients. The use of chest imaging for the medical triage of patients showing moderate to severe clinical COVID-19 features has increased, owing to the fast spread of the pandemic to all continents and communities. This article demonstrates the development of machine learning techniques for testing COVID-19 patients using chest X-ray (CXR) images in nearly real time, distinguishing COVID-19 infection with a significantly high level of accuracy. The testing has covered a combination of different datasets of CXR images of COVID-19-positive patients, patients with viral and bacterial infections, and people with clear chests. The proposed AI scheme successfully distinguishes CXR scans of COVID-19-infected patients from CXR scans of viral and bacterial pneumonia as well as normal cases, with an average accuracy of 94.43%, sensitivity of 95%, and specificity of 93.86%. Predicted decisions are supported by visual evidence to help clinicians speed up the initial assessment of new suspected cases, especially in resource-constrained environments.
Keywords: COVID-19, chest x-ray scan, artificial intelligence, texture analysis, local binary pattern transform, Gabor filter.
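A minimal sketch of the texture-analysis pipeline suggested by the keywords: local binary pattern (LBP) histograms extracted from CXR images feeding a classifier. The images here are random placeholders, the classifier choice is an assumption, and the paper's full feature set (including Gabor filters) and model are not specified in the abstract.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# LBP-histogram features feeding a classifier. Placeholder data stand in
# for preprocessed CXR scans; the SVM choice is an assumption, and the
# paper's Gabor-filter features are not reproduced here.
P, R = 8, 1.0  # LBP neighbors and radius

def lbp_histogram(image: np.ndarray) -> np.ndarray:
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, 40)            # 1 = COVID-19, 0 = other

features = np.array([lbp_histogram(im) for im in images])
clf = SVC(kernel="rbf").fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```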
1216 Web-Based Tools and Databases for Micro-RNA Analysis: A Review
Authors: Sitansu Kumar Verma, Soni Yadav, Jitendra Singh, Shraddha, Ajay Kumar
Abstract:
MicroRNAs (miRNAs) are a class of approximately 22-nucleotide-long non-coding RNAs which play critical roles in different biological processes. The mature microRNA is usually 19–27 nucleotides long and is derived from a bigger precursor that folds into an imperfect stem-loop structure. Mature miRNAs are involved in many cellular processes, encompassing development, proliferation, stress response, apoptosis, and fat metabolism, through gene regulation. Recent findings reveal that certain viruses encode their own miRNAs, which are processed by the cellular RNAi machinery, and that cellular miRNAs can target the genetic material of invading viruses. Cellular miRNAs can also be used in the virus life cycle, either to upregulate or downregulate viral gene expression. The computational tools used in miRNA target prediction have changed drastically in recent years. Many of the methods are available on the web and can be used by experimental researchers and scientists without expert knowledge of bioinformatics. With the development and ease of use of genomic technologies and computational tools, the field of microRNA biology has advanced tremendously over the previous decade. This review attempts to give an overview of the genome-wide approaches that have allowed the discovery of new miRNAs, and of the development of new miRNA target prediction tools and databases.
Keywords: MicroRNAs, computational tools, gene regulation, databases, RNAi.
1215 Performance Analysis and Optimization for Diagonal Sparse Matrix-Vector Multiplication on Machine Learning Unit
Authors: Qiuyu Dai, Haochong Zhang, Xiangrong Liu
Abstract:
Efficient matrix-vector multiplication with diagonal sparse matrices is pivotal in a multitude of computational domains, ranging from scientific simulations to machine learning workloads. When encoded in the conventional Diagonal (DIA) format, these matrices often induce computational overheads due to extensive zero-padding and non-linear memory accesses, which can hamper the computational throughput, and elevate the usage of precious compute and memory resources beyond necessity. The ‘DIA-Adaptive’ approach, a methodological enhancement introduced in this paper, confronts these challenges head-on by leveraging the advanced parallel instruction sets embedded within Machine Learning Units (MLUs). This research presents a thorough analysis of the DIA-Adaptive scheme’s efficacy in optimizing Sparse Matrix-Vector Multiplication (SpMV) operations. The scope of the evaluation extends to a variety of hardware architectures, examining the repercussions of distinct thread allocation strategies and cluster configurations across multiple storage formats. A dedicated computational kernel, intrinsic to the DIA-Adaptive approach, has been meticulously developed to synchronize with the nuanced performance characteristics of MLUs. Empirical results, derived from rigorous experimentation, reveal that the DIA-Adaptive methodology not only diminishes the performance bottlenecks associated with the DIA format but also exhibits pronounced enhancements in execution speed and resource utilization. The analysis delineates a marked improvement in parallelism, showcasing the DIA-Adaptive scheme’s ability to adeptly manage the interplay between storage formats, hardware capabilities, and algorithmic design. The findings suggest that this approach could set a precedent for accelerating SpMV tasks, thereby contributing significantly to the broader domain of high-performance computing and data-intensive applications.
Keywords: Adaptive method, DIA, diagonal sparse matrices, MLU, sparse matrix-vector multiplication.
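For concreteness, here is a minimal NumPy sketch of SpMV in the DIA storage format the paper optimizes: each stored diagonal is one row of a data array with an integer offset, aligned by column index, and zero-padding fills the positions where a diagonal leaves the matrix. The 4x4 example is invented; the MLU kernel and the DIA-Adaptive scheme itself are not reproduced.

```python
import numpy as np

# DIA-format SpMV sketch: row k of `data` stores the diagonal with offset
# offsets[k], aligned by column index (element A[i, j] with j - i = off
# sits at data[k, j]); zero-padding fills out-of-matrix positions.
offsets = [-1, 0, 2]
data = np.array([
    [5.0, 6.0, 7.0, 0.0],   # offset -1, padded at the end
    [1.0, 2.0, 3.0, 4.0],   # offset  0 (main diagonal)
    [0.0, 0.0, 8.0, 9.0],   # offset +2, padded at the start
])

def dia_spmv(offsets, data, x):
    y = np.zeros(len(x))
    for off, diag in zip(offsets, data):
        for i in range(len(x)):        # y[i] += A[i, i+off] * x[i+off]
            j = i + off
            if 0 <= j < len(x):
                y[i] += diag[j] * x[j]
    return y

# Dense equivalent: [[1,0,8,0],[5,2,0,9],[0,6,3,0],[0,0,7,4]]
print(dia_spmv(offsets, data, np.ones(4)))   # -> [ 9. 16.  9. 11.]
```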
1214 Artificial Neural Networks Technique for Seismic Hazard Prediction Using Seismic Bumps
Authors: Belkacem Selma, Boumediene Selma, Samira Chouraqui, Hanifi Missoum, Tourkia Guerzou
Abstract:
Natural disasters have occurred and will continue to cause human and material damage; the idea of "preventing" natural disasters will therefore never be realized. However, their prediction is possible with the advancement of technology, and even if natural disasters are effectively inevitable, their consequences may be partly controlled. The rapid growth and progress of artificial intelligence (AI) has had a major impact on the prediction of natural disasters and on the risk assessment necessary for effective disaster reduction. Earthquake prediction, which can prevent the loss of human lives and property damage, is an important goal; that is why it is crucial to develop techniques for predicting this natural disaster. This study aims to analyze the ability of artificial neural networks (ANNs) to predict earthquakes occurring in a given area. The data used describe the problem of forecasting high-energy (higher than 10^4 J) seismic bumps in a coal mine, using two longwalls as an example. For this purpose, seismic bump data obtained from mines have been analyzed. The results obtained show that the ANN is able to predict earthquake parameters with high accuracy: the classification accuracy through neural networks is more than 94%, and the models developed are efficient and robust, depending only weakly on the initial database.
Keywords: Earthquake prediction, artificial intelligence, AI, Artificial Neural Network, ANN, seismic bumps.
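A hedged sketch of the kind of ANN classifier the abstract describes: a small multilayer perceptron predicting whether a high-energy (greater than 10^4 J) seismic bump occurs. The features and labels below are synthetic placeholders for the real seismic-bump attributes, and the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Placeholder features stand in for the real seismic-bump attributes
# (seismic/seismoacoustic assessments, energies, pulse counts, ...);
# the hidden-layer sizes are an assumption, not the paper's architecture.
rng = np.random.default_rng(42)
X = rng.random((500, 8))                           # synthetic shift features
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)    # synthetic "hazard" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                      random_state=0).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```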
1213 Low Complexity Hybrid Scheme for PAPR Reduction in OFDM Systems Based on SLM and Clipping
Authors: V. Sudha, D. Sriram Kumar
Abstract:
In this paper, we present a low-complexity hybrid scheme using conventional selective mapping (C-SLM) and clipping algorithms to reduce the high peak-to-average power ratio (PAPR) of the orthogonal frequency division multiplexing (OFDM) signal. In the proposed scheme, the input data sequence (X) is divided into two sub-blocks; the clipping algorithm is applied to the first sub-block, whereas the C-SLM algorithm is applied to the second sub-block, in order to reduce both computational complexity and PAPR. The resultant time-domain OFDM signal is obtained by combining the outputs of the two sub-blocks. The simulation results show that the proposed hybrid scheme provides a 0.45 dB PAPR reduction gain at a CCDF value of 10^-2 and a 52% reduction in computational complexity when compared to the C-SLM scheme, at the expense of a slight degradation in bit error rate (BER) performance.
Keywords: CCDF, clipping, OFDM, PAPR, SLM.
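A compact sketch of the two ingredients the hybrid scheme combines, shown on a single OFDM symbol: envelope clipping at a fixed clipping ratio, and conventional SLM, which multiplies the subcarriers by candidate phase sequences and transmits the lowest-PAPR candidate. The block size, clipping ratio, and number of phase sequences are invented; the sub-block partitioning of the proposed hybrid is not reproduced.

```python
import numpy as np

# One OFDM symbol; clipping ratio, N and the number of SLM candidates
# are invented parameters for the illustration.
rng = np.random.default_rng(7)
N, U = 64, 8                                   # subcarriers, SLM candidates

def papr_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)  # QPSK subcarriers

# Clipping branch: limit the envelope at a clipping ratio of 1.4 x rms.
x = np.fft.ifft(X)
limit = 1.4 * np.sqrt(np.mean(np.abs(x) ** 2))
clipped = np.where(np.abs(x) > limit, limit * x / np.abs(x), x)

# C-SLM branch: random phase rotations, keep the minimum-PAPR candidate.
candidates = [np.fft.ifft(X * np.exp(2j * np.pi * rng.random(N)))
              for _ in range(U)]
best = min(candidates, key=papr_db)

print(f"original {papr_db(x):.2f} dB, clipped {papr_db(clipped):.2f} dB, "
      f"SLM {papr_db(best):.2f} dB")
```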