Search results for: input processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5582

4472 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios and detect dependencies between the covariates and the given target as well as assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the used real data set of Bosch hydraulic valves, the comparability of the same production conditions in the production of hydraulic valves within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing.
In addition, the most suitable methods are selected and accurate quality predictions are achieved.
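As an editorial illustration of the approach described above, here is a minimal, pure-Python sketch of AdaBoost over decision stumps with importance-based feature selection. This is not the authors' implementation: the toy data, the ±1 label convention, and all function names are assumptions for illustration only.

```python
import math

def stump_predict(x, feat, thresh, polarity):
    """A decision stump: predict +polarity if the feature clears the threshold."""
    return polarity if x[feat] >= thresh else -polarity

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost over decision stumps; labels must be -1/+1."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n                       # sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None                         # (weighted error, feat, thresh, polarity)
        for feat in range(d):
            for thresh in sorted({xi[feat] for xi in X}):
                for pol in (1, -1):
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if stump_predict(xi, feat, thresh, pol) != yi)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, pol)
        err, feat, thresh, pol = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, feat, thresh, pol))
        # Reweight: misclassified samples gain weight, then renormalize.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, feat, thresh, pol))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, f, t, p) for a, f, t, p in ensemble)
    return 1 if score >= 0 else -1

def feature_importance(ensemble, n_features):
    """Total |alpha| mass each feature receives across the ensemble."""
    imp = [0.0] * n_features
    for a, f, _, _ in ensemble:
        imp[f] += abs(a)
    return imp
```

Keeping only the features with the largest accumulated |alpha| mirrors the paper's idea of pruning features without a strong loss of predictive power.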

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 157
4471 Architecture for Multi-Unmanned Aerial Vehicles Based Autonomous Precision Agriculture Systems

Authors: Ebasa Girma, Nathnael Minyelshowa, Lebsework Negash

Abstract:

The use of unmanned aerial vehicles (UAVs) in precision agriculture has seen a huge increase recently. As such, systems that aim to apply various algorithms on the field need a structured framework of abstractions. This paper defines the various tasks of the UAVs in precision agriculture and models them into an architectural framework. The presented architecture is built on the context that there will be minimal physical intervention to do the tasks defined with multiple coordinated and cooperative UAVs. Various tasks such as image processing, path planning, communication, data acquisition, and field mapping are employed in the architecture to provide an efficient system. Besides, different limitations on applying multi-UAVs in precision agriculture have been considered in designing the architecture. The architecture provides an autonomous end-to-end solution, starting from mission planning, data acquisition, and an image processing framework that is highly efficient and can enable farmers to comprehensively deploy UAVs onto their lands. Simulation and field tests show that the architecture offers a number of advantages that include fault tolerance, robustness, and developer- and user-friendliness.
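The path-planning and multi-UAV coordination components mentioned above can be hinted at with a small sketch. This is a hypothetical illustration (a boustrophedon coverage pattern with a naive contiguous split among UAVs), not the architecture's actual planner; all names are invented.

```python
def lawnmower_waypoints(width, height, spacing):
    """Boustrophedon (back-and-forth) waypoints covering a rectangular field.
    The UAV flies parallel passes `spacing` metres apart, alternating direction."""
    waypoints, x, upward = [], 0.0, True
    while x <= width + 1e-9:
        ys = (0.0, height) if upward else (height, 0.0)
        waypoints.append((x, ys[0]))
        waypoints.append((x, ys[1]))
        upward = not upward
        x += spacing
    return waypoints

def split_among_uavs(waypoints, n_uavs):
    """Assign contiguous chunks of the path to each UAV (a naive partition)."""
    chunk = -(-len(waypoints) // n_uavs)  # ceiling division
    return [waypoints[i:i + chunk] for i in range(0, len(waypoints), chunk)]
```

In a real system each chunk would then be handled by the communication and data-acquisition layers the abstract describes.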

Keywords: deep learning, multi-UAVs, precision agriculture, UAVs architecture

Procedia PDF Downloads 109
4470 Concussion: Clinical and Vocational Outcomes from Sport Related Mild Traumatic Brain Injury

Authors: Jack Nash, Chris Simpson, Holly Hurn, Ronel Terblanche, Alan Mistlin

Abstract:

There is an increasing incidence of mild traumatic brain injury (mTBI) cases throughout sport and with this, a growing interest from governing bodies to ensure these are managed appropriately and player welfare is prioritised. The Berlin consensus statement on concussion in sport recommends a multidisciplinary approach when managing those patients who do not have full resolution of mTBI symptoms. There is as yet no standardised guideline to follow in the treatment of complex cases of mTBI in athletes. The aim of this project was to analyse the outcomes, both clinical and vocational, of all patients admitted to the mild Traumatic Brain Injury (mTBI) service at the UK’s Defence Medical Rehabilitation Centre Headley Court between 1st June 2008 and 1st February 2017, as a result of a sport-induced injury, and evaluate potential predictive indicators of outcome. Patients were identified from a database maintained by the mTBI service. Clinical and occupational outcomes were ascertained from medical and occupational employment records, recorded prospectively, at time of discharge from the mTBI service. Outcomes were graded based on the vocational independence scale (VIS) and clinical documentation at discharge. Predictive indicators including referral time, age at time of injury, previous mental health diagnosis and a financial claim in place at time of entry to service were assessed using logistic regression. 45 patients were treated for sport-related mTBI during this time frame. Clinically, 96% of patients had full resolution of their mTBI symptoms after input from the mTBI service. 51% of patients returned to work at their previous vocational level, 4% had ongoing mTBI symptoms, 22% had ongoing physical rehabilitation needs, 11% required mental health input and 11% required further vestibular rehabilitation.
Neither age, time to referral, pre-existing mental health condition nor compensation seeking had a significant impact on either vocational or clinical outcome in this population. The vast majority of patients reviewed in the mTBI clinic had persistent symptoms which could not be managed in primary care. A consultant-led, multidisciplinary approach to the diagnosis and management of mTBI has resulted in excellent clinical outcomes in these complex cases. High levels of symptom resolution suggest that this referral and treatment pathway is successful and is a model which could be replicated in other organisations with consultant-led input. Further understanding of both predictive and individual factors would allow clinicians to focus treatments on those who are most likely to develop long-term complications following mTBI. A consultant-led, multidisciplinary service ensures a large number of patients will have complete resolution of mTBI symptoms after sport-related mTBI. Further research is now required to ascertain the key predictive indicators of outcome following sport-related mTBI.
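The predictive-indicator analysis above relies on logistic regression. As a hedged illustration of that statistical step (not the authors' software, data, or variable coding), a minimal batch-gradient-descent fit in pure Python:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by batch gradient descent.
    X: list of feature lists (e.g. referral time, age); y: list of 0/1 outcomes."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict_proba(w, b, x):
    """Predicted probability of a positive outcome for one patient."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

In practice one would inspect the fitted coefficients (and their p-values, via a statistics package) to judge whether a predictor is significant, as the abstract reports.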

Keywords: brain injury, concussion, neurology, rehabilitation, sports injury

Procedia PDF Downloads 149
4469 SVM-RBN Model with Attentive Feature Culling Method for Early Detection of Fruit Plant Diseases

Authors: Piyush Sharma, Devi Prasad Sharma, Sulabh Bansal

Abstract:

Diseases are fairly common in fruits and vegetables because of the changing climatic and environmental circumstances. Crop diseases, which are frequently difficult to control, interfere with the growth and output of the crops. Accurate disease detection and timely disease control measures are required to guarantee high production standards and good quality. In India, apples are a common crop that may be afflicted by a variety of diseases on the fruit, stem, and leaves. Fungi, bacteria, and viruses trigger the early symptoms of leaf diseases. In order to assist farmers in taking the appropriate action, it is important to develop an automated system that can detect the type of disease. Using machine learning-based image processing, this research proposes a system that can automatically identify diseases in apple fruit and apple plants. To this end, the research utilizes a hybrid SVM-RBN model, which achieves accuracy, precision, recall, and F1 score of 96%, 99%, 94%, and 93%, respectively.
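The four figures reported above all derive from a confusion matrix. A minimal sketch of how they are computed, assuming binary labels with 1 = diseased (an illustration, not the authors' evaluation code):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = diseased)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```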

Keywords: fruit plant disease, crop disease, machine learning, image processing, SVM-RBN

Procedia PDF Downloads 57
4468 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake data input for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to ensure that site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters may lead to misleading results, due to the misinterpretation of required input data and the incorrect synthesis of the analysis hypothesis. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage; the geometrical damage, which indicates the change in the initial structure’s geometry due to the residual deformation as a result of plastic behaviour, and the mechanical damage which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation.
Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario would be more complicated if SSI was also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes which neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damages have been taken into account after the earthquake analysis step. For comparison, an identical model is also created, which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected in the case when SSI is included in the scenario. The results are validated using the literature.
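The loss of fire resistance described above ultimately traces back to the temperature-dependent degradation of steel strength. As background only (not the paper's ABAQUS model), the sketch below interpolates the effective yield strength reduction factors tabulated in EN 1993-1-2 and combines them with a purely hypothetical scalar damage factor standing in for earthquake-induced degradation:

```python
# Effective yield strength reduction factor k_y,theta for carbon steel,
# linearly interpolated between tabulated values of EN 1993-1-2.
_KY = [(20, 1.00), (400, 1.00), (500, 0.78), (600, 0.47),
       (700, 0.23), (800, 0.11), (900, 0.06), (1000, 0.04),
       (1100, 0.02), (1200, 0.00)]

def ky_theta(temp_c):
    """Reduction factor at steel temperature temp_c (deg C)."""
    if temp_c <= _KY[0][0]:
        return _KY[0][1]
    for (t0, k0), (t1, k1) in zip(_KY, _KY[1:]):
        if temp_c <= t1:
            return k0 + (k1 - k0) * (temp_c - t0) / (t1 - t0)
    return 0.0

def residual_capacity(fy, damage_factor, temp_c):
    """Remaining yield strength (MPa) after mechanical damage and heating.
    `damage_factor` (0..1) is a hypothetical scalar for earthquake damage;
    the paper models damage through FE analysis, not a single scalar."""
    return fy * (1.0 - damage_factor) * ky_theta(temp_c)
```

The compounding of the two factors illustrates why a damaged structure reaches its failure threshold at a lower fire exposure than an undamaged one.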

Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 119
4467 Optimizing the Public Policy Information System under the Environment of E-Government

Authors: Qian Zaijian

Abstract:

E-government is one of the hot issues in current academic research on public policy and management. As the organic integration of information and communication technology (ICT) and public administration, e-government is one of the most important areas in the contemporary information society. The policy information system is a basic subsystem of the public policy system; its operation affects the overall effect of the policy process and can even exert a direct impact on the operation of a public policy and its success or failure. The basic principle of its operation is information collection, processing, analysis and release for a specific purpose. E-government serves the public policy information system by promoting public access to policy information resources, enabling information transmission through e-participation and e-consultation in the process of policy analysis and information processing, and providing electronic services for policy information storage, thereby promoting the optimization of policy information systems. However, due to many factors, the capacity of e-government to promote policy information system optimization has practical limits. In building e-government in China, we should follow such major paths as adhering to the principle of freedom of information, eliminating the information divide (gap), expanding e-consultation, and breaking down information silos, so as to promote the optimization of public policy information systems.

Keywords: China, e-consultation, e-democracy, e-government, e-participation, ICTs, public policy information systems

Procedia PDF Downloads 857
4466 Isolation Enhancement of Compact Dual-Band Printed Multiple Input Multiple Output Antenna for WLAN Applications

Authors: Adham M. Salah, Tariq A. Nagem, Raed A. Abd-Alhameed, James M. Noras

Abstract:

Recently, the demand for wireless communications systems to cover more than one frequency band (multi-band) with high data rate has increased for both fixed and mobile services. Multiple Input Multiple Output (MIMO) technology is one of the significant solutions for attaining these requirements and for achieving the maximum channel capacity of wireless communications systems. The main issue associated with MIMO antennas, especially in portable devices, is the compact space between the radiating elements, which limits the physical separation between them. This issue exacerbates the performance of the MIMO antennas by increasing the mutual coupling between the radiating elements. In other words, the mutual coupling will be stronger if the radiating elements of the MIMO antenna are closer. This paper presents a low-profile dual-band (2×1) MIMO antenna that works at 2.4 GHz, 5.3 GHz and 5.8 GHz for wireless local area network (WLAN) applications. A neutralization line (NL) technique for enhancing the isolation has been used by introducing a strip line with a length of λg/4 at the isolation frequency (2.4 GHz) between the radiating elements. The overall dimensions of the antenna are 33.5 × 36 × 1.6 mm³. The fabricated prototype shows a good agreement between the simulated and measured results. The antenna impedance bandwidths are 2.38–2.75 GHz and 4.4–6 GHz for the lower and upper bands, respectively; the reflection coefficient and mutual coupling are better than -25 dB in both bands. The MIMO antenna performance characteristics are reported in terms of the scattering parameters, envelope correlation coefficient (ECC), total active reflection coefficient, capacity loss, antenna gain, and radiation patterns. Analysis of these characteristics indicates that the design is appropriate for WLAN terminal applications.
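The envelope correlation coefficient reported above is commonly estimated from measured S-parameters using the well-known lossless-antenna approximation (Blanch et al.). A minimal sketch with illustrative complex values (the numbers are invented, not the paper's measurements):

```python
def envelope_correlation(s11, s12, s21, s22):
    """Envelope correlation coefficient of a two-port MIMO antenna
    from complex S-parameters (lossless-antenna approximation)."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2) *
           (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den
```

Low mutual coupling (small |S12|, |S21|), as achieved by the neutralization line, drives the numerator (and hence the ECC) towards zero.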

Keywords: ECC, neutralization line, MIMO antenna, multi-band, mutual coupling, WLAN

Procedia PDF Downloads 130
4465 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery

Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas

Abstract:

The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model can capture the dynamics, which helps in energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and aging of the battery, and this makes a great impact on the performance of the model. Therefore, to increase the equivalent circuit model performance, the parameter estimation has been carried out in the frequency domain. The battery is a very complex system, associated with various chemical reactions and heat generation, so it is very difficult to select the optimal model structure. In general, increasing the model order improves model accuracy. However, a higher-order model tends toward over-parameterization and poor prediction capability, while model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. The selective frequency domain estimation has been carried out, first by estimating the frequencies of the input and output by subspace decomposition, then by choosing the specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen.
For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.
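As a hedged sketch of the recursive least-squares step mentioned above (a generic textbook RLS update, not the authors' frequency-selective implementation), for a linear-in-parameters model y ≈ θᵀφ:

```python
def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update for y ≈ thetaᵀ·phi.
    theta: parameter estimates, P: covariance matrix (list of lists),
    phi: regressor vector, lam: forgetting factor (1.0 = no forgetting)."""
    n = len(theta)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [pi / denom for pi in Pphi]                       # gain vector
    err = y - sum(theta[i] * phi[i] for i in range(n))    # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n)]
         for i in range(n)]
    return theta, P
```

For ECM identification the regressor φ would be built from past currents and voltages; a forgetting factor below 1 lets the estimate track the time-varying parameters the abstract describes.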

Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition

Procedia PDF Downloads 142
4464 High Input Driven Factors in Idea Campaigns in Large Organizations: A Case Depicting Best Practices

Authors: Babar Rasheed, Saad Ghafoor

Abstract:

Introduction: Idea campaigns are commonly held across organizations to generate employee engagement, and contributions are specifically solicited to identify and solve prevalent issues. It is argued that numerous organizations fail to achieve their desired goals despite arranging such campaigns and investing heavily in them. There are, however, practices that some organizations use to achieve a higher degree of effectiveness, and research can explore these practices to make them usable for other organizations. Purpose: The aim of this research is to surface the idea management practices of a leading electric company with global operations. The study involves a large, multi-site organization, which is taken to face added challenges in managing ideas from employees in comparison to smaller organizations. The study aims to highlight the factors considered as the idea management team strategizes for the campaign, sets its terms and rewards, follows up with employees and, lastly, evaluates and awards ideas. Methodology: The study is conducted in a leading electric appliance corporation that has a large number of employees and operates in numerous regions of the world. A total of 7 interviews are carried out, involving the chief innovation officer, innovation manager and members of the idea management and evaluation teams. The interviews are carried out either on Skype or in person, based on the availability of the interviewee. Findings: As this is a working paper and the study is under way, it is anticipated that valuable information will be obtained about specific details of how idea management systems are governed and how idea campaigns are carried out. The findings may be particularly useful for innovation consultants as resources they can use to promote idea campaigning. The usefulness of the best practices highlighted as a result is, in any case, the most valuable output of this study.

Keywords: employee engagement, motivation, idea campaigns, large organizations, best practices, employees input, organizational output

Procedia PDF Downloads 168
4463 Comparative Efficacy of Gas Phase Sanitizers for Inactivating Salmonella, Escherichia coli O157:H7 and Listeria monocytogenes on Intact Lettuce Heads

Authors: Kayla Murray, Andrew Green, Gopi Paliyath, Keith Warriner

Abstract:

Introduction: It is now acknowledged that control of human pathogens associated with fresh produce requires an integrated approach of several interventions as opposed to relying on post-harvest washes to remove field-acquired contamination. To this end, current research is directed towards identifying such interventions that can be applied at different points in leafy green processing. Purpose: In the following, the efficacy of different gas phase treatments to decontaminate whole lettuce heads during pre-processing storage was evaluated. Methods: Whole Cos lettuce heads were spot inoculated with L. monocytogenes, E. coli O157:H7 or Salmonella spp. The inoculated lettuce heads were then placed in a treatment chamber and exposed to ozone, chlorine dioxide or hydroxyl radicals for different time periods under a range of relative humidity. Survivors of the treatments were enumerated, along with sensory analysis performed on the treated lettuce. Results: Ozone gas reduced L. monocytogenes by 2 log10 after ten minutes of exposure, with Salmonella and E. coli O157:H7 decreased by 0.66 and 0.56 log cfu, respectively. Chlorine dioxide gas treatment reduced L. monocytogenes and Salmonella on lettuce heads by 4 log cfu but only supported a 0.8 log cfu reduction in E. coli O157:H7 numbers. In comparison, hydroxyl radicals supported a 2.9 – 4.8 log cfu reduction of model human pathogens inoculated onto lettuce heads but required extended exposure times and relative humidity < 0.8. Significance: Of the gas phase sanitizers tested, chlorine dioxide and hydroxyl radicals are the most effective. The latter process holds most promise based on the ease of delivery, worker safety and preservation of lettuce sensory characteristics. Although exposure times for hydroxyl radicals were relatively long (24 h), this should not be considered a limitation given the intervention is applied in store rooms or in transport containers during transit.
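The "log reduction" figures above follow from a simple decimal-logarithm calculation on colony counts before and after treatment; a minimal sketch:

```python
import math

def log_reduction(n_before, n_after):
    """Decimal (log10) reduction achieved by a sanitizer treatment,
    from CFU counts before and after."""
    return math.log10(n_before / n_after)

def survivors(n_before, reduction):
    """CFU remaining after a given log10 reduction."""
    return n_before / 10 ** reduction
```

So a 4 log cfu reduction (as for chlorine dioxide) leaves one ten-thousandth of the initial population.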

Keywords: gas phase sanitizers, iceberg lettuce heads, leafy green processing

Procedia PDF Downloads 405
4462 Contrasting Infrastructure Sharing and Resource Substitution Synergies Business Models

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) relies on two modes of cooperation, infrastructure sharing and resource substitution, to obtain economic and environmental benefits. The former consists in the intensification of use of an asset, while the latter is based on the use of waste, fatal energy (and utilities) as alternatives to standard inputs. Both modes, in fact, rely on the shift from a business-as-usual functioning towards an alternative production system structure, so that from a business point of view the distinction is not clear. In order to investigate the way those cooperation modes can be distinguished, we consider the stakeholders' interplay in the business model structure regarding their resources and requirements. For infrastructure sharing (following the economic engineering literature), the cost function of capacity induces economies of scale, so that demand pooling reduces global expenses. The grassroots investment sizing decision and the ex-post pricing strongly depend on the design optimization phase for capacity sizing, whereas ex-post operational cost sharing to minimize budgets is less dependent upon production rates. Value is then mainly design driven. For resource substitution, synergy value stems from availability and is at risk regarding both supplier and user load profiles and market prices of the standard input. Baseline input purchasing cost reduction is thus more driven by the operational phase of the symbiosis and must be analyzed within the whole sourcing policy (including diversification strategies and expensive back-up replacement). Moreover, while resource substitution involves a chain of intermediate processors to match quality requirements, the infrastructure model relies on a single operator whose competencies allow the production of non-rival goods. Transaction costs appear higher in resource substitution synergies due to the high level of customization, which induces asset specificity and non-homogeneity, following transaction cost economics arguments.
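The economies-of-scale argument for infrastructure sharing can be made concrete with a toy sub-additive cost function; the coefficient and exponent below are invented for illustration, not taken from the paper:

```python
def capacity_cost(q, a=100.0, b=0.6):
    """Illustrative capacity cost with economies of scale: C(q) = a * q**b.
    With b < 1, unit cost falls as capacity grows (sub-additive cost)."""
    return a * q ** b

def pooling_saving(q1, q2):
    """Cost saved by building one shared asset for both demands
    instead of two separate ones."""
    return capacity_cost(q1) + capacity_cost(q2) - capacity_cost(q1 + q2)
```

A positive saving for any pair of demands is exactly the "demand pooling reduces global expenses" claim above.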

Keywords: business model, capacity, sourcing, synergies

Procedia PDF Downloads 171
4461 Education-based, Graphical User Interface Design for Analyzing Phase Winding Inter-Turn Faults in Permanent Magnet Synchronous Motors

Authors: Emir Alaca, Hasbi Apaydin, Rohullah Rahmatullah, Necibe Fusun Oyman Serteller

Abstract:

In recent years, Permanent Magnet Synchronous Motors (PMSMs) have found extensive applications in various industrial sectors, including electric vehicles, wind turbines, and robotics, due to their high performance and low losses. Accurate mathematical modeling of PMSMs is crucial for advanced studies in electric machines. To enhance the effectiveness of graduate-level education, incorporating virtual or real experiments becomes essential to reinforce acquired knowledge. Virtual laboratories have gained popularity as cost-effective alternatives to physical testing, mitigating the risks associated with electrical machine experiments. This study presents a MATLAB-based Graphical User Interface (GUI) for PMSMs. The GUI offers a visual interface that allows users to observe variations in motor outputs corresponding to different input parameters. It enables users to explore healthy motor conditions and the effects of short-circuit faults in one of the phase windings. Additionally, the interface includes menus through which users can access equivalent circuits related to the motor and gain hands-on experience with the mathematical equations used in synchronous motor calculations. The primary objective of this paper is to enhance the learning experience of graduate and doctoral students by providing a GUI-based approach in laboratory studies. This interactive platform empowers students to examine and analyze motor outputs by manipulating input parameters, facilitating a deeper understanding of PMSM operation and control.
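One of the core equations such a GUI would expose is the standard dq-frame torque equation of a PMSM. A minimal sketch of that textbook formula (not the authors' MATLAB code; parameter names are assumptions):

```python
def pmsm_torque(pole_pairs, flux_m, l_d, l_q, i_d, i_q):
    """Electromagnetic torque of a PMSM in the dq frame:
    Te = 1.5 * p * (lambda_m * i_q + (L_d - L_q) * i_d * i_q),
    i.e. the magnet torque plus the reluctance torque."""
    return 1.5 * pole_pairs * (flux_m * i_q + (l_d - l_q) * i_d * i_q)
```

For a surface-mounted machine (L_d = L_q) the reluctance term vanishes and torque is proportional to i_q, which is exactly the kind of input-output relationship the GUI lets students explore.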

Keywords: permanent magnet synchronous motor, mathematical modelling, education tools, winding inter-turn fault

Procedia PDF Downloads 47
4460 The Secrecy Capacity of the Semi-Deterministic Wiretap Channel with Three State Information

Authors: Mustafa El-Halabi

Abstract:

A general model of the wiretap channel with states is considered, where the legitimate receiver's and the wiretapper's observations depend on three states S1, S2 and S3. State S1 is non-causally known to the encoder, S2 is known to the receiver, and S3 remains unknown. A secure coding scheme based on structured binning is proposed, and it is shown to achieve the secrecy capacity when the signal at the legitimate receiver is a deterministic function of the input.
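For context, the classical building blocks such a scheme combines are the Csiszár–Körner secrecy capacity of the wiretap channel and the Gelfand–Pinsker capacity for a channel with non-causal encoder state; the paper's exact capacity expression for the three-state model is not reproduced here.

```latex
% Csiszar-Korner secrecy capacity of a general wiretap channel
C_s \;=\; \max_{p(v)\,p(x\mid v)} \bigl[\, I(V;Y) - I(V;Z) \,\bigr]

% Gelfand-Pinsker capacity with state S_1 known non-causally at the encoder
C \;=\; \max_{p(u\mid s_1),\; x(u,s_1)} \bigl[\, I(U;Y) - I(U;S_1) \,\bigr]
```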

Keywords: physical layer security, interference, side information, secrecy capacity

Procedia PDF Downloads 383
4459 Using Bidirectional Encoder Representations from Transformers to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Authors: Maryam Heidari, James H. Jones Jr.

Abstract:

Millions of online posts about different topics and products are shared on popular social media platforms. One use of this content is to provide crowd-sourced information about a specific topic, event or product. However, this use raises an important question: what percentage of information available through these services is trustworthy? In particular, might some of this information be generated by a machine, i.e., a bot, instead of a human? Bots can be, and often are, purposely designed to generate enough volume to skew an apparent trend or position on a topic, yet the consumer of such content cannot easily distinguish a bot post from a human post. In this paper, we introduce a model for social media bot detection which uses Bidirectional Encoder Representations from Transformers (Google BERT) for sentiment classification of tweets to identify topic-independent features. Our use of a Natural Language Processing approach to derive topic-independent features for our new bot detection model distinguishes this work from previous bot detection models. We achieve 94% accuracy classifying the contents of data as generated by a bot or a human, where the most accurate prior work achieved accuracy of 92%.
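The idea of sentiment-derived, topic-independent features can be sketched without BERT. The lexicon below is a toy stand-in (the paper uses BERT sentiment classification, not a word list), and the feature names are assumptions for illustration:

```python
# Toy sentiment lexicon (illustrative only; the paper uses BERT instead).
POSITIVE = {"great", "good", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment_score(text):
    """Crude per-post sentiment in [-1, 1]: (pos - neg) word rate."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

def sentiment_features(posts):
    """Topic-independent account-level features: mean and variance of
    per-post sentiment, which could feed a downstream bot classifier."""
    scores = [sentiment_score(p) for p in posts]
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return {"mean": mean, "variance": var}
```

The intuition is that these statistics depend on how an account expresses sentiment rather than on what topic it posts about, which is the topic-independence property the paper exploits.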

Keywords: bot detection, natural language processing, neural network, social media

Procedia PDF Downloads 111
4458 Physical Activity and Cognitive Functioning Relationship in Children

Authors: Comfort Mokgothu

Abstract:

This study investigated the relation between information processing and fitness level of active (fit) and sedentary (unfit) children drawn from rural and urban areas in Botswana. It was hypothesized that fit children would display faster simple reaction times (SRT), choice reaction times (CRT) and simple movement times (SMT). Sixty third-grade children (7.0–9.0 years) were initially selected, and based upon fitness testing, 45 participated in the study (15 each of fit urban, unfit urban, fit rural). All children completed anthropometric measures, skinfold testing and submaximal cycle ergometer testing. The cognitive testing included SRT, CRT, SMT and choice movement time (CMT) and memory sequence length. Results indicated that the rural fit group exhibited faster SMT than the urban fit and unfit groups. For CRT, both fit groups were faster than the unfit group. Collectively, the study shows that the relationship that exists between physical fitness and cognitive function amongst the elderly can tentatively be extended to the pediatric population. Physical fitness could be a factor in the speed at which we process information, including decision making, even in children.

Keywords: decision making, fitness, information processing, reaction time, cognition, movement time

Procedia PDF Downloads 138
4457 An Intelligent Nondestructive Testing System of Ultrasonic Infrared Thermal Imaging Based on Embedded Linux

Authors: Hao Mi, Ming Yang, Tian-yue Yang

Abstract:

Ultrasonic infrared nondestructive testing is a testing method with high speed, accuracy and localization capability. However, there are still some problems: detection requires manual real-time field judgment, and the methods for storing and viewing results are still primitive. An intelligent nondestructive detection system based on embedded Linux is put forward in this paper. The hardware part of the detection system is based on an ARM (Advanced RISC Machine) core, and an embedded Linux system is built to realize image processing and defect detection in thermal images. The CLAHE algorithm and the Butterworth filter are used to process the thermal image, and then the Boa web server and CGI (Common Gateway Interface) technology are used to transmit the test results to the display terminal through the network for real-time and remote monitoring. The system also reduces manual labor and removes the need for on-site manual judgment. According to the experimental results, the system provides a convenient and quick solution for industrial nondestructive testing.
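Full CLAHE is tile-based with bilinear interpolation between tiles (e.g., OpenCV's createCLAHE). The sketch below shows only the core step, contrast-limited histogram equalization on a single tile of 8-bit pixels, as a hedged illustration rather than the system's actual implementation:

```python
def clipped_hist_equalize(pixels, clip_limit=40, n_levels=256):
    """Contrast-limited histogram equalization for one tile of 8-bit pixels.
    Clipping the histogram bounds the slope of the mapping, which limits
    noise amplification compared with plain histogram equalization."""
    hist = [0] * n_levels
    for p in pixels:
        hist[p] += 1
    # Clip each bin and collect the excess counts.
    excess = 0
    for i in range(n_levels):
        if hist[i] > clip_limit:
            excess += hist[i] - clip_limit
            hist[i] = clip_limit
    # Redistribute the excess uniformly across all bins.
    bonus = excess // n_levels
    hist = [h + bonus for h in hist]
    # Cumulative distribution -> intensity mapping.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    scale = (n_levels - 1) / cdf[-1]
    mapping = [round(c * scale) for c in cdf]
    return [mapping[p] for p in pixels]
```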

Keywords: remote monitoring, non-destructive testing, embedded Linux system, image processing

Procedia PDF Downloads 217
4456 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics

Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood

Abstract:

We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2) to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
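The per-file parallelism DFORC2 exploits can be illustrated with a small sketch: each worker streams one evidence file and returns its digest, and a pool fans the jobs out. This is only an illustration of the fan-out pattern using the standard library, not the DFORC2/Autopsy/Spark APIs; the file names are made up for the demo.

```python
import hashlib
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def hash_evidence(path, algo="sha256", chunk=1 << 20):
    """One worker task: stream a file and return its cryptographic digest,
    the kind of per-file job a forensics cluster fans out across nodes."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return path, h.hexdigest()

def ingest_parallel(paths, workers=4):
    """Fan the per-file jobs out to a worker pool and collect results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(hash_evidence, paths))

# Create three small fake "evidence" files and ingest them in parallel.
tmp = tempfile.mkdtemp()
files = []
for i in range(3):
    p = os.path.join(tmp, f"evidence_{i}.bin")
    with open(p, "wb") as f:
        f.write(bytes([i]) * 1024)
    files.append(p)
digests = ingest_parallel(files)
```

In a cluster setting the pool would be replaced by Spark executors and the results published to a Kafka topic, as the paper describes.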

Keywords: digital forensics, cloud computing, cyber security, Spark, Kubernetes, Kafka

Procedia PDF Downloads 387
4455 Using Electrical Impedance Tomography to Control a Robot

Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi

Abstract:

Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an EIT device able to navigate a robotic arm to manipulate a target object. The design of the device includes hardware and software sections that perform the imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container; this image is used to navigate a 3-DOF robotic arm to the exact location of the target object. The data set for impedance imaging is obtained through repeated current injections and voltage measurements between all electrode pairs. After the calculations needed to obtain the impedance, the information is transmitted to a computer, where it is processed in MATLAB interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated with MATLAB's Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm, and the arm moves to the desired tissue on the user's command.
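The last two steps of the pipeline (finding the target's center in the reconstructed image, then solving for joint angles) can be sketched as follows. As assumptions: the segmented EIT image is a binary blob, and a planar 2-link arm stands in for the paper's 3-DOF design; link lengths and target coordinates are illustrative.

```python
import math
import numpy as np

def target_centroid(binary_img):
    """Centroid (row, col) of the segmented target in the reconstructed
    image -- the role played by MATLAB's IPT in the paper."""
    rows, cols = np.nonzero(binary_img)
    return rows.mean(), cols.mean()

def two_link_ik(x, y, l1, l2):
    """Planar 2-link inverse kinematics (simplified stand-in for the
    3-DOF arm): returns shoulder and elbow angles in radians."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))      # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

img = np.zeros((32, 32))
img[10:14, 20:24] = 1                  # pretend this blob is the target
r, c = target_centroid(img)
t1, t2 = two_link_ik(0.3, 0.4, l1=0.3, l2=0.3)
```

A forward-kinematics check (plugging the angles back in) is the usual way to validate such a solver.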

Keywords: electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography

Procedia PDF Downloads 269
4454 An Efficient FPGA Realization of FIR Filter Using Distributed Arithmetic

Authors: M. Iruleswari, A. Jeyapaul Murugan

Abstract:

The most fundamental building block in many Digital Signal Processing (DSP) applications is the Finite Impulse Response (FIR) filter, thanks to its linear phase, stability and regular structure. Designing a high-speed, hardware-efficient FIR filter is a challenging task because complexity increases with filter order. Many applications require high-order filters, but the memory usage of the filter grows exponentially with its order. Multipliers occupy a large chip area and need long computation times. Multiplier-less, memory-based techniques have gained popularity over the past two decades due to their high-throughput processing capability and reduced dynamic power consumption. This paper describes the design and implementation of a highly efficient Look-Up Table (LUT) based circuit for an FIR filter using the distributed arithmetic algorithm, yielding a multiplier-less FIR filter. The LUT can be subdivided into several smaller LUTs to reduce the memory usage for higher-order filters. The performance of various filter orders with different address lengths is analysed using the Xilinx 14.5 synthesis tool. The proposed design provides lower latency, lower memory usage and higher throughput.
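The distributed arithmetic trick the abstract relies on can be demonstrated in a few lines: precompute all partial sums of the coefficients into a LUT, then process the input words bit-serially, using each bit-plane as a LUT address and shift-accumulating. The sketch below assumes unsigned inputs for simplicity (real designs use offset-binary or two's-complement handling); the 4-tap coefficients and samples are illustrative.

```python
def build_da_lut(coeffs):
    """Distributed-arithmetic LUT: entry `addr` holds the sum of the
    coefficients whose corresponding input bit is set (2^K entries)."""
    k = len(coeffs)
    lut = [0.0] * (1 << k)
    for addr in range(1 << k):
        lut[addr] = sum(coeffs[i] for i in range(k) if (addr >> i) & 1)
    return lut

def da_inner_product(lut, samples, nbits=8):
    """Compute sum_i c_i * x_i without multipliers: walk the inputs
    bit-serially (LSB first), address the LUT with each bit-plane,
    and shift-accumulate the looked-up partial sums."""
    acc = 0.0
    for b in range(nbits):
        addr = 0
        for i, x in enumerate(samples):
            addr |= ((x >> b) & 1) << i
        acc += lut[addr] * (1 << b)     # shift-accumulate (2^b weight)
    return acc

coeffs = [1.0, 2.0, -0.5, 3.0]          # 4-tap example filter
samples = [10, 200, 33, 7]              # unsigned 8-bit input window
lut = build_da_lut(coeffs)
y_da = da_inner_product(lut, samples)
y_direct = sum(c * x for c, x in zip(coeffs, samples))
```

Splitting one 2^K LUT into several smaller LUTs whose outputs are added, as the paper proposes, trades a little extra adder logic for exponentially less memory.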

Keywords: finite impulse response, distributed arithmetic, field programmable gate array, look-up table

Procedia PDF Downloads 453
4453 Listening Anxiety in Iranian EFL Learners

Authors: Samaneh Serraj

Abstract:

Listening anxiety has a detrimental effect on language learners. Through a qualitative study of Iranian EFL learners, several factors were identified as influencing their listening anxiety. These factors fell into three categories: individual factors (nerves and emotionality, use of inappropriate strategies, and lack of practice), input factors (lack of time to process, lack of visual support, nature of the speech, and level of difficulty) and environmental factors (instructors, peers and the class environment).

Keywords: listening comprehension, listening anxiety, foreign language learners

Procedia PDF Downloads 462
4452 Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework

Authors: U. S. N. Raju, Kothuri Sai Kiran, Meena G. Kamal, Vinay Nikhil Pabba, Suresh Kanaparthi

Abstract:

There is a huge amount of lecture video data available for public use, and many more lecture videos are created and uploaded every day. Searching this huge database for videos on a required topic is a challenging task, so an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. Because the amount of video lecture data is huge, processing it in a centralized computation framework is very inefficient; hence, the Hadoop framework for distributed computing over big video data is used. The first step in the process is automatic video segmentation and key-frame detection, which offer a visual guideline for navigating the video content. In the next step, textual metadata is extracted by applying Optical Character Recognition (OCR) to the key-frames. The OCR output and the detected slide text line types are used for keyword extraction, by which both video-level and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process on a large database can be improved by distributed computing on the Hadoop framework.
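The keyword-extraction step maps naturally onto the MapReduce pattern Hadoop provides. The sketch below simulates that flow in plain Python (it does not use the Hadoop API): the map phase emits (word, 1) pairs from OCR'd key-frame text, and the reduce phase sums them per segment and per video. The segment text and stop-word list are made up for illustration.

```python
from collections import Counter
from functools import reduce

# Pretend these are OCR'd text lines from key-frames of two video segments.
segments = {
    "seg1": ["hadoop framework for distributed computing",
             "distributed computing for big video data"],
    "seg2": ["video segmentation and key frame detection"],
}

STOPWORDS = {"for", "and", "the", "of"}

def map_phase(lines):
    """Map: emit (word, 1) pairs from each OCR'd line, dropping stop words."""
    return [(w, 1) for line in lines for w in line.split()
            if w not in STOPWORDS]

def reduce_phase(pairs):
    """Reduce: sum counts per word (what Hadoop's shuffle/reduce does)."""
    counts = Counter()
    for w, n in pairs:
        counts[w] += n
    return counts

# Segment-level keywords, then video-level keywords by merging segments.
segment_keywords = {sid: reduce_phase(map_phase(lines))
                    for sid, lines in segments.items()}
video_keywords = reduce(lambda a, b: a + b, segment_keywords.values())
```

On a real cluster the map and reduce functions run on different nodes over HDFS splits, which is where the scaling the paper reports comes from.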

Keywords: video lectures, big video data, video retrieval, Hadoop

Procedia PDF Downloads 523
4451 Design and Implementation of A 10-bit SAR ADC with A Programmable Reference

Authors: Hasmayadi Abdul Majid, Yuzman Yusoff, Noor Shelida Salleh

Abstract:

This paper presents the development of a single-ended 38.5 kS/s 10-bit programmable-reference SAR ADC realized in MIMOS's 0.35 µm CMOS process. The design uses a resistive DAC, a dynamic comparator with a pre-amplifier and SAR digital logic to form a 10-bit ADC. Programmable reference circuitry allows the ADC to operate with input ranges from 0.6 V to 2.1 V. The implemented converter consumes less than 7.5 mW from a 3 V supply.
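The successive-approximation algorithm at the heart of a SAR ADC is a binary search against the DAC, one bit per clock cycle. A behavioural sketch (an idealized model, not the paper's circuit) also shows how a programmable reference changes the resolvable input range:

```python
def sar_convert(vin, vref, nbits=10):
    """Successive approximation: trial-set each bit from MSB to LSB and
    keep it if the DAC output is still at or below the input -- one
    comparator decision per bit, 10 cycles for a 10-bit code."""
    code = 0
    for bit in range(nbits - 1, -1, -1):
        trial = code | (1 << bit)
        if trial * vref / (1 << nbits) <= vin:   # comparator decision
            code = trial
    return code

# The programmable reference lets the same ADC cover different input
# ranges (0.6 V to 2.1 V in the paper): one LSB is vref / 2^10.
for vref in (0.6, 1.2, 2.1):
    lsb = vref / 1024
    mid_code = sar_convert(vref / 2, vref)
```

Lowering the reference shrinks the LSB, trading input range for finer voltage resolution over that range.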

Keywords: successive approximation register analog-to-digital converter, SAR ADC, resistive DAC, programmable reference

Procedia PDF Downloads 507
4450 Effect of Temperature and Deformation Mode on Texture Evolution of AA6061

Authors: M. Ghosh, A. Miroux, L. A. I. Kestens

Abstract:

At the molecular or micrometre scale, practically all materials are neither homogeneous nor isotropic. The concept of texture is used to identify the structural features that cause the properties of a material to be anisotropic. For metallic materials, the anisotropy of mechanical behaviour originates from the crystallographic nature of plastic deformation and is therefore controlled by the crystallographic texture. Anisotropy in mechanical properties often constitutes a disadvantage in the application of materials, as illustrated by earing phenomena during drawing. However, advantages may also be attained for other properties (e.g. optimizing magnetic behaviour along a specific direction) by controlling texture through thermo-mechanical processing. To exercise such control over the final properties, it is essential to relate texture to the material's processing route and thereby optimise performance. To date, however, few studies have reported on the evolution of texture in 6061 aluminium alloy during warm processing (from room temperature to 250 °C). In the present investigation, recrystallized 6061 aluminium alloy samples were subjected to tensile and plane strain compression (PSC) tests at room and warm temperatures, and the gradual change of texture under both deformation modes was measured and discussed. The tensile tests probe the mechanism at low strain, while PSC does so at high strain and essentially simulates rolling conditions. The Cube-dominated texture of the initial rolled and recrystallized AA6061 sheets was replaced by domination of the S and R components after PSC at room temperature; PSC at warm temperature (250 °C) showed no noticeable deviation from the room-temperature observation. Temperature likewise had no significant effect on the evolution of grain morphology during PSC.
The band contrast map revealed that, after 30% deformation, the substructure inside the grains is mainly made up of series of parallel bands. Compared with the as-received material, a decrease of the Cube component and an increase of the Goss component were noticed after tensile deformation; as with PSC, the texture did not change further after deformation at warm temperature. An n-fibre running from Goss to Cube was noticed in all three textures.

Keywords: AA 6061, deformation, temperature, tensile, PSC, texture

Procedia PDF Downloads 481
4449 Experimental Modeling of Spray and Water Sheet Formation Due to Wave Interactions with Vertical and Slant Bow-Shaped Model

Authors: Armin Bodaghkhani, Bruce Colbourne, Yuri S. Muzychka

Abstract:

The process of spray-cloud formation and the flow kinematics produced by breaking-wave impact on vertical and slanted lab-scale bow-shaped models were experimentally investigated. Bubble Image Velocimetry (BIV) and Image Processing (IP) techniques were applied to study the various types of wave-model impact. Different wave characteristics were generated in a tow tank to investigate the effects of wave characteristics, such as wave phase velocity and wave steepness, on droplet velocities and on the process of spray-cloud formation. The phase ensemble-averaged vertical velocity and turbulent intensity were computed. A high-speed camera and diffused LED backlights were utilized to capture images for post-processing. Pressure sensors and capacitive wave probes were used to measure the wave impact pressure and the free-surface profile at different locations on the model and in the wave tank, respectively. Droplet sizes and velocities were measured with the BIV and IP techniques, which trace bubbles and droplets by correlating the texture in successive images. The impact pressure and droplet size distributions were compared with several previous experimental models, and satisfactory agreement was achieved. The distributions of droplets in front of both models are presented. Because spray formation is a highly transient process, the drag coefficient for several stages of this transient displacement, for various droplet size ranges and Reynolds numbers, was calculated using the ensemble average method. The experimental results show that the slanted model produces less spray than the vertical model, and the droplets generated by wave impact with the slanted model have lower velocities than those from the vertical model.

Keywords: spray characteristics, droplet size and velocity, wave-body interactions, bubble image velocimetry, image processing

Procedia PDF Downloads 297
4448 Data Clustering in Wireless Sensor Network Implemented on Self-Organization Feature Map (SOFM) Neural Network

Authors: Krishan Kumar, Mohit Mittal, Pramod Kumar

Abstract:

Wireless sensor networks are among the most promising communication networks for monitoring remote environmental areas. In such a network, all the sensor nodes communicate with each other via radio signals. The sensor nodes have capabilities for sensing, data storage and processing, and they relay collected information through neighboring nodes towards a particular node. Data collection and processing are done by data aggregation techniques. For data aggregation in the sensor network, a clustering technique is implemented using a self-organizing feature map (SOFM) neural network: some of the sensor nodes are selected as cluster-head nodes, information from non-cluster-head nodes is aggregated at the cluster heads, and this information is then transferred to the base station (or sink node). The aim of this paper is to manage the huge amount of data with the help of the SOM neural network. Clustered data, rather than the whole of the information aggregated at the cluster-head nodes, is selected for transfer to the base station. This reduces the battery consumption involved in managing the huge volume of data, and the network lifetime is extended considerably.
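A minimal SOM sketch shows how cluster heads could be chosen from node positions: each map unit is a candidate cluster head, and every sensor node is assigned to its best-matching unit. This is an illustrative toy (random node layout, a 1-D map of 4 units, made-up training schedule), not the paper's network protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=4, epochs=60, lr0=0.5, sigma0=1.5):
    """Minimal 1-D self-organizing feature map: weights start at random
    data points and are pulled towards samples, with a Gaussian
    neighborhood on the 1-D unit grid that shrinks over time."""
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # best unit
            d = np.abs(np.arange(n_units) - bmu)             # grid distance
            h = np.exp(-(d**2) / (2 * sigma**2))             # neighborhood
            w += lr * h[:, None] * (x - w)
    return w

# Sensor positions scattered around four regions of the field.
centers = np.array([[0, 0], [0, 10], [10, 0], [10, 10]], float)
nodes = np.vstack([c + rng.normal(0, 0.5, (25, 2)) for c in centers])
weights = train_som(nodes)
assign = np.argmin(np.linalg.norm(nodes[:, None] - weights[None], axis=2),
                   axis=1)
```

In the paper's scheme, the node nearest each trained unit would act as cluster head, and only aggregated cluster data would travel to the sink.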

Keywords: artificial neural network, data clustering, self-organizing feature map, wireless sensor network

Procedia PDF Downloads 511
4447 Machine Learning Approach for Mutation Testing

Authors: Michael Stewart

Abstract:

Mutation testing is a type of software testing proposed in the 1970s in which program statements are deliberately changed to introduce simple errors, so that test cases can be validated by checking whether they detect those errors. Test cases are executed against the mutant code to determine whether at least one fails, detecting the error and building confidence that the program is correct. One major issue with this type of testing is that generating and testing all possible mutations for complex programs becomes computationally intensive. This paper applied reinforcement learning and parallel processing to mutation testing for the selection of mutation operators and test cases, reducing the computational cost of testing and improving test-suite effectiveness. Experiments were conducted using sample programs to determine how well the reinforcement-learning-based algorithm performed with one live mutation, multiple live mutations and no live mutations. The experiments, measured by mutation score, were used to refine the algorithm and improve its prediction accuracy. Performance was then evaluated on multi-processor computers. With reinforcement learning, the number of mutation operators utilized was reduced by 50–100%.
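One simple way to cast operator selection as reinforcement learning (the paper does not specify its exact algorithm, so this is a hedged sketch) is a multi-armed bandit: each mutation operator is an arm, and the reward is whether the generated mutant was killed by the test suite. The operator names and kill rates below are hypothetical.

```python
import random

random.seed(0)

class OperatorBandit:
    """Epsilon-greedy bandit over mutation operators: operators whose
    mutants are killed more often earn higher estimated value and are
    selected more, pruning low-yield operators over time."""
    def __init__(self, operators, eps=0.1):
        self.ops = list(operators)
        self.eps = eps
        self.value = {op: 0.0 for op in operators}
        self.count = {op: 0 for op in operators}

    def select(self):
        if random.random() < self.eps:
            return random.choice(self.ops)            # explore
        return max(self.ops, key=self.value.get)      # exploit

    def update(self, op, reward):
        self.count[op] += 1
        # incremental mean: standard bandit bookkeeping
        self.value[op] += (reward - self.value[op]) / self.count[op]

# Hypothetical kill rates: how often each operator's mutants get caught.
KILL_RATE = {"AOR": 0.9, "ROR": 0.6, "SDL": 0.2}
bandit = OperatorBandit(KILL_RATE)
for _ in range(2000):
    op = bandit.select()
    bandit.update(op, 1.0 if random.random() < KILL_RATE[op] else 0.0)
best = max(bandit.value, key=bandit.value.get)
```

The same bookkeeping extends to test-case selection, and the mutant executions themselves parallelize trivially across processors.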

Keywords: automated-testing, machine learning, mutation testing, parallel processing, reinforcement learning, software engineering, software testing

Procedia PDF Downloads 194
4446 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam

Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck

Abstract:

The rapid growth and complex composition of waste electrical and electronic equipment (e-waste) have made it one of the most problematic waste streams worldwide. Precise information on its magnitude at the national, regional and global levels has therefore been highlighted as a prerequisite for a proper management system. Obtaining this information is, however, very challenging, especially in developing countries, where both a formal e-waste management system and the statistical data necessary for e-waste estimation, i.e. data on the production, sale and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered EEE which 'invisibly' enters the domestic EEE market and is then used for domestic consumption. The non-registered, and in most cases illicit, nature of this flow makes it difficult or even impossible to capture in any statistical system, so the e-waste it generates is often uncounted in current e-waste estimations based on statistical market data. This study therefore focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced Input-Output Analysis model, the Sale-Stock-Lifespan model, has been integrated into the calculation procedure. The Sale-Stock-Lifespan model improves the quality of the input data for modeling (i.e. it consolidates data to create a more accurate lifespan profile and models a dynamic lifespan that accounts for changes over time), through which the quality of the e-waste estimation can be improved. To demonstrate these objectives, a case study on televisions (TVs) in Vietnam was carried out. The results show that the amount of waste TVs in Vietnam has quadrupled since 2000, an upward trend that is expected to continue: in 2035, a total of 9.51 million TVs are predicted to be discarded.
Moreover, estimation of the non-registered TV inflow shows that it may on average have contributed about 15% of the TVs sold on the Vietnamese market over the period 2002 to 2013. To address the uncertainties associated with the estimation models and input data, a sensitivity analysis was applied. The results show that both the waste estimate and the non-registered-inflow estimate depend on two parameters: the number of TVs used per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow over 2002-2013 increases by 0.95%; it decreases from 27% to 15% when the constant, unadjusted lifespan is replaced by the dynamic, adjusted lifespan. The effect of these two parameters on the amount of waste TVs generated each year is more complex and non-linear over time. To conclude, despite the remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can be further improved in the future with more knowledge and data.
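The core of a Sale-Stock-Lifespan calculation is a delay convolution: units sold in year t are discarded after a lifetime drawn from the lifespan distribution. A minimal sketch (the sales figures and lifespan distribution below are invented, and the real model uses a dynamic lifespan rather than a fixed one):

```python
import numpy as np

def waste_from_sales(sales, lifespan_pmf):
    """Lifespan (delay) model used in Input-Output Analysis of e-waste:
    waste(t) = sum_tau sales(t - tau) * p(tau), where p is the
    probability that a unit is discarded tau years after sale."""
    n = len(sales)
    waste = np.zeros(n)
    for t in range(n):
        for tau, p in enumerate(lifespan_pmf):
            if 0 <= t - tau < n:
                waste[t] += sales[t - tau] * p
    return waste

# Hypothetical TV sales (million units/year) and a 3-bin lifespan pmf.
sales = np.array([1.0, 1.2, 1.5, 1.8, 2.0])
pmf = np.array([0.0, 0.3, 0.7])    # 30% discarded after 1 yr, 70% after 2
waste = waste_from_sales(sales, pmf)
```

In the same framework, a non-registered inflow shows up as a gap between the stock implied by registered sales and the in-use stock actually observed in households, which is how the study backs out the roughly 15% share.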

Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam

Procedia PDF Downloads 242
4445 Image Fusion Based Eye Tumor Detection

Authors: Ahmed Ashit

Abstract:

Image fusion is a significant and efficient image processing method used for detecting different types of tumors. It is an effective combination technique for obtaining high-quality images that merge the anatomy and physiology of an organ, and it is central to large biomedical machines for diagnosing cancer, such as PET-CT scanners. This thesis aims to develop an image analysis system for detecting eye tumors. Several image processing methods are used to extract the tumor and then mark it on the original image. The images are first smoothed using median filtering. The background of the image is subtracted and then added back to the original, resulting in a brighter area of interest, i.e. the tumor area. The images are adjusted to increase the intensity of their pixels, which leads to clearer and brighter images. Once the images are enhanced, edges are detected using the Canny operator, resulting in a segmented image comprising only the pupil and the tumor for abnormal images, and the pupil alone for normal images with no tumor. The normal and abnormal images were collected from two sources: “Miles Research” and “Eye Cancer”. The computerized experimental results show that the developed image-fusion-based eye tumor detection system is capable of detecting the eye tumor and segmenting it for superimposition on the original image.
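The smoothing, background-subtraction and brightening steps can be sketched in NumPy on a synthetic image. As assumptions: a 3x3 median filter stands in for the paper's median filtering, the background is crudely estimated as the mean of the smoothed image, and a plain threshold replaces the Canny edge step for brevity.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (the smoothing step), built from stacked shifts."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def enhance_and_segment(img, thresh=50):
    """Sketch of the pipeline: smooth, subtract an estimated background,
    add it back to brighten the region of interest, then threshold
    (a stand-in for the Canny edge step)."""
    smooth = median3x3(img.astype(float))
    background = smooth.mean()                  # crude background estimate
    brightened = np.clip(img + (smooth - background), 0, 255)
    return brightened, brightened > (background + thresh)

img = np.full((40, 40), 60.0)
img[15:25, 15:25] = 180.0                       # synthetic bright "tumor"
brightened, mask = enhance_and_segment(img)
```

The resulting mask marks the bright region, which in the full system would be superimposed on the original eye image.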

Keywords: image fusion, eye tumor, Canny operator, superimposed

Procedia PDF Downloads 360
4444 Identification of EEG Attention Level Using Empirical Mode Decompositions for BCI Applications

Authors: Chia-Ju Peng, Shih-Jui Chen

Abstract:

This paper proposes a method to discriminate between electroencephalogram (EEG) signals recorded in different concentration states using empirical mode decomposition (EMD). A brain-computer interface (BCI), also called a brain-machine interface, is a direct communication pathway between the brain and an external device that bypasses the usual pathways of the peripheral nervous system and skeletal muscles. Attention level is a common index used as a control signal in BCI systems. EEG signals acquired from people paying attention or relaxing, respectively, are decomposed into a set of intrinsic mode functions (IMFs) by EMD. Fast Fourier transform (FFT) analysis is then applied to each IMF to obtain its frequency spectrum. By observing the power spectra of the IMFs, the proposed method identifies EEG attention level between different concentration states better than the original EEG signals do. The band power of IMF3 shows the clearest separation, especially in the β band, which corresponds to being fully awake and generally alert. The signal processing method and results of this experiment pave a new way for BCI robotic systems using an attention-level control strategy: the integrated signal processing method reveals information suited to discriminating attention from relaxation, contributing to enhanced BCI performance.
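The spectral step of the method, computing β-band power via FFT, can be sketched on synthetic signals (the EMD sifting itself is omitted for brevity; in the paper this analysis is applied per IMF). The sampling rate, band edges and test signals below are illustrative.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Relative power in [lo, hi] Hz via the FFT power spectrum --
    the per-IMF spectrum analysis step of the proposed method."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].sum() / psd.sum()

fs = 256
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)
# Attentive-like signal: strong 20 Hz (beta) component plus noise.
attentive = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(len(t))
# Relaxed-like signal: strong 10 Hz (alpha) component plus noise.
relaxed = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))
beta_attentive = band_power(attentive, fs, 13, 30)
beta_relaxed = band_power(relaxed, fs, 13, 30)
```

A higher β-band share for the attentive signal is exactly the separation the paper reports most clearly on IMF3.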

Keywords: biomedical engineering, brain computer interface, electroencephalography, rehabilitation

Procedia PDF Downloads 387
4443 Urban Flood Risk Mapping: A Review

Authors: Sherly M. A., Subhankar Karmakar, Terence Chan, Christian Rau

Abstract:

Floods are among the most frequent natural disasters, causing widespread devastation, economic damage and threats to human lives. The hydrologic impacts of climate change and the intensification of urbanization are two root causes of increased flood occurrence, and recent research trends are oriented towards understanding these aspects. Due to rapid urbanization, the populations of cities across the world have increased exponentially, leading to improperly planned developments. Climate change driven by natural and anthropogenic pressures on our environment has resulted in spatiotemporal changes in rainfall patterns. The combined effect of both aggravates the vulnerability of urban populations to floods. In this context, efficient and effective flood risk management, with flood risk mapping as its core component, is essential for the prevention and mitigation of flood disasters. Urban flood risk mapping involves zoning an urban region according to its flood risk, depicting the spatiotemporal pattern of the frequency and severity of hazards, the exposure to those hazards, and the degree of vulnerability of the population in socio-economic, environmental and infrastructural terms. Although vulnerability is a key component of risk, its assessment and mapping are often less advanced than hazard mapping and quantification. A synergistic effort from technical experts and social scientists is vital for the effectiveness of flood risk management programs. Despite an increasing volume of quality research on urban flood risk, a comprehensive multidisciplinary approach to flood risk mapping remains neglected, and as a result many input parameters and definitions of flood-risk concepts are imprecise. The objectives of this review are therefore to introduce and precisely define the relevant input parameters, concepts and terms in urban flood risk mapping, along with its methodology, current status and limitations.
The review also aims to provide thought-provoking insights to potential future researchers and flood management professionals.
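The hazard-exposure-vulnerability composition the review describes is, at its simplest, a cell-wise overlay of normalized raster layers. A toy sketch (the 3x3 rasters and the plain multiplicative combination are illustrative assumptions; real studies weight sub-indicators before combining):

```python
import numpy as np

def flood_risk_map(hazard, exposure, vulnerability):
    """Cell-wise overlay risk = hazard * exposure * vulnerability,
    with each layer min-max normalized to [0, 1] first."""
    def norm(a):
        span = a.max() - a.min()
        return (a - a.min()) / span if span > 0 else np.zeros_like(a, float)
    return norm(hazard) * norm(exposure) * norm(vulnerability)

# Toy 3x3 rasters: flood depth (m), population density, vulnerability index.
hazard = np.array([[0.0, 0.5, 2.0], [0.2, 1.0, 2.5], [0.0, 0.3, 1.5]])
exposure = np.array([[100, 800, 900], [50, 600, 950], [10, 200, 400]], float)
vulnerability = np.array([[0.2, 0.6, 0.9], [0.1, 0.5, 0.8], [0.1, 0.3, 0.7]])
risk = flood_risk_map(hazard, exposure, vulnerability)
```

The zoning step then classifies the resulting risk values into bands (e.g. low/medium/high) for the map.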

Keywords: flood risk, flood hazard, flood vulnerability, flood modeling, urban flooding, urban flood risk mapping

Procedia PDF Downloads 584