Search results for: optimal digital signal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10279

9289 Computational Tool for Surface Electromyography Analysis: An Easy Way for Non-Engineers

Authors: Fabiano Araujo Soares, Sauro Emerick Salomoni, Joao Paulo Lima da Silva, Igor Luiz Moura, Adson Ferreira da Rocha

Abstract:

This paper presents a tool developed on the MATLAB platform. It was developed to simplify the analysis of surface electromyography signals (S-EMG) in a way that is accessible to users who are not familiar with signal processing procedures. The tool receives data through commands entered in window fields and generates results as graphics and Excel tables. The underlying math of each S-EMG estimator is presented. The setup window and result graphics are presented. The tool was presented to four non-engineer users, and all of them managed to use it appropriately after a five-minute instruction period.
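
As a rough illustration of the amplitude and spectral estimators listed in the keywords (ARV, RMS, MNF, MDF), the following is a minimal Python sketch, not the authors' MATLAB tool; the signal and sampling rate are placeholders, and conduction velocity (CV) is omitted because it requires at least two channels.

```python
import numpy as np

def emg_estimators(emg, fs):
    """Compute common S-EMG amplitude and spectral estimators for one epoch."""
    arv = np.mean(np.abs(emg))                    # average rectified value
    rms = np.sqrt(np.mean(emg ** 2))              # root mean square
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs) # one-sided frequency axis
    power = np.abs(np.fft.rfft(emg)) ** 2         # power spectrum of the epoch
    mnf = np.sum(freqs * power) / np.sum(power)   # mean frequency
    cumulative = np.cumsum(power)
    mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]  # median frequency
    return arv, rms, mnf, mdf

# Hypothetical example: 1 s of surface EMG sampled at 2 kHz (placeholder noise signal)
fs = 2000
emg = np.random.randn(fs)
print(emg_estimators(emg, fs))
```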

Keywords: S-EMG estimators, electromyography, surface electromyography, ARV, RMS, MDF, MNF, CV

Procedia PDF Downloads 560
9288 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength

Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma

Abstract:

The precise measurement of muscle activity is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve the outcomes of muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by muscles. Challenges arise from potential skin-electrode contact issues due to perspiration, leading to inaccurate signals or even signal loss. Additionally, resistance and phase are significantly affected by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement techniques. These magnetometers possess high sensitivity and, unlike superconducting quantum interference devices (SQUIDs), obviate the need for a cryogenic environment. They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT). The use of magnetometers for capturing muscle magnetic field signals is unaffected by perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring the magnetic fields of organs, such as cardiac and brain magnetism. The optimal operation of these magnetometers requires an environment with an ultra-weak magnetic field. To achieve such an environment, researchers usually combine active magnetic compensation technology with passive magnetic shielding technology. Passive magnetic shielding uses a shielding device built with high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, coils that generate a reverse magnetic field to precisely compensate for the residual field are cheaper and more flexible. To attain even lower magnetic fields, compensation coils designed using the Biot-Savart law generate a counteracting magnetic field that eliminates the residual field. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target field method. The current density is obtained from the partial derivative of the stream function, which can be represented by a combination of trigonometric functions. Optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, yielding a coefficient of determination R² of 0.9349 with a p-value of 0. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to become a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
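
As a small numerical illustration of the Biot-Savart evaluation underlying the coil design described above (a sketch only, not the authors' target-field code), the loop geometry, current, and target point below are hypothetical.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(loop_points, current, target):
    """Magnetic field at `target` from a closed loop discretized into straight segments."""
    B = np.zeros(3)
    for i in range(len(loop_points) - 1):
        start, end = loop_points[i], loop_points[i + 1]
        dl = end - start                  # segment vector
        r = target - 0.5 * (start + end)  # vector from segment midpoint to target
        B += MU0 * current * np.cross(dl, r) / (4 * np.pi * np.linalg.norm(r) ** 3)
    return B

# Hypothetical circular compensation loop: radius 0.5 m, 1 A, 200 straight segments
theta = np.linspace(0, 2 * np.pi, 201)
loop = np.column_stack([0.5 * np.cos(theta), 0.5 * np.sin(theta), np.zeros_like(theta)])
print(biot_savart(loop, current=1.0, target=np.array([0.0, 0.0, 0.1])))  # field in tesla
```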

Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions.

Procedia PDF Downloads 58
9287 Additive Manufacturing Optimization Via Integrated Taguchi-Gray Relation Methodology for Oil and Gas Component Fabrication

Authors: Meshal Alsaiari

Abstract:

Fused Deposition Modeling (FDM) is one of the additive manufacturing technologies the industry is increasingly adopting because of its simplicity and low cost. The fabrication processing parameters predominantly influence the strength and mechanical properties of FDM parts. This presentation demonstrates the influence of two manufacturing parameters, infill density and printing orientation, on tensile testing evaluation indexes, analyzed to create a piping spacer suitable for oil and gas applications. The tensile specimens are made of two polymers, acrylonitrile styrene acrylate (ASA) and high-impact polystyrene (HIPS), to characterize the mechanical performance of the final product. The mechanical testing was carried out per the ASTM D638 standard, following the Type IV requirements. A Taguchi design of experiments using an L9 orthogonal array was used to evaluate the performance output and identify the optimal manufacturing factors. The experimental results demonstrate that tensile performance is most pronounced at 100% infill for both ASA and HIPS samples. However, the response to printing orientation varied: ASA is strongest at 0 degrees, while HIPS shows almost identical results at 45 and 90 degrees. An integrated Taguchi-Grey relational methodology was adopted to aggregate the responses and identify the optimal combination of fabrication factors.
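
A minimal sketch of the Taguchi signal-to-noise and grey relational calculations mentioned above, assuming a larger-the-better response and hypothetical tensile results for the nine L9 runs (not the paper's data).

```python
import numpy as np

# Hypothetical tensile strengths (MPa) for nine L9 runs; placeholders only.
tensile = np.array([28.1, 31.5, 34.9, 27.3, 33.0, 35.8, 26.4, 32.2, 36.5])

# Larger-the-better signal-to-noise ratio used in Taguchi analysis (one replicate per run)
sn_ratio = -10 * np.log10(1.0 / tensile ** 2)

# Grey relational analysis: normalize (larger-the-better), then relational coefficient
normalized = (tensile - tensile.min()) / (tensile.max() - tensile.min())
delta = np.abs(1.0 - normalized)            # deviation from the ideal sequence
zeta = 0.5                                  # distinguishing coefficient
grey_grade = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

best = int(np.argmax(grey_grade))
print(f"best L9 run: {best + 1}, S/N = {sn_ratio[best]:.2f} dB, grade = {grey_grade[best]:.3f}")
```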

Keywords: FDM, ASTM D638, tensile testing, acrylonitrile styrene acrylate

Procedia PDF Downloads 95
9286 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough set theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing redundancy in rough sets can be achieved with a reduct. Many algorithms for generating reducts have been developed, but most of them are software implementations only and therefore have many limitations. A microprocessor uses a fixed word length and consumes considerable time fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input. The output of the algorithm is a superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the first stage may be unnecessary if the core is empty. However, for systems focused on fast computation of the reduct, the first disadvantage is not a key problem, and the core calculation can be achieved with a combinational logic block, adding relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Counting the occurrences of each attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For comparison, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in data processing speed.
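
A minimal software counterpart of the two-stage idea (core from the discernibility matrix, then greedy enrichment), assuming a toy decision table; this is a sketch, not the authors' FPGA or C implementation, and its greedy step counts attribute frequency over the still-undiscerned pairs.

```python
from itertools import combinations
from collections import Counter

# Toy decision table: c1..c3 are condition attributes, 'd' is the decision attribute.
table = [
    {"c1": 0, "c2": 1, "c3": 0, "d": 0},
    {"c1": 0, "c2": 0, "c3": 1, "d": 1},
    {"c1": 1, "c2": 1, "c3": 0, "d": 1},
    {"c1": 1, "c2": 0, "c3": 1, "d": 0},
]
conditions = ["c1", "c2", "c3"]

def discernibility(table, conditions):
    """For each pair of objects with different decisions, the attributes that discern them."""
    return [{c for c in conditions if a[c] != b[c]}
            for a, b in combinations(table, 2) if a["d"] != b["d"]]

matrix = discernibility(table, conditions)

# Stage 1: the core consists of attributes appearing alone in a discernibility-matrix entry.
core = {next(iter(e)) for e in matrix if len(e) == 1}

# Stage 2: greedily enrich the core with the most frequent attributes until all pairs are discerned.
superreduct = set(core)
remaining = [e for e in matrix if not (e & superreduct)]
while remaining:
    counts = Counter(attr for e in remaining for attr in e)
    superreduct.add(counts.most_common(1)[0][0])
    remaining = [e for e in matrix if not (e & superreduct)]

print("core:", core, "superreduct:", superreduct)
```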

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 220
9285 Two-Warehouse Inventory Model for Deteriorating Items with Inventory-Level-Dependent Demand under Two Dispatching Policies

Authors: Lei Zhao, Zhe Yuan, Wenyue Kuang

Abstract:

This paper studies two-warehouse inventory models for a deteriorating item, considering that demand is influenced by the inventory level. The problem focuses on the optimal order policy and the optimal order cycle with inventory-level-dependent demand in a two-warehouse system for retailers. It considers different deterioration rates and inventory holding costs in the owned warehouse (OW) and the rented warehouse (RW), as well as transportation costs, allowed shortages, and partial backlogging. Two inventory models are formulated, a last-in-first-out (LIFO) model and a first-in-first-out (FIFO) model, based on the corresponding dispatching policies, and a comparative analysis of the two models is made. The study finds that the FIFO policy is more in line with realistic operating conditions. In particular, when the inventory holding cost of the OW is high and the deterioration rates of the OW and RW are either equal or very different, the FIFO policy has better applicability. This paper also considers the differences between the effects of warehouse and shelf inventory levels on demand, builds a retailer inventory decision model, and studies the factors determining the optimal order quantity, the optimal order cycle, and the average inventory cost per unit time. To minimize the average total cost, the optimal dispatching policies are provided for retailers' decisions.

Keywords: FIFO model, inventory-level-dependent, LIFO model, two-warehouse inventory

Procedia PDF Downloads 279
9284 Assessment of Seeding and Weeding Field Robot Performance

Authors: Victor Bloch, Eerikki Kaila, Reetta Palva

Abstract:

Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the advantages and limitations of the robots, as well as methods for using them optimally, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of growth. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant location without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting inside the rows between the plants, and it moved at a maximum speed of 0.9 km/h. The robot's performance was assessed by image processing. Field images were collected by a 27-megapixel action camera installed on the robot at a height of 2 m and by a drone with a 16-megapixel camera flying at a height of 4 m. To detect plants and weeds, a YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside the rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. Information about the robot's performance is highly important for the application of robotics to field tasks. With the help of the developed method, the performance can be assessed several times during growth, according to the robotic weeding frequency. Farmers using it can know the field condition and the efficiency of the robotic treatment over the whole field. Farmers and researchers can then develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. Robot producers can obtain quantitative information from an actual working environment and improve their robots accordingly.
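
A minimal sketch of the kind of detection-and-density calculation described above, assuming the Ultralytics YOLO package; the weights file, image path, class mapping, and patch area are hypothetical, and this is not the authors' pipeline.

```python
from ultralytics import YOLO

# Hypothetical fine-tuned weights and image; assumed class mapping: 0 = crop plant, 1 = weed.
model = YOLO("weed_crop_yolo.pt")
result = model("field_patch.jpg")[0]

classes = result.boxes.cls.tolist()
n_plants = classes.count(0.0)
n_weeds = classes.count(1.0)

patch_area_m2 = 1.5  # assumed ground area covered by this image patch
print(f"plants: {n_plants}, weeds: {n_weeds}, "
      f"weed density: {n_weeds / patch_area_m2:.1f} weeds/m^2")
```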

Keywords: agricultural robot, field robot, plant detection, robot performance

Procedia PDF Downloads 87
9283 Unpacking Chilean Preservice Teachers’ Beliefs on Practicum Experiences through Digital Stories

Authors: Claudio Díaz, Mabel Ortiz

Abstract:

An EFL teacher education programme in Chile takes five years to train a future teacher of English. Preservice teachers are prepared to learn an advanced level of English and to teach the language from 5th to 12th grade in the Chilean educational system. In the context of their first EFL Methodology course in year four, preservice teachers have to create a five-minute digital story that starts from a critical incident they have experienced as teachers-to-be during their observations or interventions in schools. A critical incident can be defined as a happening, a specific incident or event, either observed by them or involving them. The happening sparks their thinking and may make them subsequently think differently about the particular event. When they create their digital stories, preservice teachers put technology, teaching practice, and theory together to narrate a story that is complemented by still images, moving images, text, sound effects, and music. The story should be told as a personal narrative which explains the critical incident. This presentation will focus on the creation process of 50 Chilean preservice teachers' digital stories, highlighting the critical incidents from which they started their stories. It will also unpack preservice teachers' beliefs and reflections when approaching their teaching practices in schools. These beliefs will be coded and categorized through content analysis to evidence preservice teachers' most deeply rooted conceptions about English teaching and learning in Chilean schools. The findings seem to indicate that preservice teachers' beliefs are strongly mediated by contextual and affective factors.

Keywords: beliefs, digital stories, preservice teachers, practicum

Procedia PDF Downloads 442
9282 Analysis and Improvement of Efficiency for Food Processing Assembly Lines

Authors: Mehmet Savsar

Abstract:

Several factors affect the productivity of Food Processing Assembly Lines (FPAL). Engineers and line managers often do not recognize some of these factors and underutilize their production/assembly lines. In this paper, a particular food processing assembly line is studied in detail, and procedures are presented to illustrate how the productivity and efficiency of such lines can be increased. The assembly line considered produces ten different types of freshly prepared salads on the same line, which is called a mixed-model assembly line. Problems causing delays and inefficiencies on the line are identified. Line balancing and related tools are used to increase line efficiency and minimize balance delays. The procedure and the approach presented in this paper can be useful for operations managers and industrial engineers dealing with similar assembly lines in the food processing industry.
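
A minimal sketch of the line-efficiency and balance-delay measures used in line balancing; the station workloads below are hypothetical, not data from the studied salad line.

```python
# Hypothetical station workloads (seconds) for one salad model on the mixed-model line.
station_times = [42, 38, 45, 30, 41]

cycle_time = max(station_times)          # the bottleneck station sets the line's pace
total_work = sum(station_times)
n_stations = len(station_times)

efficiency = total_work / (n_stations * cycle_time)
balance_delay = 1.0 - efficiency         # idle fraction to be minimized by rebalancing

print(f"cycle time: {cycle_time}s, line efficiency: {efficiency:.1%}, "
      f"balance delay: {balance_delay:.1%}")
```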

Keywords: assembly lines, line balancing, production efficiency, bottleneck

Procedia PDF Downloads 389
9281 Epileptic Seizure Prediction Focusing on Relative Change in Consecutive Segments of EEG Signal

Authors: Mohammad Zavid Parvez, Manoranjan Paul

Abstract:

Epilepsy is a common neurological disorder characterized by sudden recurrent seizures. Electroencephalography (EEG) is widely used to diagnose possible epileptic seizures. Many research works have been devoted to predicting epileptic seizures by analyzing the EEG signal. Seizure prediction by analyzing EEG signals is a challenging task due to variations in the brain signals of different patients. In this paper, we propose a new approach for feature extraction based on phase correlation in EEG signals. In phase correlation, we calculate the relative change between two consecutive segments of an EEG signal and then combine the changes with neighboring signals to extract features. These features are then used to classify preictal/ictal and interictal EEG signals for seizure prediction. Experimental results show that the proposed method achieves a good prediction rate with greater consistency across different brain locations on the benchmark data set compared to existing state-of-the-art methods.
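
A minimal sketch of one way to measure the relative change between consecutive segments via phase correlation (normalized cross-power spectrum); the segment length, signal, and use of the correlation peak as a feature are assumptions for illustration, not the authors' exact feature definition.

```python
import numpy as np

def phase_correlation(seg_a, seg_b):
    """Similarity between two equal-length EEG segments from the phase-only cross spectrum."""
    fa, fb = np.fft.rfft(seg_a), np.fft.rfft(seg_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep only phase information
    correlation = np.fft.irfft(cross, n=len(seg_a))
    return np.max(correlation)              # peak height as a relative-change feature

def segment_features(eeg, seg_len):
    """Peak phase correlation between every pair of consecutive segments."""
    segments = [eeg[i:i + seg_len] for i in range(0, len(eeg) - seg_len + 1, seg_len)]
    return [phase_correlation(a, b) for a, b in zip(segments, segments[1:])]

# Hypothetical single-channel EEG: 10 s at 256 Hz, 1-s segments (placeholder noise signal)
fs = 256
eeg = np.random.randn(10 * fs)
features = segment_features(eeg, fs)
print(len(features), "features, e.g.", features[:3])
```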

Keywords: EEG, epilepsy, phase correlation, seizure

Procedia PDF Downloads 309
9280 Optimal Maintenance and Improvement Policies in Water Distribution System: Markov Decision Process Approach

Authors: Jong Woo Kim, Go Bong Choi, Sang Hwan Son, Dae Shik Kim, Jung Chul Suh, Jong Min Lee

Abstract:

A Markov Decision Process (MDP) based methodology is implemented in order to establish the optimal schedule which minimizes the cost. The MDP problem is formulated using information about the current state of the pipe, the improvement cost, the failure cost, and a pipe deterioration model. The objective function and the detailed algorithm of dynamic programming (DP) are modified due to the difficulty of implementing conventional DP approaches. The optimal schedule derived from the suggested model is compared to several policies via Monte Carlo simulation. The validity of the solution and the improvement in computational time are demonstrated.
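
A textbook value-iteration sketch of a pipe maintenance MDP, not the authors' modified DP formulation; the condition states, transition probabilities, and costs below are invented for illustration.

```python
import numpy as np

# Hypothetical pipe-condition MDP: states 0 (new) .. 4 (failed); actions: do nothing or replace.
n_states, gamma = 5, 0.95
deteriorate = np.array([                    # assumed deterioration transitions under "do nothing"
    [0.80, 0.15, 0.05, 0.00, 0.00],
    [0.00, 0.75, 0.15, 0.08, 0.02],
    [0.00, 0.00, 0.70, 0.20, 0.10],
    [0.00, 0.00, 0.00, 0.60, 0.40],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
replace = np.tile(np.eye(n_states)[0], (n_states, 1))   # replacement returns the pipe to state 0
cost = np.array([
    [0, 1, 3, 8, 50],        # do nothing: the failure state carries a large failure cost
    [10, 10, 10, 10, 10],    # replacement (improvement) cost
])

V = np.zeros(n_states)
for _ in range(500):                        # value iteration
    q_nothing = cost[0] + gamma * deteriorate @ V
    q_replace = cost[1] + gamma * replace @ V
    V = np.minimum(q_nothing, q_replace)

policy = np.where(q_nothing <= q_replace, "do nothing", "replace")
print(dict(zip(range(n_states), policy)))
```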

Keywords: Markov decision processes, dynamic programming, Monte Carlo simulation, periodic replacement, Weibull distribution

Procedia PDF Downloads 424
9279 Data-Focused Digital Transformation for Smart Net-Zero Cities: A Systems Thinking Approach

Authors: Farzaneh Mohammadi Jouzdani, Vahid Javidroozi, Monica Mateo Garcia, Hanifa Shah

Abstract:

The emergence of smart net-zero cities in recent years has attracted significant attention and interest from communities and scholars worldwide as a potential solution to the critical requirement for urban sustainability. This research-in-progress paper investigates the development of smart net-zero cities in order to propose a digital transformation roadmap for them, with a primary focus on data. Employing systems thinking as an underpinning theory, the study advocates the necessity of a holistic strategy for understanding the complex interdependencies and interrelationships that characterise urban systems. The proposed methodology involves an in-depth investigation of current data-driven approaches in the smart net-zero city, followed by predictive analysis methods to evaluate the holistic impact of these approaches on moving toward a smart net-zero city. The study is expected to yield a systemic intervention followed by a data-focused and systemic digital transformation roadmap for smart net-zero cities, contributing to a more holistic understanding of urban sustainability.

Keywords: smart city, net-zero city, digital transformation, systems thinking, data integration, data-driven approach

Procedia PDF Downloads 25
9278 Ultrasensitive Detection and Discrimination of Cancer-Related Single Nucleotide Polymorphisms Using Poly-Enzyme Polymer Bead Amplification

Authors: Lorico D. S. Lapitan Jr., Yihan Xu, Yuan Guo, Dejian Zhou

Abstract:

The ability to detect specific genes with ultrahigh sensitivity and to discriminate single nucleotide polymorphisms is important for clinical diagnosis and biomedical research. Herein, we report the development of a new ultrasensitive approach for label-free DNA detection using magnetic nanoparticle (MNP) assisted rapid target capture/separation in combination with signal amplification using poly-enzyme-tagged polymer nanobeads. The sensor uses an MNP-linked capture DNA and a biotin-modified signal DNA to sandwich-bind the target, followed by ligation to provide high single-nucleotide polymorphism discrimination. Only the presence of a perfectly matched target DNA yields a covalent linkage between the capture and signal DNAs for subsequent conjugation of a neutravidin-modified horseradish peroxidase (HRP) enzyme through the strong biotin-neutravidin interaction. This converts each captured DNA target into an HRP, which can convert millions of copies of a non-fluorescent substrate (amplex red) into a highly fluorescent product (resorufin) for great signal amplification. The use of polymer nanobeads, each tagged with thousands of copies of HRP, as the signal amplifier greatly improves the signal amplification power, leading to greatly improved sensitivity. We show that our biosensing approach can specifically detect an unlabeled DNA target down to 10 aM with a wide dynamic range of five orders of magnitude (from 0.001 fM to 100.0 fM). Furthermore, our approach discriminates well between a perfectly matched gene and its cancer-related single-base mismatch targets (SNPs): it can positively detect the perfectly matched DNA target even in the presence of a 100-fold excess of co-existing SNPs. This sensing approach also works robustly in clinically relevant media (e.g., 10% human serum) and gives almost the same SNP discrimination ratio as in clean buffers. Therefore, this ultrasensitive SNP biosensor appears to be well suited for potential diagnostic applications in genetic diseases.

Keywords: DNA detection, polymer beads, signal amplification, single nucleotide polymorphisms

Procedia PDF Downloads 249
9277 Solving Optimal Control of Semilinear Elliptic Variational Inequalities Obstacle Problems using Smoothing Functions

Authors: El Hassene Osmani, Mounir Haddou, Naceurdine Bensalem

Abstract:

In this paper, we investigate optimal control problems governed by semilinear elliptic variational inequalities involving constraints on the state and, more precisely, the obstacle problem. We present a relaxed formulation of the problem using smoothing functions. Since we adopt a numerical point of view, we first relax the feasible domain of the problem; then, using both mathematical programming methods and penalization methods, we obtain optimality conditions with smooth Lagrange multipliers. Some numerical experiments using the IPOPT algorithm (Interior Point Optimizer) are presented to verify the efficiency of our approach.
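
To illustrate the general idea of smoothing a complementarity (obstacle-type) condition so that a smooth solver such as IPOPT can handle it, here is a minimal sketch using the smoothed Fischer-Burmeister function; this is a standard example of a smoothing function, not necessarily the specific family used by the authors.

```python
import numpy as np

def fb_smooth(a, b, eps):
    """Smoothed Fischer-Burmeister function: its zero set approximates a >= 0, b >= 0, a*b = 0."""
    return a + b - np.sqrt(a ** 2 + b ** 2 + 2 * eps ** 2)

# The nonsmooth complementarity condition min(a, b) = 0 is replaced by fb_smooth(a, b, eps) = 0,
# which is differentiable everywhere and can be passed to a smooth NLP solver such as IPOPT.
for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, fb_smooth(2.0, 0.0, eps))   # tends to 0 as the smoothing parameter vanishes
```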

Keywords: complementarity problem, IPOPT, Lagrange multipliers, mathematical programming, optimal control, smoothing methods, variational inequalities

Procedia PDF Downloads 174
9276 Mastering Digital Transformation with the Strategy Tandem Innovation Inside-Out/Outside-In: An Approach to Drive New Business Models, Services and Products in the Digital Age

Authors: S. N. Susenburger, D. Boecker

Abstract:

In the age of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), where digital transformation is challenging long-standing traditional hardware and manufacturing companies, innovation needs a different methodology, strategy, mindset, and culture. What used to be a mindset of scaling by quantity is now shifting toward orchestrating ecosystems, platform business models, and service bundles. While large corporations are trying to mimic the nimbleness and versatile mindset of startups at the core of their digital strategies, they are at the frontier of facing one of the largest organizational and cultural changes in history. This paper elaborates on how a manufacturing giant transformed its corporate Information Technology (IT) to enable digital and Internet of Things (IoT) business while establishing the mindset and approaches of the Innovation Inside-Out/Outside-In strategy. It gives insights into the core elements of an innovation culture and the tactics and methodologies leveraged to support the cultural shift and transformation into an IoT company, and it describes how the persona 'Connected Engineer' thrives in the digital innovation environment. Further, it explores how tapping domain-focused ecosystems in vibrant, innovative cities can be used as part of the strategy to facilitate partner co-innovation. Findings from several use cases, observations, and surveys led to conclusions about the strategy tandem of Innovation Inside-Out/Outside-In. The findings indicate that the phases and maturity level at which the Innovation Inside-Out/Outside-In strategy is activated are crucial: cultural aspects of the business and the regional ecosystem need to be considered, as well as the cultural readiness of management and active contributors. The 'not invented here' syndrome is a barrier in large corporations that needs to be addressed and managed in order to successfully drive partnerships, embrace co-innovation, and shift the mindset away from physical products toward new business models, services, and IoT platforms. This paper elaborates on various methodologies and approaches tested in different countries and cultures, including the U.S., Brazil, Mexico, and Germany.

Keywords: innovation management, innovation culture, innovation methodologies, digital transformation

Procedia PDF Downloads 149
9275 Performance of an Improved Fluidized System for Processing Green Tea

Authors: Nickson Kipng’etich Lang’at, Thomas Thoruwa, John Abraham, John Wanyoko

Abstract:

Green tea is made from the top two leaves and buds of a shrub, Camellia sinensis, of the family Theaceae and the order Theales. The green tea leaves are picked and immediately sent to be dried or steamed to prevent fermentation. The fluid bed drying technique is a common method for drying green tea because of its ease of design and construction and its ability to fluidize fine tea particles. Major problems with this method are a significant loss of the chemical content and green appearance of the leaf, retention of high moisture content in the leaves, and bed channeling and defluidization. The energy associated with the drying technology has been shown to be a vital factor in determining the quality of green tea. As part of the implementation, a prototype dryer was built that facilitated a sequence of operations involving steaming, cooling, pre-drying, and final drying. The major findings of the project concern the quality characteristics of the tea leaves and the energy consumption during processing. The optimal design achieved a moisture content of 4.2 ± 0.84%. At the optimum drying temperature of 100 °C, the specific energy consumption was 1697.8 kJ·kg⁻¹ and the evaporation rate was 4.272 × 10⁻⁴ kg·m⁻²·s⁻¹. The energy consumption of a fluidized system can be further reduced by focusing on energy-saving designs.
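
As a quick arithmetic sketch of how these two drying metrics are defined, the input quantities below are hypothetical and were chosen only so that they reproduce the reported figures; they are not the actual measured values.

```python
# Back-of-envelope check of the reported drying metrics, with assumed (hypothetical) inputs.
energy_input_kj = 2037.4      # assumed total energy supplied to the dryer (kJ)
water_evaporated_kg = 1.2     # assumed mass of water removed from the leaves (kg)
bed_area_m2 = 0.5             # assumed drying bed area (m^2)
drying_time_s = 5616          # assumed drying time (s)

specific_energy = energy_input_kj / water_evaporated_kg                  # kJ per kg of water
evaporation_rate = water_evaporated_kg / (bed_area_m2 * drying_time_s)   # kg m^-2 s^-1

print(f"SEC = {specific_energy:.1f} kJ/kg, evaporation rate = {evaporation_rate:.3e} kg/m^2/s")
```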

Keywords: evaporation rate, fluid bed dryer, maceration, specific energy consumption

Procedia PDF Downloads 314
9274 A Conceptual Framework for Integrating Musical Instrument Digital Interface Composition in the Music Classroom

Authors: Aditi Kashi

Abstract:

While educational technologies have taken great strides, especially in Musical Instrument Digital Interface (MIDI) composition, teachers across the world are still adjusting their curricula to incorporate such technology. While using MIDI in the classroom has become more common, limited class time and a strong focus on performance have made composition a lesser priority. The balance between music theory, performance time, and composition learning is delicate and difficult for many music educators to maintain, which makes including MIDI composition in the classroom difficult. To address this issue, this paper outlines a general conceptual framework, centered around a key element of music theory, for integrating MIDI composition into the music classroom, not only to introduce students to digital composition but also to enhance their understanding of music theory and its applicability.

Keywords: educational framework, education technology, MIDI, music education

Procedia PDF Downloads 87
9273 Digital Games as a Means of Cultural Communication and Heritage Tourism: A Study on Black Myth-Wukong

Authors: Kung Wong Lau

Abstract:

On August 20, 2024, the global launch of the Wukong game generated significant enthusiasm within the gaming community. The game provides gamers with an immersive experience and a set of digital twins of real locations that effectively bridge cultural heritage and contemporary gaming, thereby facilitating heritage tourism to some extent. Travel websites highlight locations featured in the Wukong game, encouraging visitors to explore these sites. However, this area remains underexplored in cultural and communication studies, both locally and internationally. This pilot study explores the potential of in-game cultural communication in Wukong for promoting Chinese culture and heritage tourism. An exploratory research methodology was employed, using a focus group of non-Chinese active gamers on an online discussion platform. The findings suggest that the use of digital twins as a means of facilitating cultural communication and heritage tourism for non-Chinese gamers shows promise. While this pilot study cannot generalize its findings due to the limited number of participants, the insights gained could inform further discussion of the influential factors of cultural communication through gaming.

Keywords: digital game, game culture, heritage tourism, cultural communication, non-Chinese gamers

Procedia PDF Downloads 24
9272 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and Gaussian Mixture Models (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of the feature extraction yields a better and more robust automatic speaker identification system. An investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM, whose underlying parameters are estimated in the EM step, improved the convergence rate and the system's performance. The system also uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results obtained on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
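
A minimal sketch of the MFCC-plus-GMM portion of such a pipeline, assuming the librosa and scikit-learn packages; file names are placeholders, k-means initialization stands in for the LBG codebook step, and VAD, VQ scoring, and the confidence index are omitted.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Load a recording and return its MFCC frames (frames x coefficients)."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# Hypothetical enrollment files per speaker (paths are placeholders).
enroll = {"alice": ["alice_01.wav", "alice_02.wav"], "bob": ["bob_01.wav", "bob_02.wav"]}

models = {}
for speaker, files in enroll.items():
    feats = np.vstack([mfcc_features(f) for f in files])
    # EM-trained diagonal-covariance GMM per speaker
    models[speaker] = GaussianMixture(n_components=16, covariance_type="diag").fit(feats)

test = mfcc_features("unknown.wav")
scores = {s: m.score(test) for s, m in models.items()}   # average log-likelihood per frame
print("identified as:", max(scores, key=scores.get))
```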

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 310
9271 The Use of Ontology Framework for Automation Digital Forensics Investigation

Authors: Ahmad Luthfi

Abstract:

One of the main goals of a computer forensic analyst is to determine the cause and effect of the acquisition of digital evidence in order to obtain information relevant to the case being handled. To obtain fast and accurate results, this paper discusses the approach known as the ontology framework. This model uses a structured hierarchy of layers that creates connectivity between the variants and the search steps of an investigation, so that computer forensic analysis activities can be carried out automatically. Two main layers are used, namely analysis tools and the operating system. Using the concept of ontology, the second layer is automatically designed to help the investigator perform the acquisition of digital evidence. The automation methodology of this research utilizes forward chaining, where the system searches through investigative steps that are atomically structured in accordance with the rules of the ontology.
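
A minimal sketch of forward chaining over a rule base, with hypothetical investigation rules and facts standing in for the ontology; this illustrates the inference mechanism only, not the authors' ontology layers.

```python
# Minimal forward-chaining sketch over hypothetical investigation rules (not the authors' ontology).
rules = [
    ({"image_acquired"}, "filesystem_parsed"),
    ({"filesystem_parsed"}, "browser_artifacts_extracted"),
    ({"filesystem_parsed", "keyword_list_loaded"}, "keyword_hits_reported"),
]

facts = {"image_acquired", "keyword_list_loaded"}   # initial facts about the case

changed = True
while changed:                                      # fire rules until no new facts are derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```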

Keywords: ontology, framework, automation, forensics

Procedia PDF Downloads 342
9270 A New Approach to the Digital Implementation of Analog Controllers for a Power System Control

Authors: G. Shabib, Esam H. Abd-Elhameed, G. Magdy

Abstract:

In this paper, a comparison of discrete-time PID and PSS controllers is presented through the small-signal stability of a power system comprising one machine connected to an infinite bus. This comparison is achieved by using a new discretization approach that converts the s-domain model of the analog controllers to a z-domain model in order to enhance the damping of a single-machine power system. The new method utilizes the Plant Input Mapping (PIM) algorithm. The proposed algorithm is stable for any sampling rate and takes the closed-loop characteristics into consideration. In contrast, traditional discretization methods such as Tustin's method produce satisfactory results only when the sampling period is sufficiently small.
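
For reference, here is a minimal SciPy sketch of the conventional Tustin (bilinear) discretization that the paper uses as the baseline; the PID gains and filter time constant are assumed, and the PIM step itself is not shown because it also requires the plant model.

```python
import numpy as np
from scipy import signal

# Continuous-time PID controller C(s) = Kp + Ki/s + Kd*s/(tau*s + 1), with assumed gains.
Kp, Ki, Kd, tau = 1.0, 0.5, 0.1, 0.01
num = [Kd + Kp * tau, Kp + Ki * tau, Ki]
den = [tau, 1.0, 0.0]

for T in (0.01, 0.1, 0.5):                  # sampling periods in seconds
    numd, dend, _ = signal.cont2discrete((num, den), T, method="bilinear")  # Tustin's method
    print(f"T = {T} s -> z-domain num {np.round(numd.ravel(), 4)}, den {np.round(dend, 4)}")
```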

Keywords: power system stabilizer (PSS), proportional-integral-derivative (PID), plant input mapping (PIM)

Procedia PDF Downloads 507
9269 Optimal Production and Maintenance Policy for a Partially Observable Production System with Stochastic Demand

Authors: Leila Jafari, Viliam Makis

Abstract:

In this paper, the joint optimization of the economic manufacturing quantity (EMQ), safety stock level, and condition-based maintenance (CBM) is presented for a partially observable, deteriorating system subject to random failure. The demand is stochastic and it is described by a Poisson process. The stochastic model is developed and the optimization problem is formulated in the semi-Markov decision process framework. A modification of the policy iteration algorithm is developed to find the optimal policy. A numerical example is presented to compare the optimal policy with the policy considering zero safety stock.

Keywords: condition-based maintenance, economic manufacturing quantity, safety stock, stochastic demand

Procedia PDF Downloads 465
9268 Reconfigurable Device for 3D Visualization of Three Dimensional Surfaces

Authors: Robson da C. Santos, Carlos Henrique de A. S. P. Coutinho, Lucas Moreira Dias, Gerson Gomes Cunha

Abstract:

The article describes the development of an augmented reality 3D display based on the control of servo motors and the projection of images onto the model with the aid of a video projector. Augmented reality is a branch that explores multiple approaches to enrich the real-world view by displaying additional information along with the real scene. The article presents the broad use of electrical, electronic, mechanical, and industrial automation for geospatial visualization, with applications to mathematical models through the visualization of functions, 3D surface graphics, and volumetric rendering that are currently shown as 2D layers. One application is as a 3D display for the representation and visualization of Digital Terrain Models (DTM) and Digital Surface Models (DSM), for example in the identification of canyons in the marine area of the Campos Basin, Rio de Janeiro, Brazil. It can likewise visualize regions subject to landslides, such as the Serra do Mar near Angra dos Reis and the serrana (mountain) cities, both in the state of Rio de Janeiro. In this way, loss of human life and leakage of oil from pipelines buried in these regions may be anticipated in advance. The physical design consists of a table with a 9 x 16 matrix of servo motors, totaling 144 servos; a mesh is placed over the servo motors for visualization of the models projected by the video projector. Each model, after image pre-processing, is sent to a server to be converted and displayed by software developed in the C# programming language.
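
A minimal sketch of the elevation-to-actuator mapping such a 9 x 16 servo table implies; the elevation patch, angle range, and normalization are assumptions for illustration (the actual system is implemented in C#), not the authors' conversion code.

```python
import numpy as np

# Hypothetical 9 x 16 elevation patch (metres) sampled from a DTM; values are placeholders.
dtm_patch = np.random.uniform(0.0, 120.0, size=(9, 16))

def elevation_to_angles(patch, min_angle=0.0, max_angle=180.0):
    """Normalize elevations and map them to servo angles for the 9 x 16 actuator table."""
    norm = (patch - patch.min()) / (patch.max() - patch.min() + 1e-9)
    return min_angle + norm * (max_angle - min_angle)

angles = elevation_to_angles(dtm_patch)
print(angles.shape, angles.min(), angles.max())   # one angle command per servo (144 values)
```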

Keywords: visualization, 3D models, servo motors, C# programming language

Procedia PDF Downloads 342
9267 Design of Wide-Range Variable Fractional-Delay FIR Digital Filters

Authors: Jong-Jy Shyu, Soo-Chang Pei, Yun-Da Huang

Abstract:

In this paper, the design of wide-range variable fractional-delay (WR-VFD) finite impulse response (FIR) digital filters is proposed. In contrast to the conventional VFD filter, which is designed so that its delay is adjustable within one sample unit, the proposed VFD FIR filter is designed so that its delay is tunable over a wider range. From the traces of the coefficients of the fractional-delay FIR filter, it is found that the conventional method of polynomial substitution for the filter coefficients no longer satisfies the design requirement, and circuits that perform the sinc function (sinc converters) are added to overcome this problem. The least-squares method is adopted to design the WR-VFD FIR filter, and several examples are presented to demonstrate the effectiveness of the proposed methods.
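
To make the fractional-delay idea concrete, here is a windowed-sinc sketch of an FIR filter approximating a delay larger than one sample; this is a simple baseline for illustration, not the paper's least-squares WR-VFD design or its sinc-converter structure.

```python
import numpy as np

def frac_delay_fir(delay, n_taps=31):
    """Windowed-sinc FIR approximating a (possibly fractional, possibly > 1 sample) delay."""
    n = np.arange(n_taps)
    center = (n_taps - 1) / 2.0
    h = np.sinc(n - center - delay)            # ideal fractional-delay impulse response
    h *= np.hamming(n_taps)                    # window to control truncation ripple
    return h / np.sum(h)                       # normalize DC gain to 1

# Delay a test sinusoid by 2.37 samples (a delay outside the conventional one-unit range)
x = np.sin(2 * np.pi * 0.05 * np.arange(200))
y = np.convolve(x, frac_delay_fir(2.37), mode="same")
print(y[:5])
```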

Keywords: digital filter, FIR filter, variable fractional-delay (VFD) filter, least-squares approximation

Procedia PDF Downloads 492
9266 Molecular Communication Noise Effect Analysis of Diffusion-Based Channel for Considering Minimum-Shift Keying and Molecular Shift Keying Modulations

Authors: A. Azari, S. S. K. Seyyedi

Abstract:

One of the unaddressed and open challenges in nano-networking is the characterization of noise. Previous analyses, however, have concentrated on an end-to-end communication model with no separate modeling of the propagation channel and the noise. By considering separate signal propagation and noise models, the design and implementation of an optimum receiver become much easier. In this paper, we justify the consideration of a separate additive Gaussian noise model for a nano-communication system based on the molecular communication channel, applicable to MSK and MOSK modulation schemes. The presented noise analysis is based on the Brownian motion process and advection molecular statistics, where the received random signal has a probability density function whose mean is equal to the mean number of received molecules. Finally, justification that the received signal magnitude is uncorrelated with the additive non-stationary white noise is provided.
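
A minimal Monte Carlo sketch of a diffusion-with-drift (advection) channel whose received molecule count fluctuates around its mean, which is the quantity the additive Gaussian noise model is built on; all physical parameters below are hypothetical, and this is not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D diffusion-with-drift channel: molecules released at x = 0 are counted
# as received once they cross the receiver plane at x = d within the observation window.
D, v, d = 1e-9, 1e-6, 1e-5            # diffusion coefficient (m^2/s), drift (m/s), distance (m)
dt, steps = 1e-2, 1000                # 10 s observation window
n_trials, n_molecules = 100, 500

x = np.zeros((n_trials, n_molecules))
absorbed = np.zeros_like(x, dtype=bool)
for _ in range(steps):
    x += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(x.shape)
    absorbed |= x >= d                # flag molecules that have reached the receiver

received = absorbed.sum(axis=1)       # molecules received per trial
# The Gaussian noise model approximates this count by its mean plus zero-mean fluctuations.
print(f"mean received: {received.mean():.1f}, std: {received.std():.1f}")
```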

Keywords: molecular, noise, diffusion, channel

Procedia PDF Downloads 280
9265 Police Violence, Activism, and the Changing Rural United States: A Digital History and Mapping Narrative

Authors: Joel Zapata

Abstract:

Chicana/o Activism in the Southern Plains Through Time and Space, a digital history project available at PlainsMovement.com, helps reveal an understudied portion of the Chicana/o Civil Rights Movement: the way it unfolded on the Southern Plains. The project centers on an approachable interactive map and timeline along with a curated collection of materials. The project therefore provides a digital museum experience that has not emerged within the region's museums. That is, this digital history project takes scholarly research to the wider public, making it also a publicly facing history project. In this way, the project adds to both scholarly and socially significant conversations, showing that the region was home to a burgeoning wing of the Chicana/o Movement and that instances of police brutality largely spurred this wing of the social justice movement. Moreover, the curated collection of materials demonstrates that police brutality united the plains' Mexican population across political ideology, a largely overlooked aspect within the study of Mexican American civil rights movements. Such a finding can be of use today, since contemporary Latina/o social justice organizations generally ignore policing issues even amid a rise in national awareness regarding police abuse. By making history accessible to Mexican-origin and Latina/o communities, these same communities may in turn use the knowledge gained from historical research toward the betterment of their social positions, the foundational goal of Chicana/o history and the related field of Chicana/o Studies. Ultimately, this digital history project is intended to draw visitors to further explore the Chicana/o Civil Rights Movement within and beyond the plains.

Keywords: Chicana/o Movement, digital history, police brutality, newspapers, protests, student activism

Procedia PDF Downloads 123
9264 Measuring Digital Literacy in the Chilean Workforce

Authors: Carolina Busco, Daniela Osses

Abstract:

The development of digital literacy has become a fundamental element that allows for citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. There are no methodological instruments available in Chile to measure the workforce's digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To achieve the research objective of developing a digital literacy model of indicators and a research instrument for this purpose, along with an exploratory analysis of the data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied with six experts, incorporating their observations into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a four-point Likert scale ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 Likert-scale items. The KMO statistic showed a value of 0.626, indicating a medium level of correlation, Bartlett's test yielded a significance value of less than 0.05, and Cronbach's alpha was 0.618. Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students, middle-class university students, Ph.D. professors, low-income working women, elderly individuals, and a group of rural workers. The digital divide and its social and economic correlations are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately acts as the first filter on the development of digital skills. Therefore, as expressed in the literature review, the development of these skills is radically different when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
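
A minimal sketch of the KMO, Bartlett, and factor-extraction steps described above, assuming the Python factor_analyzer package; the 200 x 12 response matrix below is placeholder random data, not the study's survey responses.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical responses: 200 workers x 12 four-point Likert items (placeholder random data).
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(1, 5, size=(200, 12)),
                  columns=[f"item_{i + 1}" for i in range(12)])

chi2, p_value = calculate_bartlett_sphericity(df)    # Bartlett's test of sphericity
kmo_per_item, kmo_model = calculate_kmo(df)          # sampling adequacy

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(df)
variance, proportion, cumulative = fa.get_factor_variance()
print(f"KMO = {kmo_model:.3f}, Bartlett p = {p_value:.3f}, "
      f"cumulative variance = {cumulative[-1]:.2%}")
```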

Keywords: digital literacy, digital society, workforce digitalization, digital skills

Procedia PDF Downloads 67
9263 Development of a Real Time Axial Force Measurement System and IoT-Based Monitoring for Smart Bearing

Authors: Hassam Ahmed, Yuanzhi Liu, Yassine Selami, Wei Tao, Hui Zhao

Abstract:

The purpose of this research is to develop a real-time axial force measurement system for a smart bearing through the use of strain gauges, with data acquisition performed by an Arduino microcontroller due to its ease of use and low cost. The measured signal is acquired and then digitized using a Wheatstone bridge and an analog-to-digital converter (ADC), respectively. For bearing monitoring, a real-time monitoring system based on the Internet of Things (IoT) and Bluetooth was developed. Experimental tests were performed on a bearing over a force range of up to 600 kN. The experimental results show that there is a proportional linear relationship between the applied force and the output voltage, with a coefficient of determination R² of 0.9878 based on the regression analysis.
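
A minimal sketch of the calibration regression between applied force and bridge output; the force-voltage pairs below are hypothetical, not the experimental data, and only illustrate how the linear fit and R² are obtained.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data: applied axial force (kN) vs. bridge output voltage (mV).
force_kn = np.array([0, 100, 200, 300, 400, 500, 600])
voltage_mv = np.array([0.02, 0.51, 1.03, 1.49, 2.05, 2.52, 3.01])

fit = stats.linregress(force_kn, voltage_mv)
print(f"slope = {fit.slope:.4f} mV/kN, intercept = {fit.intercept:.3f} mV, "
      f"R^2 = {fit.rvalue ** 2:.4f}")

# Invert the calibration to estimate force from a new voltage reading
new_voltage = 1.75
estimated_force = (new_voltage - fit.intercept) / fit.slope
print(f"estimated force: {estimated_force:.1f} kN")
```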

Keywords: bearing, force measurement, IoT, strain gauge

Procedia PDF Downloads 144
9262 Vehicle Gearbox Fault Diagnosis Based on Cepstrum Analysis

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Research on damage to gears and gear pairs using vibration signals remains very attractive because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signals of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information generally obscured by the noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands when more than one family of sidebands is present at the same time. The concepts behind the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, and the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. A method for the fault diagnosis of gear faults based on the order cepstrum is also presented. The procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of the gear condition.
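
A minimal sketch of the real cepstrum applied to a synthetic gear-mesh signal with fault-induced sidebands; the sampling rate, mesh frequency, and fault frequency are assumptions for illustration, not the test-stand data.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(x)
    return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

# Hypothetical vibration signal: a gear-mesh tone amplitude-modulated by a fault-related sideband family.
fs = 10000
t = np.arange(0, 1.0, 1.0 / fs)
mesh, fault = 1000.0, 35.0                       # assumed mesh and fault (shaft) frequencies in Hz
x = (1 + 0.5 * np.cos(2 * np.pi * fault * t)) * np.cos(2 * np.pi * mesh * t)
x += 0.1 * np.random.randn(len(t))

cep = real_cepstrum(x)
quefrency = np.arange(len(cep)) / fs
peak = quefrency[np.argmax(cep[50:len(cep) // 2]) + 50]   # skip the low-quefrency region
print(f"dominant quefrency ~ {peak * 1000:.1f} ms "
      f"(sideband spacing ~ {1 / peak:.1f} Hz)")
```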

Keywords: cepstrum analysis, fault diagnosis, gearbox, vibration signals

Procedia PDF Downloads 380
9261 The Impact of Artificial Intelligence on Food Nutrition

Authors: Antonyous Fawzy Boshra Girgis

Abstract:

Nutrition labels are a diet-related health policy tool. They help individuals improve food-choice decisions and reduce their intake of calories and unhealthy food elements, such as cholesterol. However, many individuals do not pay attention to nutrition labels or fail to understand them appropriately. According to the literature, thinking and cognitive styles can have significant effects on attention to nutrition labels. To the author's knowledge, the effect of global/local processing on attention to nutrition labels has not been previously studied. Global/local processing encourages individuals to attend to the whole or to specific parts of an object and can have a significant impact on people's visual attention. In this study, this effect was examined with an experimental design using the eye-tracking technique. The research hypothesis was that individuals with local processing would pay more attention to nutrition labels, including nutrition tables and traffic lights. An experiment was designed with two conditions: global and local information processing. Forty participants were randomly assigned to either the global or the local condition, and their processing style was manipulated accordingly. The results supported the hypothesis for nutrition tables but not for traffic lights.

Keywords: nutrition, public health, SA Harvest, food, eye-tracking, nutrition labelling, global/local information processing, individual differences, mobile computing, cloud computing, nutrition label use, nutrition management, barcode scanning

Procedia PDF Downloads 43
9260 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around the aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computed tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under the influence of compressive displacement. The effect of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength has been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. A total of 49 slices are obtained from a 150 mm cube, with a slice interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can later be tested in compression in a universal testing machine (UTM) to find its strength. The image processing and the extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black-and-white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to the relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacement, stresses, and damage are evaluated by ABAQUS after importing the input file. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effect of the shape and distribution of aggregates, the inclusion of voids, and the variation of the ITZ layer thickness on the load-carrying capacity, stress-strain response, and strain localization of concrete has been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may further be compared with test results of concrete cores and can be used as an important tool for the strength evaluation of concrete.
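
A minimal Python sketch of mapping a CT slice to the 0-9 mesoscale labeling scheme described above; the file name and grayscale thresholds are assumptions for illustration, not the authors' actual thresholds or code.

```python
import numpy as np
from PIL import Image

def label_slice(png_path):
    """Map a CT slice to the 0-9 scale: 0 = void, 1-3 = boundary, 4-6 = mortar, 7-9 = aggregate."""
    gray = np.asarray(Image.open(png_path).convert("L"), dtype=float)  # 0-255 grayscale
    labels = np.zeros_like(gray, dtype=int)                            # darkest pixels (voids) stay 0
    labels[(gray > 30) & (gray <= 90)] = 2    # boundary between mortar and aggregate (assumed thresholds)
    labels[(gray > 90) & (gray <= 180)] = 5   # mortar
    labels[gray > 180] = 8                    # aggregate
    return labels

labels = label_slice("ct_slice_01.png")       # hypothetical slice file name
print({value: int(count) for value, count in zip(*np.unique(labels, return_counts=True))})
```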

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 241