Search results for: processing based on signal identification
31722 Automatic and High Precise Modeling for System Optimization
Authors: Stephanie Chen, Mitja Echim, Christof Büskens
Abstract:
Mathematical models are formulated to describe and predict the behavior of a system. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Such models may therefore be unsuitable for the numerical optimization of real systems, since optimization techniques require numerous evaluations of the model. Moreover, not all quantities necessary for the identification may be available, so the model must be adapted manually. This paper therefore describes an approach that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system regardless of the scientific background. Additionally, it can automatically identify correlations in the data. The method can be classified as a multivariate regression analysis. In contrast to many other regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of adapting the generated models in real time during operation. In this way, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been tested successfully in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with an error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. In summary, the approach efficiently computes highly precise, real-time-adaptive, data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm such as WORHP (We Optimize Really Huge Problems), complex systems can now be represented by a high-precision model and optimized according to the user's wishes. The proposed methods are illustrated with different examples.
Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization
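The core idea of the abstract above, a regression that also captures products of variables via a truncated series expansion, can be sketched as follows. This is an illustrative least-squares fit over polynomial cross terms, not the authors' actual algorithm; the function name and the degree parameter are our own assumptions.

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_product_model(X, y, degree=2):
    """Least-squares fit of a multivariate polynomial that includes
    products of variables (cross terms), in the spirit of a truncated
    multivariate series expansion."""
    n, d = X.shape
    columns = [np.ones(n)]  # constant term
    for k in range(1, degree + 1):
        for combo in combinations_with_replacement(range(d), k):
            col = np.ones(n)
            for j in combo:
                col = col * X[:, j]  # build the product term
            columns.append(col)
    A = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, A

# A target that depends on the product x0*x1, which a regression over
# single variables alone would miss entirely.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = 3.0 * X[:, 0] * X[:, 1] + 0.5
coeffs, A = fit_product_model(X, y)
max_residual = float(np.max(np.abs(A @ coeffs - y)))
```

Because the design matrix contains the `x0*x1` column, the fit is exact up to floating-point error, which is the point the abstract makes about product terms.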
Procedia PDF Downloads 409
31721 A Non-Destructive TeraHertz System and Method for Capsule and Liquid Medicine Identification
Authors: Ke Lin, Steve Wu Qing Yang, Zhang Nan
Abstract:
Medicines have traditionally been manufactured to the final product and then verified for quality by laboratory analysis. The industry, however, crucially needs a monitoring technique for final batch-to-batch quality checks. The introduction of process analytical technology (PAT) provides an incentive to obtain real-time information about drugs on the production line, with the following optical techniques being considered: near-infrared (NIR) spectroscopy, Raman spectroscopy and imaging, and mid-infrared spectroscopy, combined with chemometric techniques to quantify the final product. These methods, however, present problems in that the spectra obtained consist of many combination and overtone bands of the fundamental vibrations, making analysis difficult. In this work, we describe a non-destructive system and method for capsule and liquid medicine identification; more particularly, terahertz time-domain spectroscopy and/or a purpose-designed portable terahertz system are used to identify different types of medicine inside capsule packaging or in liquid medicine bottles. The target medicine can be detected directly, non-destructively, and non-invasively.
Keywords: terahertz, non-destructive, non-invasive, chemical identification
Procedia PDF Downloads 131
31720 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Using advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
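The fusion of report-derived text features with image-derived data can be illustrated with a toy sketch. Every record, keyword, and score below is invented; the depth-1 tree ("stump") ensemble over bootstrap resamples is a minimal stand-in for a full Random Forest, not the study's pipeline.

```python
import random

# Toy fusion of NLP features from report text with an image-derived score.
RECORDS = [
    ("no acute cardiopulmonary abnormality", 0.10, 0),
    ("lungs are clear, normal heart size", 0.20, 0),
    ("focal opacity consistent with pneumonia", 0.90, 1),
    ("left lower lobe opacity and effusion", 0.80, 1),
    ("normal study, lungs unremarkable", 0.15, 0),
    ("dense opacity, probable infiltrate", 0.85, 1),
]
VOCAB = ["opacity", "effusion", "pneumonia", "normal", "clear", "infiltrate"]

def featurize(report, image_score):
    """NLP step (keyword-presence flags) merged with the image feature."""
    return [1.0 if word in report else 0.0 for word in VOCAB] + [image_score]

def fit_stump(X, y):
    """Best single-feature threshold split by 0/1 training error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            for flip in (0, 1):
                preds = [(1 - flip) if row[j] > t else flip for row in X]
                err = sum(p != v for p, v in zip(preds, y))
                if best is None or err < best[0]:
                    best = (err, j, t, flip)
    return best[1:]

def forest_predict(forest, x):
    votes = sum((1 - flip) if x[j] > t else flip for j, t, flip in forest)
    return 1 if 2 * votes >= len(forest) else 0

rng = random.Random(0)
X = [featurize(text, score) for text, score, _ in RECORDS]
y = [label for _, _, label in RECORDS]
forest = []
for _ in range(15):  # each stump sees a bootstrap resample of the data
    idx = [rng.randrange(len(X)) for _ in range(len(X))]
    forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
preds = [forest_predict(forest, x) for x in X]
```

The design point is the feature vector: text flags and the image score live in one row, so the ensemble can split on either modality.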
Procedia PDF Downloads 46
31719 Numerical Simulation and Laboratory Tests for Rebar Detection in Reinforced Concrete Structures using Ground Penetrating Radar
Authors: Maha Al-Soudani, Gilles Klysz, Jean-Paul Balayssac
Abstract:
The aim of this paper is to use ground penetrating radar (GPR) as a non-destructive testing (NDT) method and to increase its accuracy in recognizing the geometry of reinforced concrete structures, in particular the position of steel bars. This information helps managers assess the state of their structures, on the one hand with respect to security constraints, and on the other to quantify the need for maintenance and repair. Several configurations of acquisition and processing of the simulated signal were tested in order to propose and develop an appropriate imaging algorithm in the propagation medium to locate the rebar accurately. The imaging algorithm was subsequently validated experimentally by testing it on real reinforced concrete structures. The results indicate that this algorithm is capable of estimating the reinforcing steel bar position to within 0-1 mm.
Keywords: GPR, NDT, reinforced concrete structures, rebar location
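A basic quantity underlying rebar location with GPR is the conversion of two-way travel time to depth via the wave velocity in the medium. A minimal sketch, assuming a homogeneous concrete and an illustrative relative permittivity (not a value from the paper):

```python
C = 3.0e8  # free-space speed of light, m/s

def rebar_depth(two_way_time_s, eps_r=6.0):
    """Reflector depth from GPR two-way travel time, assuming a
    homogeneous concrete with relative permittivity eps_r (illustrative
    default; real concrete varies, e.g. with moisture content)."""
    v = C / eps_r ** 0.5          # wave velocity in the medium
    return v * two_way_time_s / 2.0  # halve: the pulse travels down and back
```

For example, with eps_r = 9 the velocity is 1e8 m/s, so a 1 ns echo corresponds to a 5 cm deep reflector.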
Procedia PDF Downloads 504
31718 Valence and Arousal-Based Sentiment Analysis: A Comparative Study
Authors: Usama Shahid, Muhammad Zunnurain Hussain
Abstract:
This research paper presents a comprehensive analysis of a sentiment analysis approach that employs valence and arousal as its foundational pillars, in comparison to traditional techniques. Sentiment analysis is an indispensable task in natural language processing that involves the extraction of opinions and emotions from textual data. The valence and arousal dimensions, representing the positivity/negativity and intensity of emotions, respectively, enable the creation of four quadrants, each representing a specific emotional state. The study seeks to determine the impact of utilizing these quadrants to identify distinct emotional states on the accuracy and efficiency of sentiment analysis, in comparison to traditional techniques. The results reveal that the valence and arousal-based approach outperforms other approaches, particularly in identifying nuanced emotions that may be missed by conventional methods. The study's findings are crucial for applications such as social media monitoring and market research, where the accurate classification of emotions and opinions is paramount. Overall, this research highlights the potential of using valence and arousal as a framework for sentiment analysis and offers invaluable insights into the benefits of incorporating specific types of emotions into the analysis. These findings have significant implications for researchers and practitioners in the field of natural language processing, as they provide a basis for the development of more accurate and effective sentiment analysis tools.
Keywords: sentiment analysis, valence and arousal, emotional states, natural language processing, machine learning, text analysis, sentiment classification, opinion mining
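The four-quadrant mapping described above can be sketched directly. The quadrant labels below follow the common circumplex reading of valence and arousal and are illustrative, not the paper's exact terminology:

```python
def quadrant(valence, arousal):
    """Map a (valence, arousal) pair, each scaled to [-1, 1], onto one of
    the four emotional-state quadrants (labels are illustrative)."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"    # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/stressed"   # negative valence, high arousal
    if valence < 0:
        return "sad/depressed"    # negative valence, low arousal
    return "calm/content"         # positive valence, low arousal
```

A sentiment model that regresses valence and arousal per sentence can then report the quadrant instead of a bare positive/negative label.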
Procedia PDF Downloads 101
31717 Statistical Tools for SFRA Diagnosis in Power Transformers
Authors: Rahul Srivastava, Priti Pundir, Y. R. Sood, Rajnish Shrivastava
Abstract:
For the interpretation of sweep frequency response analysis (SFRA) signatures of transformers, different statistical techniques serve as effective tools for either phase-to-phase comparison or sister-unit comparison. In this paper, along with a discussion of SFRA, several statistical indicators such as the cross correlation coefficient (CCF), root square error (RSQ), comparative standard deviation (CSD), absolute difference (DABS), mean square error (MSE), and min-max ratio (MM) are presented through several case studies. These methods require the sample data size and the spot frequencies of the SFRA signatures being compared. The techniques used are based on power signal processing tools that can simplify the results, and limits can be set for the severity of faults occurring in the transformer due to short circuit forces or ageing. The advantages of using statistical techniques for analyzing SFRA results are demonstrated through several case studies, from which the state of the transformer is determined.
Keywords: absolute difference (DABS), cross correlation coefficient (CCF), mean square error (MSE), min-max ratio (MM-ratio), root square error (RSQ), standard deviation (CSD), sweep frequency response analysis (SFRA)
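A few of the listed indicators can be sketched for two SFRA signatures sampled at the same spot frequencies. These are illustrative, textbook definitions (CCF as a Pearson correlation, MM as a ratio of sums), not necessarily the exact formulas used in the paper:

```python
import numpy as np

def sfra_metrics(h_ref, h_test):
    """Pairwise indicators for comparing two SFRA signatures sampled at
    the same spot frequencies (illustrative, common definitions)."""
    h_ref = np.asarray(h_ref, dtype=float)
    h_test = np.asarray(h_test, dtype=float)
    return {
        # cross correlation coefficient: 1.0 means identical shape
        "CCF": float(np.corrcoef(h_ref, h_test)[0, 1]),
        # mean square error between the two traces
        "MSE": float(np.mean((h_ref - h_test) ** 2)),
        # mean absolute difference
        "DABS": float(np.mean(np.abs(h_ref - h_test))),
        # min-max ratio: 1.0 for identical positive traces
        "MM": float(np.minimum(h_ref, h_test).sum()
                    / np.maximum(h_ref, h_test).sum()),
    }
```

Identical signatures should give CCF and MM near 1 and MSE/DABS near 0; deviations from those limits are what get thresholded for fault severity.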
Procedia PDF Downloads 697
31716 Model-Based Fault Diagnosis in Carbon Fiber Reinforced Composites Using Particle Filtering
Abstract:
Carbon fiber reinforced polymer (CFRP) composites used in aircraft structures are subject to lightning strikes, putting structural integrity at risk. Indirect damage may occur after a lightning strike, where the internal structure is damaged by excessive heat induced by the lightning current while the surface of the structure remains intact. Three damage modes may be observed after a lightning strike: fiber breakage, inter-ply delamination, and intra-ply cracks. The assessment of internal damage states in composites is challenging due to the complicated microstructure, inherent uncertainties, and existence of multiple damage modes. In this work, a model-based approach is adopted to diagnose faults in carbon composites after lightning strikes. A resistor network model is implemented to relate the overall electrical and thermal conduction behavior under a simulated lightning current waveform to the intrinsic temperature-dependent material properties, microstructure, and degradation of the materials. A fault detection and identification (FDI) module utilizes the physics-based model and a particle filtering algorithm to identify the damage mode as well as to calculate the probability of structural failure. Extensive simulation results are provided to substantiate the proposed fault diagnosis methodology in both single-fault and multiple-fault cases. The approach is also demonstrated on transient resistance data collected from an IM7/Epoxy laminate under a simulated lightning strike.
Keywords: carbon composite, fault detection, fault identification, particle filter
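The particle filtering step of an FDI module can be illustrated with a minimal bootstrap filter tracking a single scalar state from noisy readings. Everything here, the state, noise levels, and data, is a toy stand-in for the paper's resistor-network model:

```python
import math
import random

def particle_filter(observations, n_particles=500, seed=1):
    """Minimal bootstrap particle filter tracking one scalar state (a
    stand-in for a damage-dependent resistance) from noisy measurements."""
    rng = random.Random(seed)
    # Initialize particles from a broad prior over the state.
    particles = [rng.uniform(0.0, 10.0) for _ in range(n_particles)]
    for z in observations:
        # Predict: propagate each particle with random-walk process noise.
        particles = [p + rng.gauss(0.0, 0.05) for p in particles]
        # Update: weight by a Gaussian measurement likelihood (sigma 0.5).
        weights = [math.exp(-0.5 * ((z - p) / 0.5) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample: draw particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles  # posterior-mean estimate

# Noisy readings around a hypothetical true value of 5.0.
estimate = particle_filter([5.1, 4.9, 5.0, 5.05, 4.95] * 4)
```

In the paper's setting the likelihood would come from the physics-based resistor network rather than the fixed Gaussian assumed here, and the state would encode the damage mode.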
Procedia PDF Downloads 195
31715 One Step Further: Pull-Process-Push Data Processing
Authors: Romeo Botes, Imelda Smit
Abstract:
In today’s age of technology, vast amounts of data need to be processed in real time to keep users satisfied. These data come from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP, and HTTP/s, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but are mostly in an unreadable format that must be processed to provide information and business intelligence. These data are not always current; they are mostly historical, and they are not subject to the consistency and redundancy measures that most other data usually are. Most important to the users is that the data be pre-processed into a readable format when entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One commonly used technique is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the duration of the processing. As a result, it decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps.
The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU, storage, and processing-time performance.
Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list
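The three-step technique can be sketched with an in-memory SQLite table: pull the raw rows, process them in a plain in-memory list with no database lock held, and push the results back in one short batch. The "lat;lon" payload format and table layout are invented for illustration:

```python
import sqlite3

# Toy table of raw GPS payloads awaiting decoding.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE readings (id INTEGER PRIMARY KEY, raw TEXT, lat REAL, lon REAL)"
)
conn.executemany("INSERT INTO readings (raw) VALUES (?)",
                 [("52.1;4.3",), ("40.7;-74.0",)])
conn.commit()

# Step 1 - PULL: fetch the unprocessed rows, then let the database go.
rows = conn.execute("SELECT id, raw FROM readings WHERE lat IS NULL").fetchall()

# Step 2 - PROCESS: decode into an in-memory list, with no lock held.
processed = []
for row_id, raw in rows:
    lat_str, lon_str = raw.split(";")
    processed.append((float(lat_str), float(lon_str), row_id))

# Step 3 - PUSH: write everything back in one short batch transaction.
conn.executemany("UPDATE readings SET lat = ?, lon = ? WHERE id = ?", processed)
conn.commit()
```

The single-step alternative would hold the connection and its locks through the whole decoding loop; splitting the steps confines database contact to the brief pull and push phases.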
Procedia PDF Downloads 244
31714 Combined Synchrotron Radiography and Diffraction for in Situ Study of Reactive Infiltration of Aluminum into Iron Porous Preform
Authors: S. Djaziri, F. Sket, A. Hynowska, S. Milenkovic
Abstract:
The use of Fe-Al based intermetallics as an alternative to Cr/Ni based stainless steels is very promising for industrial applications that use critical-raw-material parts under extreme conditions. However, the development of advanced Fe-Al based intermetallics with appropriate mechanical properties presents several challenges involving appropriate processing and microstructure control. A processing strategy is being developed that aims at producing a net-shape porous Fe-based preform that is infiltrated with molten Al or an Al alloy. In the present work, porous Fe-based preforms produced by two different methods (selective laser melting (SLM) and the Kochanek process (KE)) are studied during infiltration with molten aluminum. With the objective of elucidating the mechanisms underlying the formation of Fe-Al intermetallic phases during infiltration, an in-house furnace has been designed for in situ observation of infiltration at synchrotron facilities, combining x-ray radiography (XR) and x-ray diffraction (XRD) techniques. The feasibility of this approach has been demonstrated, and information about the propagation of the melt flow front has been obtained. In addition, reactive infiltration has been achieved, where a bi-phased intermetallic layer was identified forming between the solid Fe and liquid Al. In particular, a tongue-like Fe₂Al₅ phase adhering to the Fe and a needle-like Fe₄Al₁₃ phase adhering to the Al were observed. The growth of the intermetallic compound was found to depend on the temperature gradient along the preform as well as on the reaction time, which is discussed in view of the different results obtained.
Keywords: combined synchrotron radiography and diffraction, Fe-Al intermetallic compounds, in-situ molten Al infiltration, porous solid Fe preforms
Procedia PDF Downloads 226
31713 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas
Authors: Yusuf Ulaş Kabukçu, Si̇nan Çeli̇k, Onur Salan, Mai̇de Altuntaş, Mert Can Dalkiran, Gökseni̇n Bozdağ, Metehan Bulut, Fati̇h Yaman
Abstract:
This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar, operating at 2.6 GHz. The main difference of our prototype is the use of microstrip antennas, which makes it possible to transport the system on a small robotic vehicle. We designed our radar system with two channels: Tx and Rx. The system mainly consists of a voltage controlled oscillator (VCO) source, low noise amplifiers, microstrip antennas, a splitter, a mixer, a low pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated, and verified by experiments. The system has two operating modes: speed detection and range detection. If the operation-mode switch is off, only a CW signal is transmitted, for speed measurement. When the switch is on, the CW signal is frequency-modulated and range detection is possible. In speed detection mode, the high-frequency signal (2.6 GHz) is generated by a VCO and then amplified to a reasonable transmit power level. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used so that the frequencies of the transmitted and received signals can be compared. Half of the amplified signal (LO) is forwarded to a mixer, which compares the frequencies of the transmitted and received (RF) signals and produces the IF output, in other words the Doppler frequency. The IF output is then filtered and amplified so the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed into the audio input of a computer. The Doppler frequency is then displayed as a speed change in a figure via a Matlab script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, a chirp signal is used to form an FM chirp.
This FM chirp makes it possible to determine the range of the target, since the Doppler frequency measured with CW alone is not sufficient for range detection. Such an FMCW Doppler radar may be used in national border security, since it is capable of both speed and range detection.
Keywords: doppler radar, FMCW, range detection, speed detection
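The two operating modes rest on standard radar relations: the CW Doppler shift gives radial speed, and the FMCW beat frequency gives range. A sketch with illustrative parameter values (the chirp bandwidth and duration below are assumptions, not figures from the paper):

```python
C = 3.0e8  # speed of light, m/s

def doppler_speed(f_doppler_hz, f_carrier_hz=2.6e9):
    """Radial target speed (m/s) from the CW Doppler shift:
    v = f_d * c / (2 * f0)."""
    return f_doppler_hz * C / (2.0 * f_carrier_hz)

def fmcw_range(f_beat_hz, bandwidth_hz, chirp_time_s):
    """Target range (m) from the FMCW beat frequency:
    R = c * f_b / (2 * slope), with chirp slope = B / T."""
    slope = bandwidth_hz / chirp_time_s  # Hz per second
    return C * f_beat_hz / (2.0 * slope)

# A 173 Hz Doppler shift at 2.6 GHz corresponds to roughly 10 m/s.
speed = doppler_speed(173.0)
# A 10 kHz beat with an assumed 100 MHz / 1 ms chirp corresponds to 15 m.
range_m = fmcw_range(1.0e4, 100e6, 1e-3)
```

This is why CW alone cannot give range: without the chirp there is no beat-frequency-to-distance mapping, only the Doppler term.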
Procedia PDF Downloads 398
31712 Rapid Identification of Thermophilic Campylobacter Species from Retail Poultry Meat Using Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry
Authors: Graziella Ziino, Filippo Giarratana, Stefania Maria Marotta, Alessandro Giuffrida, Antonio Panebianco
Abstract:
In Europe, North America, and Japan, campylobacteriosis is one of the leading food-borne bacterial illnesses, often related to the consumption of poultry meat and/or by-products. The aim of this study was to evaluate the Campylobacter contamination of poultry meats marketed in Sicily (Italy) using both traditional methods and matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS). MALDI-TOF MS is considered a promising rapid (less than one hour) identification method for food-borne pathogenic bacteria. One hundred chicken and turkey meat preparations (68 hamburgers, 21 raw sausages, 4 meatballs, and 7 meat rolls) were taken from different butcher's shops and large-scale retailers and submitted to detection/enumeration of Campylobacter spp. according to EN ISO 10272-1:2006 and EN ISO 10272-2:2006. Campylobacter spp. was detected, generally at low counts, in 44 samples (44%), of which 30 were from large-scale retailers and 14 from butcher's shops. Chicken meats were significantly more contaminated than turkey meats. Among the preparations, Campylobacter spp. was found in 85.71% of meat rolls, 50% of meatballs, 44.12% of hamburgers, and 28.57% of raw sausages. A total of 100 strains, 2-3 from each positive sample, were isolated for identification by phenotypic, biomolecular, and MALDI-TOF MS methods. C. jejuni was the predominant species (63%), followed by C. coli (33%) and C. lari (4%). MALDI-TOF MS correctly identified 98% of the strains at the species level; only 1% of the tested strains were not identified. In the remaining 1%, two different species were mixed in the same sample, and MALDI-TOF MS correctly identified at least one of them.
Considering the importance of rapid identification of pathogens in food matrices, this method is highly recommended for the identification of suspected Campylobacter colonies.
Keywords: campylobacter spp., food microbiology, matrix-assisted laser desorption ionization-time of flight mass spectrometry, rapid microbial identification
Procedia PDF Downloads 292
31711 A U-Net Based Architecture for Fast and Accurate Diagram Extraction
Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal
Abstract:
In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Document analysis therefore requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., meets the need for real-time processing with high accuracy, as required in multiple applications. In the education domain, diagrams can have varied characteristics, e.g. line-based geometric diagrams, chemical bonds, mathematical formulas, etc. Two broad categories of approaches try to solve similar problems: traditional computer vision based approaches and deep learning approaches. The traditional computer vision based approaches mainly leverage connected components and distance-transform based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures, and suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO
Procedia PDF Downloads 138
31710 The Influence of Concreteness on English Compound Noun Processing: Modulation of Constituent Transparency
Authors: Turgut Coskun
Abstract:
'Concreteness effect' refers to the faster processing of concrete words, and 'compound facilitation' refers to faster responses to compounds. In this study, our main goal was to investigate the interaction between compound facilitation and the concreteness effect. The latter might modulate compound processing based on the constituents' transparency patterns. To evaluate this, we created lists of compound and monomorphemic words, sub-categorized them into concrete and abstract words, and further sub-categorized them based on their transparency. The transparency conditions were opaque-opaque (OO), transparent-opaque (TO), and transparent-transparent (TT). We used RT data from the English Lexicon Project (ELP) for our comparisons. The results showed the importance of the concreteness factor (facilitation) in both compound and monomorphemic processing. Importantly for our present concern, separate analyses of concrete and abstract compounds revealed different patterns for OO, TO, and TT compounds. Concrete TT and TO compounds were processed faster than concrete OO, abstract OO, and abstract TT compounds; however, they were not processed faster than abstract TO compounds. These results may reflect different representation patterns for concrete and abstract compounds.
Keywords: abstract word, compound representation, concrete word, constituent transparency, processing speed
Procedia PDF Downloads 198
31709 Detection and Classification of Myocardial Infarction Using New Extracted Features from Standard 12-Lead ECG Signals
Authors: Naser Safdarian, Nader Jafarnia Dabanloo
Abstract:
In this paper, we used four features, i.e., the Q-wave integral, QRS-complex integral, T-wave integral, and total integral, extracted from normal and patient ECG signals, for the detection and localization of myocardial infarction (MI) in the left ventricle of the heart. Our research focused on the detection and localization of MI in the standard ECG. We use the Q-wave and T-wave integrals because these features are important indicators in the detection of MI. We used pattern recognition methods such as artificial neural networks (ANN) to detect and localize the MI, because these methods achieve good accuracy in classifying normal and abnormal signals. We used a type of radial basis function (RBF) network called the probabilistic neural network (PNN) because of its nonlinearity, as well as other classifiers such as k-nearest neighbors (KNN), the multilayer perceptron (MLP), and naive Bayes classification. We used the PhysioNet database for our training and test data. On the test data we reached over 80% accuracy for localization and over 95% for detection of MI. The main advantages of our method are its simplicity and good accuracy, and classification accuracy can be improved by adding more features. In summary, a simple method based on only four features extracted from the standard ECG is presented, with good accuracy in MI localization.
Keywords: ECG signal processing, myocardial infarction, features extraction, pattern recognition
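The four integral features can be sketched with a trapezoidal rule over wave windows given in seconds. The window boundaries below are illustrative; real use requires delineating the Q, QRS, and T segments per beat, which the sketch takes as given:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal-rule integral of samples y over points x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def wave_integrals(ecg, fs, q_win, qrs_win, t_win):
    """The four integral features of one beat; windows are (start, end)
    in seconds relative to the beat's first sample."""
    ecg = np.asarray(ecg, dtype=float)
    t = np.arange(len(ecg)) / fs
    def seg(win):
        lo, hi = win
        mask = (t >= lo) & (t <= hi)
        return _trapz(ecg[mask], t[mask])
    return {"Q": seg(q_win), "QRS": seg(qrs_win), "T": seg(t_win),
            "total": _trapz(ecg, t)}

# Sanity check on a flat unit signal: each integral equals its window length.
feats = wave_integrals(np.ones(101), 100, (0.0, 0.1), (0.1, 0.2), (0.2, 0.5))
```

The resulting four-number vector per beat is what would be fed to the PNN/KNN/MLP classifiers mentioned above.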
Procedia PDF Downloads 456
31708 Reactive Learning about Food Waste Reduction in a Food Processing Plant in Gauteng Province, South Africa
Authors: Nesengani Elelwani Clinton
Abstract:
This paper presents reflective learning as an opportunity commonly available and used for food waste learning in a food processing company in the transition to sustainable and just food systems. In addressing how employees learn about food waste during food processing, the opportunities available for food waste learning were investigated. Reflective learning appeared to be the most used approach to learning about food waste. In this case, reflective learning was a response after employees had wasted a substantial amount of food: process controllers and team leaders would highlight the issue to the employees who wasted food and explain how food waste could be reduced. This showed that learning about food waste is not proactive, and there continues to be a lack of structured learning around food waste. Several challenges to reflective learning about food waste were highlighted, including understanding the language, lack of interest from employees, set times to reach production targets, and working pressures. These challenges were reported to hinder this unstructured food waste learning. A need was identified for proactive learning through structured methods, because in the plant where food processing activities happen, the existing signage and posters relate to other sustainability issues such as food safety and health. This indicates low levels of awareness about food waste. Therefore, this paper argues that food waste learning should be proactive. The proactive learning approach should include structured learning materials around food waste during food processing. In structuring the learning materials, individual trainers should be multilingual, making it possible for those who do not understand English to learn in their own language.
Lastly, there should be signage and posters about food waste in the food processing plant. This will raise awareness of food waste, and employees' behaviour can be influenced by the posters and signage. This will enable a transition to a just and sustainable food system.
Keywords: sustainable and just food systems, food waste, food waste learning, reflective learning approach
Procedia PDF Downloads 130
31707 Early Identification and Early Intervention: Pre and Post Diagnostic Tests in Mathematics Courses
Authors: Kailash Ghimire, Manoj Thapa
Abstract:
This study focuses on the early identification of deficiencies in prerequisite areas among students enrolled in College Algebra and Calculus I classes. The students were given pre-diagnostic tests on the first day of class, before being provided with the syllabus. The tests consist of prerequisite, uniform, and advanced content outlined by the University System of Georgia (USG). The results show that 48% of students in College Algebra and 52% of Calculus I students lack prerequisite skills, even though, interestingly, these students were previously exposed to the uniform and advanced content. The study is still in progress, and this paper contains the outcomes from Fall 2017 and Spring 2018. The paper also discusses the early interventions used in these classes (meeting two days versus three days a week, and students' self-assessment using exam wrappers) and their effectiveness on students' learning. One result of this study is an improvement in the Drop, Fail, and Withdraw (DFW) rates of 7%-10% compared with previous semesters.
Keywords: student at risk, diagnostic tests, identification, intervention, normalization gain, validity of tests
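The "normalization gain" listed in the keywords is commonly computed as Hake's normalized gain, the fraction of the possible pre-to-post improvement a student actually achieved; we assume that reading here. A minimal sketch, assuming percentage scores:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: the share of the possible pre-to-post
    improvement actually achieved (scores given as percentages)."""
    if pre_pct >= 100.0:
        return 0.0  # no headroom left to gain
    return (post_pct - pre_pct) / (100.0 - pre_pct)
```

For example, moving from 40% on the pre-diagnostic to 70% on the post-test is a gain of 0.5: half of the available improvement was realized.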
Procedia PDF Downloads 208
31706 Research on the Cognition and Actual Phenomenon of School Bullying from the Perspective of Students
Authors: Chia-Chun Wu, Yu-Hsien Sung
Abstract:
This study examines the consistency between students' predictions and their actual observations of the bullying prevalence rate among different types of high-risk victims, thereby clarifying the reliability of student reports for the identification of bullying. A total of 1,732 Taiwanese students (734 males and 998 females) participated in this study. A Rasch model was adopted for data analysis. The results showed that students with personality or behavioral issues are more likely to be bullied in schools, according to both students' predictions and their actual observations. Moreover, students' predictions and actual observations of the bullying prevalence rate for different types of high-risk victims differed significantly between genders and between educational levels. To summarize, this study not only suggests that student reports on the identification of bullying are accurate and can be a valuable reference in recognizing bullying incidents, but also argues that students' gender and educational level deserve attention when their perspectives are taken into consideration in identifying bullying behaviors.
Keywords: school bullying, student, bullying recognition, high-risk victims
Procedia PDF Downloads 84
31705 Design and Creation of a BCI Videogame for Training and Measure of Sustained Attention in Children with ADHD
Authors: John E. Muñoz, Jose F. Lopez, David S. Lopez
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is a disorder that affects 1 out of 5 Colombian children, making it a real public health problem in the country. Conventional treatments such as medication and neuropsychological therapy have proved insufficient to decrease the high incidence of ADHD in the principal Colombian cities. This work presents the design and development of a videogame that uses a brain-computer interface not only as an input device but also as a tool to monitor neurophysiological signals. The videogame, named "The Harvest Challenge", is set in the cultural context of a Colombian coffee grower, where the player uses his/her avatar in three mini-games created to reinforce four fundamental abilities: i) waiting, ii) planning, iii) following instructions and iv) achieving objectives. The collaborative design process of the multimedia tool according to exact clinical requirements, and the proposed interactions based on the mental states of attention and relaxation, are described in detail. The final videogame is presented as a tool for sustained attention training in children with ADHD, using as its action mechanism the neuromodulation of beta and theta waves through an electrode located over the central part of the frontal lobe. The electroencephalographic signal is processed automatically inside the videogame, allowing the generation of a report of the evolution of the theta/beta ratio, a biological marker that has been shown to be sufficient to discriminate between children with and without the deficit.
Keywords: BCI, neuromodulation, ADHD, videogame, neurofeedback, theta/beta ratio
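As an illustration of the biomarker the abstract relies on, the theta/beta ratio can be computed from EEG band powers. The sketch below is a hypothetical minimal implementation, not the game's actual code; the 256 Hz sampling rate and the 4-8 Hz theta / 13-30 Hz beta band edges are our assumptions.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean power of `signal` within [low, high) Hz via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].mean()

def theta_beta_ratio(eeg, fs=256):
    """Theta (4-8 Hz) power divided by beta (13-30 Hz) power."""
    return band_power(eeg, fs, 4.0, 8.0) / band_power(eeg, fs, 13.0, 30.0)

# Synthetic one-second epoch: strong 6 Hz theta plus weak 20 Hz beta,
# mimicking the elevated theta/beta ratio reported for ADHD.
fs = 256
t = np.arange(fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
```

In neurofeedback terms, the game would reward the player when this ratio trends downward during a session.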
Procedia PDF Downloads 371
31704 Chikungunya Virus Detection Utilizing an Origami Based Electrochemical Paper Analytical Device
Authors: Pradakshina Sharma, Jagriti Narang
Abstract:
Due to their critical significance in the early identification of infectious diseases, electrochemical sensors have garnered considerable interest. Here, we develop a detection platform for the chikungunya virus (CHIKV) by rationally exploiting the extremely high charge-transfer efficiency of a ternary nanocomposite of graphene oxide, silver, and gold (G/Ag/Au). Because paper is an inexpensive substrate and can be produced in large quantities, the use of an origami electrochemical paper analytical device (ePAD) further enhances the sensor's appeal. Paper-based testing provides a cost-effective platform for point-of-care diagnostics. Sensors of this type are referred to as eco-designed analytical tools due to their efficient production, use of an eco-friendly substrate, and potential to reduce waste management after measurement by incinerating the sensor. In this research, the paper's foldability has been used to design and create a 3D multifaceted biosensor that can specifically detect CHIKV. X-ray diffraction, scanning electron microscopy, UV-vis spectroscopy, and transmission electron microscopy (TEM) were used to characterize the produced nanoparticles. Aptamers are used in this work since they are considered a unique and sensitive tool for rapid diagnostic methods. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV), both validated with a potentiostat, were used to measure the analytical response of the biosensor. The aptamer-modified electrode served as a signal modulation platform for hybridization with the target CHIKV antigen, whose presence was determined by a decline in the current produced by its interaction with an anionic mediator, methylene blue (MB). A detection limit of 1 ng/ml and a broad linear range of 1 ng/ml to 10 µg/ml for the CHIKV antigen were reported.
Keywords: biosensors, ePAD, arboviral infections, point of care
Procedia PDF Downloads 98
31703 Signals Affecting Crowdfunding Success for Australian Social Enterprises
Authors: Mai Yen Nhi Doan, Viet Le, Chamindika Weerakoon
Abstract:
Social enterprises have emerged as sustainable organisations that deliver social achievement along with long-term financial advancement. However, documented financial barriers have urged social enterprises to turn to other financing methods, because their ideology is misaligned with that of traditional financing capitalists; crowdfunding can be a promising alternative. Previous studies of crowdfunding have inadequately addressed social enterprises, with conflicting results due to the analysis of signals in isolation rather than in combination, using data from platforms that do not support social enterprises. Extending signalling theory, this study suggests that crowdfunding success results from the collaboration between costly and costless signals. The proposed conceptual framework describes the interaction between costly signals ("organisational information", "social entrepreneur's credibility" and "third-party endorsement") and costless signals (various sub-signals under the "campaign preparedness" signal) in achieving crowdfunding success. Using Qualitative Comparative Analysis, this study examined 45 crowdfunding campaigns run by Australian social enterprises on StartSomeGood and Chuffed. The analysis found that different combinations of costly and costless signals can lead to crowdfunding success, allowing social enterprises to adopt the combinations of signals suited to their context. The costless signal of campaign preparedness is fundamental for success, though different costless sub-signals under campaign preparedness can interact with different costly signals for the desired outcome. The third-party endorsement signal was found to be necessary for crowdfunding success for Australian social enterprises.
Keywords: crowdfunding, qualitative comparative analysis (QCA), signalling theory, social enterprises
Procedia PDF Downloads 103
31702 Image Processing of Scanning Electron Microscope Micrograph of Ferrite and Pearlite Steel for Recognition of Micro-Constituents
Authors: Subir Gupta, Subhas Ganguly
Abstract:
In this paper, we demonstrate a new area of application of image processing, namely metallurgical images, to create more opportunities for structure-property-correlation-based approaches to alloy design. The present exercise focuses on the development of image processing tools suitable for phase segmentation, grain boundary detection and recognition of micro-constituents in SEM micrographs of ferrite and pearlite steels. A comprehensive set of micrographs has been developed experimentally, encompassing variations in ferrite and pearlite volume fractions, with images taken at different magnifications (500X, 1000X, 1500X, 2000X, 3000X and 5000X) under a scanning electron microscope. The variation in volume fraction was achieved using four different plain carbon steels containing 0.1, 0.22, 0.35 and 0.48 wt% C, heat treated under annealing and normalizing treatments. The obtained pool of micrographs was arbitrarily divided into two parts to develop training and testing sets. The statistical recognition features for the ferrite and pearlite constituents were developed by learning from the training set of micrographs, and the obtained features for microstructure pattern recognition were then applied to the test set. The analysis of the results shows that the developed strategy can successfully detect the micro-constituents across the wide range of magnifications and volume fractions of the constituents in the structure, with an accuracy of about ±5%.
Keywords: SEM micrograph, metallurgical image processing, ferrite pearlite steel, microstructure
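Once a micrograph has been segmented, a constituent's volume (area) fraction reduces to pixel counting. A minimal sketch in pure NumPy, using a hypothetical fixed intensity threshold (the paper's actual segmentation is more sophisticated than this):

```python
import numpy as np

def phase_fraction(gray, threshold=128):
    """Estimate the area fraction of the darker phase (e.g. pearlite) in an
    8-bit grayscale micrograph by intensity thresholding and pixel counting."""
    return float(np.mean(gray < threshold))

# Synthetic "micrograph": left half dark phase, right half bright matrix.
img = np.full((100, 100), 200, dtype=np.uint8)
img[:, :50] = 40
```

By stereology, this 2D area fraction is taken as an estimate of the 3D volume fraction.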
Procedia PDF Downloads 199
31701 Multi-Class Text Classification Using Ensembles of Classifiers
Authors: Syed Basit Ali Shah Bukhari, Yan Qiang, Saad Abdul Rauf, Syed Saqlaina Bukhari
Abstract:
Text classification is the methodology of assigning any given text to the appropriate category from a given set of categories. It is vital to use a proper combination of pre-processing, feature selection and classification techniques to achieve this purpose. In this paper we used different ensemble techniques, along with variation in the feature selection parameters, to observe the change in overall accuracy of the result and in per-class measures such as the precision of each individual category of the text. After subjecting our data to pre-processing and feature selection, different individual classifiers were tested first, and then the classifiers were combined into ensembles to increase their accuracy. We also studied the impact of decreasing the number of classification categories on overall accuracy. Text classification is widely used in sentiment analysis on social media sites such as Twitter to gauge people's opinions about a cause, and to analyze customers' reviews of products or services. Opinion mining is a vital task in data mining, and text categorization is a backbone of opinion mining.
Keywords: Natural Language Processing, Ensemble Classifier, Bagging Classifier, AdaBoost
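The simplest way classifiers are combined into an ensemble is majority voting over their individual predictions. A toy sketch in pure Python, with hypothetical keyword-rule classifiers standing in for trained models (the paper itself uses weighted ensembles such as bagging and AdaBoost, which this does not reproduce):

```python
from collections import Counter

def majority_vote(classifiers, text):
    """Combine classifiers by majority vote: each classifier maps the text
    to a category label, and the most common label wins."""
    votes = [clf(text) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical rule-based "classifiers" for a two-category problem.
clf_a = lambda t: "sports" if "goal" in t else "politics"
clf_b = lambda t: "sports" if "match" in t else "politics"
clf_c = lambda t: "politics"  # a deliberately weak voter


prediction = majority_vote([clf_a, clf_b, clf_c], "late goal decided the match")
```

The point the vote illustrates is robustness: the ensemble can outvote an individual classifier's mistake.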
Procedia PDF Downloads 232
31700 Evaluating 8D Reports Using Text-Mining
Authors: Benjamin Kuester, Bjoern Eilert, Malte Stonis, Ludger Overmeyer
Abstract:
Increasing quality requirements make reliable and effective quality management indispensable. This includes complaint handling, in which the 8D method is widely used. The 8D report, as the written documentation of the 8D method, is one of the key quality documents: it internally secures quality standards and acts as a communication medium to the customer. In practice, however, 8D reports are often faulty and of poor quality, and no quality control of 8D reports exists today. This paper describes the use of natural language processing for the automated evaluation of 8D reports. Based on semantic analysis and text-mining algorithms, the presented system is able to uncover content-related and formal quality deficiencies and thus increases the quality of complaint processing in the long term.
Keywords: 8D report, complaint management, evaluation system, text-mining
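One elementary formal check such a system could perform is verifying that all eight disciplines (D1-D8) are actually documented in the report. A minimal regex sketch under our own assumptions about heading style; it is not the semantic analysis the paper describes:

```python
import re

def missing_disciplines(report_text):
    """Return the discipline headings (D1..D8) that never appear in the
    8D report text, as a crude formal completeness check."""
    found = set(re.findall(r"\bD([1-8])\b", report_text))
    return [f"D{i}" for i in range(1, 9) if str(i) not in found]

# Hypothetical report that skips D7 (preventive actions).
report = """D1 Team  D2 Problem description  D3 Containment actions
D4 Root cause  D5 Corrective actions  D6 Implementation  D8 Congratulate team"""
```

Content-related deficiencies (e.g. a root cause section that merely restates the symptom) require the semantic text-mining layer described in the abstract.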
Procedia PDF Downloads 316
31699 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information flow (e.g. block-diagram or signal flow graph) paradigm for the design and simulation of microelectromechanical systems (MEMS) makes it easy to model MEMS devices using causal transfer functions and to interface them with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g. gap-varying MEMS capacitor structures) depend not only on the component, but also on the specific excitation mode (e.g. voltage or charge actuation). In contrast, the energy flow modeling paradigm, in terms of generalized across-through variables, offers an acausal perspective, clearly separating the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities, and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which have the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy flow system models.
Test bench simulations of the library models compare favorably both with analytical results and with more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronic integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
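For the idealized parallel-plate electrostatic actuator, the pull-in voltage that the library computes numerically can be checked against the textbook closed form V_pi = sqrt(8 k g^3 / (27 eps0 A)), reached when the movable plate has traveled one third of the gap. A small numerical sketch under that idealization (the parameter values are illustrative, not taken from the paper):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Closed-form pull-in voltage of an ideal parallel-plate actuator
    with spring constant k, initial gap and electrode area."""
    return math.sqrt(8.0 * k * gap ** 3 / (27.0 * EPS0 * area))

def equilibrium_voltage(k, gap, area, x):
    """Voltage balancing spring and electrostatic force at displacement x:
    k*x = eps0*A*V^2 / (2*(gap - x)^2)."""
    return math.sqrt(2.0 * k * x * (gap - x) ** 2 / (EPS0 * area))

# Illustrative values: k = 1 N/m, 2 um gap, 100 um x 100 um electrode.
k, g, A = 1.0, 2e-6, 1e-8
```

The equilibrium voltage is maximal at x = g/3; beyond that displacement no stable equilibrium exists, which is the stability loss the abstract refers to.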
Procedia PDF Downloads 13731698 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis
Authors: Alexander A. Tokmakov
Abstract:
Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled, uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the protein synthesis systems currently in use, and the factors determining expression success are poorly understood. At present, a vast volume of data has accumulated in cell-free expression databases, making comprehensive bioinformatics analysis possible and enabling the identification of multiple features associated with successful cell-free expression. Here, we describe an approach aimed at identifying multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation, and highlight the major correlations obtained using this approach. The developed method includes: categorical assessment of the protein expression data, calculation and prediction of multiple properties of the expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of the statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between calculated and predicted features of protein sequences and their amenability to cell-free expression. It was found that some of the features, such as protein pI, hydrophobicity and the presence of signal sequences, are mostly related to protein solubility, whereas others, such as protein length, number of disulfide bonds and content of secondary structure, affect mainly the expression propensity. We also demonstrated that the amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modification. The correlations revealed in this study provide a plethora of important insights into protein folding and the rationalization of protein production.
The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.
Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins
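Hydrophobicity, one of the solubility-related features listed above, is commonly summarized as the GRAVY score: the mean Kyte-Doolittle hydropathy over all residues. A minimal sketch using the standard Kyte-Doolittle scale (the function name is ours; the paper does not specify its exact hydrophobicity measure):

```python
# Kyte-Doolittle hydropathy scale (Kyte & Doolittle, 1982).
KD = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def gravy(sequence):
    """Grand average of hydropathy: mean Kyte-Doolittle value per residue.
    Positive scores indicate hydrophobic (often less soluble) sequences."""
    return sum(KD[aa] for aa in sequence.upper()) / len(sequence)
```

Scores like this one, computed over a whole expression database, are the kind of per-sequence feature that can then be correlated with expression success.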
Procedia PDF Downloads 419
31697 Leader Self-sacrifice in Sports Organizations
Authors: Stefano Ruggieri, Rubinia C. Bonfanti
Abstract:
Research on leadership in sports organizations has proved extremely fruitful in recent decades, favoring the growth and diffusion of figures such as mental coaches and trainers. Recent scholarly attention has been directed towards the phenomenon of leader self-sacrifice, wherein leaders who display such behavior are perceived by their followers as more effective, charismatic, and legitimate than those who prioritize self-interest. This growing interest reflects the importance of leaders who prioritize collective welfare over personal gain, as they inspire greater loyalty, trust, and dedication among their followers, ultimately fostering a more cohesive and high-performing team environment. However, there is limited literature on the mechanisms through which self-sacrifice influences both group dynamics (such as cohesion and team identification) and individual factors (such as self-competence). The aim of this study is to analyze the impact of leader self-sacrifice on cohesion, team identification and self-competence. Team identification is a crucial determinant of individual identity, delineated by the extent to which a team member aligns with a specific organizational team rather than broader social collectives. This association motivates members to synchronize their actions with the collective interests of the group, thereby fostering cohesion among its members and cultivating a shared sense of purpose and unity within the team. In the domain of team sports, particularly soccer and water polo, two studies involving 447 participants (men = 238, women = 209) between 22 and 35 years old (M = 26.36, SD = 5.51) were conducted. The first study employed a correlational methodology to investigate the predictive capacity of self-sacrifice on cohesion, team identification, self-efficacy, and self-competence. The second study utilized an experimental design to explore the relationship between team identification and self-sacrifice.
Together, these studies provided comprehensive insights into the multifaceted nature of leader self-sacrifice and its profound implications for group cohesion and individual well-being within organizational settings. The findings underscored the pivotal role of leader self-sacrifice in not only fostering stronger bonds among team members but also in enhancing critical facets of group dynamics, ultimately contributing to the overall effectiveness and success of the team.
Keywords: cohesion, leadership, self-sacrifice, sports organizations, team-identification
Procedia PDF Downloads 46
31696 Neural Network Monitoring Strategy of Cutting Tool Wear of Horizontal High Speed Milling
Authors: Kious Mecheri, Hadjadj Abdechafik, Ameur Aissa
Abstract:
The wear of the cutting tool degrades product quality in manufacturing processes, so online monitoring of the cutting tool wear level is necessary to prevent deterioration of machining quality. Unfortunately, there is no direct way to measure cutting tool wear online. Consequently, an indirect method must be adopted, in which wear is estimated from the measurement of one or more physical parameters appearing during the machining process, such as the cutting force, vibrations, or acoustic emission. In this work, a neural network system is developed to estimate flank wear from cutting force measurements and the cutting conditions.
Keywords: flank wear, cutting forces, high speed milling, signal processing, neural network
Procedia PDF Downloads 393
31695 Analytical Terahertz Characterization of In0.53Ga0.47As Transistors and Homogenous Diodes
Authors: Abdelmadjid Mammeri, Fatima Zohra Mahi, Luca Varani, H. Marinchoi
Abstract:
We propose an analytical model for the admittance and noise calculations of an InGaAs transistor and diode. The development of the small-signal admittance takes into account the longitudinal and transverse electric fields through a pseudo two-dimensional approximation of the Poisson equation. The frequency dependence of the small-signal admittance response is determined by the total currents and by the matrix relation of the potentials between the gate and drain terminals. The noise is evaluated using the real part of the transistor/diode admittance under a small-signal perturbation. The analytical results show that the admittance spectrum exhibits a series of resonant peaks corresponding to the excitation of plasma waves. The appearance of the resonances is discussed and analyzed as a function of channel length and temperature. The model can be used, on one hand, to control the appearance of the plasma resonances and, on the other hand, to give significant information about the frequency dependence of the noise in the InGaAs transistor and diode.
Keywords: InGaAs transistors, InGaAs diode, admittance, resonant peaks, plasma waves, analytical model
Procedia PDF Downloads 316
31694 Image Processing and Calculation of NGRDI Embedded System in Raspberry
Authors: Efren Lopez Jimenez, Maria Isabel Cajero, J. Irving-Vasqueza
Abstract:
The use and processing of digital images have opened up new opportunities for solving problems of various kinds, such as the calculation of different vegetation indexes, which can, among other things, differentiate healthy vegetation from humid vegetation. However, obtaining the images from which these indexes are calculated is still the subject of active research. In the present work, we propose to obtain these images using a low-cost embedded system (Raspberry Pi) and to process them using the open-source library OpenCV, in order to obtain the Normalized Red-Green Difference Index (NGRDI).
Keywords: Raspberry Pi, vegetation index, Normalized Red-Green Difference Index (NGRDI), OpenCV
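The index itself is a per-pixel normalized difference of the green and red channels, NGRDI = (G - R) / (G + R). A minimal sketch of the computation on an OpenCV-style BGR array (pure NumPy here; cv2.imread, which returns channels in B, G, R order, is only mentioned in the comment and not required to run):

```python
import numpy as np

def ngrdi(bgr):
    """Per-pixel NGRDI = (G - R) / (G + R) from a BGR image array
    (OpenCV's cv2.imread returns channels in B, G, R order).
    A small epsilon avoids division by zero on black pixels."""
    img = bgr.astype(np.float64)
    g, r = img[..., 1], img[..., 2]
    return (g - r) / (g + r + 1e-9)

# Synthetic 1x2 "image": one green-dominant pixel, one red-dominant pixel.
pixels = np.array([[[0, 200, 100], [0, 100, 200]]], dtype=np.uint8)
```

Green-dominant (vegetated) pixels give positive values, red-dominant ones negative, which is what makes the index usable for vegetation mapping.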
Procedia PDF Downloads 291
31693 3D Electromagnetic Mapping of the Signal Strength in Long Term Evolution Technology in the Livestock Department of ESPOCH
Authors: Cinthia Campoverde, Mateo Benavidez, Victor Arias, Milton Torres
Abstract:
This article focuses on 3D electromagnetic mapping of the intensity of the signal received by a mobile antenna within the open areas of the Department of Livestock of the Escuela Superior Politecnica de Chimborazo (ESPOCH), located in the city of Riobamba, Ecuador. The transmitting antenna belongs to the mobile telephone company "TUENTI" and is analyzed in the 2 GHz band, operating at a frequency of 1940 MHz using Long Term Evolution (LTE). Signal strength data in the area were measured empirically using the "Network Cell Info" application. A total of 170 samples were collected, distributed over 19 concentric circles around the base station. Three campaigns were carried out at the same time of day, with similar traffic, and average values were obtained at each point, ranging from -65.33 dBm to -101.67 dBm. The two virtualization tools used were SketchUp and Unreal. Finally, the virtualized environment was visualized through virtual reality using Oculus 3D glasses, where the power levels are displayed according to a range of powers.
Keywords: reception power, LTE technology, virtualization, virtual reality, power levels
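When per-point averages like those above are computed, received powers in dBm should be averaged on a linear (milliwatt) scale rather than arithmetically, since dBm is a logarithmic unit. A short sketch of that conversion (our own helper, not the paper's code):

```python
import math

def average_dbm(samples_dbm):
    """Average received-power samples given in dBm by converting each to
    milliwatts (P_mW = 10**(P_dBm/10)), averaging linearly, and
    converting the mean back to dBm."""
    mw = [10.0 ** (p / 10.0) for p in samples_dbm]
    return 10.0 * math.log10(sum(mw) / len(mw))
```

For example, averaging -60 dBm and -80 dBm linearly gives about -63 dBm, noticeably different from the arithmetic mean of -70 dBm, because the stronger sample dominates the linear sum.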
Procedia PDF Downloads 90