Search results for: graphics processing units
1141 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis
Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha
Abstract:
Face recognition systems find many applications in surveillance and human-computer interaction. As these applications are of much importance and demand high accuracy, the face recognition system is expected to be more robust with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class space and maximize the inter-class space. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approaches is evaluated using the AT&T database, the MIT-India face database and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier
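As a rough illustration of the classification stage described in the abstract, the k-NN step can be sketched over already-extracted feature vectors; the Gabor filtering and LDA projection are assumed to have been done beforehand, and the feature values and subject labels below are invented:

```python
from collections import Counter
from math import dist

def knn_classify(test_feat, train_feats, train_labels, k=3):
    """Majority vote among the k nearest training vectors (Euclidean distance)."""
    ranked = sorted(range(len(train_feats)),
                    key=lambda i: dist(train_feats[i], test_feat))
    votes = [train_labels[i] for i in ranked[:k]]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical low-dimensional features after Gabor filtering + LDA projection
train = [[0.10, 0.20], [0.15, 0.22], [0.90, 0.80], [0.85, 0.82]]
labels = ["subject_A", "subject_A", "subject_B", "subject_B"]
print(knn_classify([0.12, 0.21], train, labels))  # subject_A
```

In the paper's pipeline, each row of `train` would instead hold the LDA-projected Gabor responses of a training face image.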
Procedia PDF Downloads 467
1140 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality
Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo
Abstract:
Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance and giving them a scientific interpretation, without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only the execution of mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision-making, a model eliciting activity (MEA) is reported in this work. The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should pose a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed considering the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The analysis of the way in which the students sought to solve the activity was made using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population through the sample data; third, viewing the sample as an isolated event and not as part of a random process that must be viewed in the context of a probability distribution; and fourth, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after an intervention, allows for the modification of the interventions and the MEA, in such a way that the students may themselves identify erroneous solutions when carrying out the MEA. The MEA also proved to be democratic, since several students who had participated little and had low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and understanding of fundamental concepts such as sample, population, and probability distribution.
Keywords: linear model, models and modeling, probability, randomness, sample
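The probabilistic core of such a decision task, judging sample evidence against a binomial model rather than a linear one, can be sketched as follows; the defect scenario, sample size, and threshold are invented for illustration (the activity itself used RStudio, but the same computation is shown here in Python):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

# Hypothetical decision: with a 5% defect rate, how surprising is it to
# observe 3 or more defective items in a sample of 20?
tail = binom_tail(3, 20, 0.05)
print(round(tail, 4))  # ~0.0755, so not a rare event at a 5% threshold
```

The point of the modeling step is exactly this shift: the sample count is read through a probability distribution rather than extrapolated linearly.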
Procedia PDF Downloads 118
1139 Magnetohydrodynamic Flow of Viscoelastic Nanofluid and Heat Transfer over a Stretching Surface with Non-Uniform Heat Source/Sink and Non-Linear Radiation
Authors: Md. S. Ansari, S. S. Motsa
Abstract:
In this paper, an analysis has been made of the flow of a non-Newtonian viscoelastic nanofluid over a linearly stretching sheet under the influence of a uniform magnetic field. Heat transfer characteristics are analyzed taking into account the effects of nonlinear radiation and a non-uniform heat source/sink. The transport equations contain the simultaneous effects of Brownian motion and thermophoretic diffusion of nanoparticles. The relevant partial differential equations are non-dimensionalized and transformed into ordinary differential equations by using appropriate similarity transformations. The transformed, highly nonlinear ordinary differential equations are solved by the spectral local linearisation method. The numerical convergence, error and stability analysis of the iteration schemes are presented. The effects of different controlling parameters, namely radiation, space- and temperature-dependent heat source/sink, Brownian motion, thermophoresis, viscoelasticity, Lewis number and the magnetic force parameter, on the flow field, heat transfer characteristics and nanoparticle concentration are examined. The present investigation has many industrial and engineering applications in the fields of coatings and suspensions, cooling of metallic plates, oils and grease, paper production, coal-water or coal-oil slurries, heat exchanger technology, and materials processing.
Keywords: magnetic field, nonlinear radiation, non-uniform heat source/sink, similar solution, spectral local linearisation method, Rosseland diffusion approximation
Procedia PDF Downloads 372
1138 BiFeO3-CoFe2O4-PbTiO3 Composites: Structural, Multiferroic and Optical Characteristics
Authors: Nidhi Adhlakha, K. L. Yadav
Abstract:
Three-phase magnetoelectric (ME) composites (1-x)(0.7BiFeO3-0.3CoFe2O4)-xPbTiO3 (or equivalently (1-x)(0.7BFO-0.3CFO)-xPT) with x = 0, 0.30, 0.35, 0.40, 0.45 and 1.0 were synthesized using a hybrid processing route. The effects of PT addition on the structural, multiferroic and optical properties have subsequently been investigated. A detailed Rietveld refinement analysis of the X-ray diffraction patterns has been performed, which confirms the presence of the structural phases of the individual constituents in the composites. Field emission scanning electron microscopy (FESEM) images were taken for microstructural analysis and grain size determination. Transmission electron microscopy (TEM) analysis of 0.3CFO-0.7BFO reveals the average particle size to lie in the window of 8-10 nm. The temperature-dependent dielectric constant at various frequencies (1 kHz, 10 kHz, 50 kHz, 100 kHz and 500 kHz) has been studied, and the dielectric study reveals an increase in the dielectric constant and a decrease in the average dielectric loss of the composites with the incorporation of PT content. The room-temperature ferromagnetic behavior of the composites is confirmed through the observation of magnetization vs. magnetic field (M-H) hysteresis loops. The variation of magnetization with temperature indicates the presence of spin-glass behavior in the composites. Magnetoelectric coupling is evidenced in the composites through the observation of the dependence of the dielectric constant on the magnetic field, and a magnetodielectric response of 2.05% is observed for 45 mol% addition of PT content. The fractional change of the magnetic-field-induced dielectric constant can also be expressed as ∆ε_r ~ γM², and the value of γ is found to be ~1.08×10⁻² (emu/g)⁻² for the composite with x = 0.40. Fourier transform infrared (FTIR) spectroscopy of the samples was carried out to analyze the various bonds formed in the composites.
Keywords: composite, X-ray diffraction, dielectric properties, optical properties
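The quoted relation ∆ε_r ~ γM² amounts to a straight-line fit of the fractional dielectric change against M²; a minimal sketch with synthetic data (the magnetization values are invented, and γ is seeded with the order of magnitude reported in the abstract, not real measurements):

```python
# Synthetic magnetization values M (emu/g) and fractional dielectric
# changes generated from an assumed gamma; real data would be measured.
gamma_true = 1.08e-2
M = [0.5 + 0.5 * i for i in range(10)]          # 0.5 ... 5.0 emu/g
d_eps = [gamma_true * m**2 for m in M]

# Least-squares slope through the origin of d_eps vs M^2 recovers gamma
x = [m**2 for m in M]
gamma_fit = sum(xi * yi for xi, yi in zip(x, d_eps)) / sum(xi * xi for xi in x)
print(gamma_fit)
```

With measured (M, ∆ε_r) pairs, the same slope estimate gives the magnetodielectric coefficient γ in (emu/g)⁻².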
Procedia PDF Downloads 308
1137 High Temperature Deformation Behavior of Al0.2CoCrFeNiMo0.5 High Entropy Alloy
Authors: Yasam Palguna, Rajesh Korla
Abstract:
The efficiency of thermally operated systems can be improved by increasing the operating temperature, thereby decreasing fuel consumption and carbon footprint. Hence, there is a continuous need to replace existing materials with new alloys with higher-temperature working capabilities. During the last decade, multi-principal-element alloys, commonly known as high entropy alloys, have been getting more attention because of their superior high-temperature strength along with good high-temperature corrosion and oxidation resistance. The present work focuses on the microstructure and high-temperature tensile behavior of Al0.2CoCrFeNiMo0.5 high entropy alloy (HEA). Wrought Al0.2CoCrFeNiMo0.5 high entropy alloy, produced by vacuum induction melting followed by thermomechanical processing, was tested in the temperature range of 200 to 900 °C. It exhibits very good resistance to softening with increasing temperature up to 700 °C; thereafter there is a rapid decrease in strength, especially beyond 800 °C, which may be due to the simultaneous occurrence of recrystallization and precipitate coarsening. Further, it exhibits superplastic-like behavior, with a uniform elongation of ~275% at 900 °C and a strain rate of 1×10⁻³ s⁻¹, which may be due to the presence of fine, stable equiaxed grains. A strain rate sensitivity of 0.3 was observed, suggesting that solute-drag dislocation glide might be the active mechanism during the superplastic-like deformation. The post-deformation microstructure suggests that cavitation at the sigma phase-matrix interface is the failure mechanism during high-temperature deformation. Finally, the high-temperature properties of the present alloy will be compared with contemporary high-temperature materials such as ferritic and austenitic steels and superalloys.
Keywords: high entropy alloy, high temperature deformation, super plasticity, post-deformation microstructures
Procedia PDF Downloads 165
1136 3D Medical Printing the Key Component in Future of Medical Applications
Authors: Zahra Asgharpour, Eric Renteria, Sebastian De Boodt
Abstract:
There is a growing trend towards the personalization of medical care, as evidenced by the emphasis on outcomes-based medicine, the latest developments in CT and MR imaging, and personalized treatment in a variety of surgical disciplines. 3D printing has been introduced and applied in the medical field since 2000. The first applications were in the field of dental implants and custom prosthetics. According to recent publications, 3D printing in the medical field has been used in a wide range of applications, which can be organized into several categories including implants, prosthetics, anatomical models and tissue bioprinting. Some of these categories are still at the proof-of-concept stage, while others, such as the design and manufacturing of customized implants and prostheses, are in the application phase. The 3D printing approach in this category has been successfully used in the health care sector to make both standard and complex implants within a reasonable amount of time. In this study, some of the clinical applications of 3D printing in the design and manufacturing of a patient-specific hip implant are explained. In cases where patients have complex bone geometries or are undergoing a complex hip replacement revision, traditional surgical methods are not efficient, and hence these patients require patient-specific approaches. There are major advantages in using this new technology for medical applications; however, in order for it to be widely accepted in the medical device industry, more acceptance needs to be gained from the medical device regulatory offices. This challenge is being addressed and will ultimately help the technology become an accepted manufacturing method for the medical device industry on an international scale.
The discussion will conclude with some examples describing the future directions of 3D medical printing.
Keywords: CT/MRI, image processing, 3D printing, medical devices, patient specific implants
Procedia PDF Downloads 298
1135 Development of Electrospun Porous Carbon Fibers from Cellulose/Polyacrylonitrile Blend
Authors: Zubair Khaliq, M. Bilal Qadir, Amir Shahzad, Zulfiqar Ali, Ahsan Nazir, Ali Afzal, Abdul Jabbar
Abstract:
Carbon fibers are among the most in-demand materials on earth due to their potential applications in energy, high-strength materials, and conductive materials. The nanostructure of carbon fibers offers enhanced conductivity due to the larger surface area. Next-generation carbon nanofibers demand a porous structure, as it offers still more surface area. Multiple techniques are used to produce carbon fibers; however, electrospinning followed by carbonization of the polymeric materials is an easy process to carry out on a laboratory scale. It also offers great flexibility in changing parameters to acquire the desired properties of carbon fibers. Polyacrylonitrile (PAN) is the most widely used material for the production of carbon fibers due to its favorable processing parameters. Cellulose is also one of the highest-yield producers of carbon fibers; however, the electrospinning of cellulosic materials is difficult due to cellulose's rigid chain structure. The combination of PAN and cellulose can offer a suitable solution for the production of carbon fibers. Both materials are miscible in a mixed solvent of N,N-dimethylacetamide and lithium chloride. This study focuses on the production of porous carbon fibers as a function of PAN/cellulose blend ratio, solution properties, and electrospinning parameters. The single polymers and blends with different ratios were electrospun to give fine fibers. Higher amounts of cellulose made the electrospinning of nanofibers more difficult. After carbonization, the carbon fibers were studied in terms of their blend ratio, surface area, and texture. The cellulose content gave the carbon fibers a porous structure, and the presence of LiCl also contributed to the porous structure.
Keywords: cellulose, polyacrylonitrile, carbon nanofibers, electrospinning, blend
Procedia PDF Downloads 202
1134 BiLex-Kids: A Bilingual Word Database for Children 5-13 Years Old
Authors: Aris R. Terzopoulos, Georgia Z. Niolaki, Lynne G. Duncan, Mark A. J. Wilson, Antonios Kyparissiadis, Jackie Masterson
Abstract:
As word databases for bilingual children are not available, researchers, educators and textbook writers must rely on monolingual databases. The aim of this study is thus to develop a bilingual word database, BiLex-kids, an online open-access developmental word database for 5-13-year-old bilingual children who learn Greek as a second language and have English as their dominant language. BiLex-kids is compiled from 120 Greek textbooks used in Greek-English bilingual education in the UK, USA and Australia, and provides word translations in the two languages, pronunciations in Greek, and psycholinguistic variables (e.g. Zipf value, frequency per million, dispersion, contextual diversity, neighbourhood size). After clearing the textbooks of non-relevant items (e.g. punctuation), algorithms were applied to extract the psycholinguistic indices for all words. As well as one total lexicon, the database produces values for all ages (one lexicon for each age) and for three age bands (one lexicon per age band: 5-8, 9-11 and 12-13 years). BiLex-kids provides researchers with accurate figures for a wide range of psycholinguistic variables, making it a useful and reliable research tool for selecting stimuli to examine lexical processing among bilingual children. In addition, it offers children the opportunity to study word spelling, learn translations and listen to pronunciations in their second language. It further benefits educators in selecting age-appropriate words for teaching reading and spelling, while special educational needs teachers will have a resource for controlling the content of word lists when designing interventions for bilinguals with literacy difficulties.
Keywords: bilingual children, psycholinguistics, vocabulary development, word databases
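For context, the Zipf value listed among the psycholinguistic variables is conventionally computed as log10 of a word's frequency per million words plus 3 (the van Heuven et al. Zipf scale); assuming that convention is the one used here, the derivation from raw corpus counts can be sketched with invented numbers:

```python
from math import log10

def zipf_value(count, corpus_size):
    """Zipf scale: log10(frequency per million words) + 3."""
    per_million = count / corpus_size * 1_000_000
    return log10(per_million) + 3

# Hypothetical word occurring 200 times in a 2-million-word textbook corpus:
# 100 occurrences per million -> Zipf value 5 (a fairly frequent word)
print(round(zipf_value(200, 2_000_000), 2))
```

On this scale, values around 1-3 correspond to low-frequency words and 4-7 to high-frequency words, which is what makes it convenient for stimulus selection.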
Procedia PDF Downloads 311
1133 Overview of Pre-Analytical Lab Errors in a Tertiary Care Hospital at Rawalpindi, Pakistan
Authors: S. Saeed, T. Butt, M. Rehan, S. Khaliq
Abstract:
Objective: To determine the frequency of pre-analytical errors in samples taken from patients for various lab tests at Fauji Foundation Hospital, Rawalpindi. Material and Methods: All lab specimens received for diagnostic purposes from Fauji Foundation Hospital, Rawalpindi indoor and outdoor patients were included. The total number of samples received in the lab was recorded in the computerized program made for the hospital. All errors observed in the pre-analytical process, including patient identification, sampling techniques, test collection procedures, and specimen transport/processing and storage, were recorded in the log book kept for the purpose. Results: A total of 476,616 specimens were received in the lab during the period of study, including 237,931 from outdoor and 238,685 from indoor patients. Forty-one percent of the samples (n=197,976) revealed pre-analytical discrepancies. The discrepancies included hemolyzed samples (34.8%), clotted blood (27.8%), incorrect samples (17.4%), unlabeled samples (8.9%), insufficient specimens (3.9%), request forms without an authorized signature (2.9%), empty containers (3.9%) and tube breakage during centrifugation (0.8%). Most of these pre-analytical discrepancies were observed in samples received from the wards, revealing inappropriate sample collection by the medical staff of the wards, as most outdoor samples are collected by lab staff who are properly trained in sample collection. Conclusion: It is mandatory to educate phlebotomists and paramedical staff, particularly those performing duties in the wards, regarding the timing and techniques of sampling, the appropriate containers to use, and early delivery of samples to the lab in order to reduce pre-analytical errors.
Keywords: pre-analytical lab errors, tertiary care hospital, hemolyzed, paramedical staff
Procedia PDF Downloads 204
1132 A Fact-Finding Analysis on the Expulsions Made under Title 42 in the US
Authors: Avi Shrivastava
Abstract:
Title 42, an emergency health decree, has forced the federal authorities to turn away asylum seekers and all other border crossers since last year. When Title 42 was first deployed in immigration detention centers, where many migrants are held when they arrive at the U.S.-Mexico border, the Trump administration embraced it as a strategy. The expulsions policy and the new border challenges are examined with regard to Title 42 concerns. Humanitarian measures for refugees arriving at the US-Mexico border are the focus of this article. To a large extent, this article addresses the implications of the United States' use of Title 42 in expelling refugees and the possible ramifications of doing away with it. A secondary data collection strategy was used to gather the information for this study, allowing the researchers to examine a large number of previously collected data sets. Information about Title 42 may be found in a variety of places, such as scholarly publications, newspapers, books, and the internet. The inquiry employed qualitative and explanatory research approaches. Some 1.7 million individuals were expelled from the country as a result of the policy. Since CBP and ICE were limited in their ability to process deportees, a largely random, patchwork technique was employed in selecting the individuals to be expelled. As a consequence, repeat crossers, particularly those who were single, faced reduced consequences. If expulsions are halted, the government will be compelled to focus on long-overdue but vital border enhancements. Title 42 provisions may help expedite the processing of asylum and other types of humanitarian relief. The government is prepared for an increase in arrivals, but ending the program could lead to a return to the arrival levels seen before the Title 42 period.
Keywords: migrants, refugees, title 42, medical, trump administration
Procedia PDF Downloads 87
1131 Sound Absorbing and Thermal Insulating Properties of Natural Fibers (Coir/Jute) Hybrid Composite Materials for Automotive Textiles
Authors: Robel Legese Meko
Abstract:
Converting end-of-life textiles into new textile products has become a well-proven and effective way of processing natural fibers. Nowadays, the resources needed to make virgin synthetic fibers are becoming scarcer as the world population rises. Hence it is necessary to develop processes to fabricate textiles that are easily converted into composite materials. Acoustic comfort is closely related to the concept of sound absorption and includes protection against noise. This research paper presents an experimental study of the sound absorption coefficients of natural fiber composite materials: natural fibers (coir/jute) in different blend proportions, mixed with rigid polyurethane foam as a binder. The natural fiber composite materials were characterized both acoustically (sound absorption coefficient, SAC) and in terms of heat transfer (thermal conductivity). The acoustic absorption coefficient was determined using the impedance tube method according to the ASTM standard (ASTM E1050). The influence of the structure of these materials on their sound-absorbing properties was analyzed. The experimental results signify that the porous natural coir/jute composites possess excellent performance in the absorption of high-frequency sound waves, especially above 2000 Hz, and did not induce a significant change in the thermal conductivity of the composites. Thus, the sound absorption performance of natural fiber composites based on coir/jute fiber materials promotes environmentally friendly solutions.
Keywords: coir/jute fiber, sound absorption coefficients, compression molding, impedance tube, thermal insulating properties, SEM analysis
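For context, in the impedance tube method the normal-incidence sound absorption coefficient at each frequency follows from the complex pressure reflection coefficient R as α = 1 − |R|²; that final step can be sketched as follows, with invented reflection values standing in for ones derived from the two-microphone transfer function of ASTM E1050:

```python
def absorption_coefficient(R):
    """Normal-incidence sound absorption coefficient: alpha = 1 - |R|^2."""
    return 1.0 - abs(R) ** 2

# Hypothetical complex reflection coefficients at a few test frequencies (Hz);
# smaller |R| (less reflected energy) means higher absorption
for freq, R in [(500, 0.8 + 0.1j), (2000, 0.3 + 0.2j), (4000, 0.1 + 0.1j)]:
    print(freq, round(absorption_coefficient(R), 3))
```

The abstract's observation of strong absorption above 2000 Hz corresponds to |R| shrinking at high frequencies for the porous composites.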
Procedia PDF Downloads 110
1130 Water Management of Erdenet Mining Company
Authors: K. H. Oyuntungalag, Scott Kenner, O. Erdenetuya
Abstract:
The life cycle phases of mining projects are described in this guidance document and include the initial phases (exploration, feasibility and planning), mine development (construction and operations), and closure and reclamation. The initial phases relate to field programs and desktop studies intended to build the data and knowledge base, including the design of the water management infrastructure developed during these phases. A water and mass balance model is essential to demonstrate that the water management plan (WMP) will provide adequate water for mine operations and sufficient capacity for anticipated flows and volumes, and will minimize environmental impacts on the receiving environment. The model must cover the whole mine life cycle, from the start of mine development to a date sufficiently far in the future that the reclaimed landscape is considered self-sustaining following complete closure of the mine (i.e., post-closure). The model simulates the movement of water within the components of the water management infrastructure and the project operating areas, and calculates the chemical loadings to each mine component. At Erdenet Mining Company, an initial water balance model reflecting the tailings dam, groundwater seepage and mine process water was developed in collaboration with Dr. Scott Kenner (visiting Fulbright scholar). From this preliminary study, the following recommendations were made: 1. develop a detailed groundwater model to simulate seepage from the tailings dam; 2. establish an evaporation pan to improve evapotranspiration estimates; and 3. measure changes in the storage of water within the tailings dam and the other water storage components within mine processing.
Keywords: evapotranspiration, monitoring program, Erdenet mining, tailings dam
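The recommended storage-change measurements feed a basic water balance, ΔS = inflows − outflows − evaporation − seepage, evaluated per time step; a minimal sketch of one such step (the component names and volumes are invented for illustration, not Erdenet data):

```python
def storage_change(inflows, outflows, evaporation, seepage):
    """Change in stored water volume over one time step (same volume units)."""
    return sum(inflows) - sum(outflows) - evaporation - seepage

# Hypothetical monthly volumes in thousand m^3 for a tailings impoundment
delta = storage_change(inflows=[120.0, 35.0],  # process water return, runoff
                       outflows=[90.0],        # reclaim water back to the mill
                       evaporation=18.0,
                       seepage=12.0)
print(delta)  # 35.0 -> net gain in storage this step
```

Chaining such steps over the mine life, with measured rather than assumed terms, is what lets the model demonstrate adequate supply and capacity through post-closure.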
Procedia PDF Downloads 477
1129 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is a measurement of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model beforehand, and (2) making real-time adjustments of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then the top-rated wells, as a function of high ROP, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase is concluded by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology.
These minor incremental variations reveal new drilling conditions not explored before through the offset wells. The data are then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. These factors position the system to work for any newly drilled well in a developing field.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
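The conditioned-mean step in phase one can be sketched with a plain IDW interpolation; the offset-well coordinates and weight-on-bit values below are invented for illustration:

```python
def idw(target, points, values, power=2):
    """Inverse Distance Weighting: weighted mean of offset-well values,
    with weights 1 / distance^power to the target location."""
    weights = []
    for i, (x, y) in enumerate(points):
        d = ((x - target[0]) ** 2 + (y - target[1]) ** 2) ** 0.5
        if d == 0:
            return values[i]  # target coincides with an offset well
        weights.append(1.0 / d ** power)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hypothetical offset wells: (x, y) locations and their best-practice WOB (klbf)
wells = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
wob = [25.0, 35.0, 30.0]
print(round(idw((1.0, 1.0), wells, wob), 2))  # equidistant wells -> 30.0
```

Nearer offset wells dominate the estimate, which is the "conditioned on physical distance" behavior the abstract describes; the same call would be repeated for RPM and GPM.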
Procedia PDF Downloads 131
1128 Coherent All-Fiber and Polarization Maintaining Source for CO2 Range-Resolved Differential Absorption Lidar
Authors: Erwan Negre, Ewan J. O'Connor, Juha Toivonen
Abstract:
The need for CO2 monitoring technologies grows along with worldwide concern over environmental challenges. To that purpose, we developed a compact, coherent, all-fiber range-resolved differential absorption lidar (RR-DIAL). It is built around a tunable 2×1 fiber-optic switch, set to a frequency of 1 Hz, between two Distributed FeedBack (DFB) lasers emitting in continuous-wave mode at 1571.41 nm (a CO2 absorption line) and 1571.25 nm (a CO2 absorption-free line), with a linewidth of 1 MHz and a tuning range of 3 nm over the operating wavelength. Three amplification stages through erbium and erbium-ytterbium doped fibers, coupled to a radio-frequency (RF) driven acousto-optic modulator (AOM), generate 100 ns pulses at a repetition rate from 10 to 30 kHz with a peak power up to 2.5 kW and a spatial resolution of 15 m, allowing fast and highly resolved CO2 profiles. The same afocal collection system is used for the output of the laser source and for the backscattered light, which is directed to a circulator before being mixed with the local oscillator for heterodyne detection. Packaged in an easily transportable box that also includes a server and a Field Programmable Gate Array (FPGA) card for on-line data processing and storage, our setup allows effective and quick deployment for versatile in-situ analysis, whether vertical atmospheric monitoring, large-field mapping or continuous oversight of sequestration sites. The setup's operation and results from initial field measurements will be discussed.
Keywords: CO2 profiles, coherent DIAL, in-situ atmospheric sensing, near infrared fiber source
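For context, a range-resolved DIAL retrieval combines the on-line and off-line returns at two ranges through the standard two-wavelength ratio, N = ln[(P_off(R2)·P_on(R1)) / (P_on(R2)·P_off(R1))] / (2·Δσ·ΔR); a sketch with invented return powers and an assumed differential absorption cross-section (not values from this instrument):

```python
from math import log

def dial_number_density(p_on_1, p_on_2, p_off_1, p_off_2,
                        delta_sigma, delta_range):
    """Standard DIAL retrieval of mean number density between ranges R1, R2:
    N = ln((P_off(R2) * P_on(R1)) / (P_on(R2) * P_off(R1)))
        / (2 * delta_sigma * delta_range)."""
    ratio = (p_off_2 * p_on_1) / (p_on_2 * p_off_1)
    return log(ratio) / (2.0 * delta_sigma * delta_range)

# Hypothetical backscatter powers (arbitrary units) over one 15 m range gate,
# with a differential cross-section assumed purely for illustration (m^2)
N = dial_number_density(p_on_1=1.00, p_on_2=0.60,
                        p_off_1=1.00, p_off_2=0.70,
                        delta_sigma=5e-27, delta_range=15.0)
print(N)  # molecules per m^3 in this range gate
```

The stronger decay of the on-line return relative to the off-line one over the gate is what encodes the CO2 density; system constants cancel in the ratio, which is why the two wavelengths share the same collection optics.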
Procedia PDF Downloads 128
1127 Research the Causes of Defects and Injuries of Reinforced Concrete and Stone Construction
Authors: Akaki Qatamidze
Abstract:
Implementation of the project will be a step forward for the reliability, improvement and development of construction in Georgia. Completion of the project is expected to result in a complete body of knowledge, expressed as a procedure for assessing the technical condition of reinforced concrete and stone structures. The method is based on a detailed examination of the structure in order to establish its injuries, and on the possibility of changing the structural scheme to meet new requirements while preserving the architecture. In the research project on reinforced concrete and stone structures, systematic analysis is the important approach for optimizing the research process and developing new knowledge in neighboring areas. In addition, the problem calls for the rational agreement of physical and mathematical models: the main pillar is physical (in-situ) data together with mathematical calculation models, with physical experiments used only for the specification and verification of the calculation model. To enhance the effectiveness of research into the causes of defects and failures of reinforced concrete and stone construction, to maximize its automation and to reduce the expenditure of resources, a methodological concept based on system analysis is recommended; as one of the major particularities of modern science and technology, it will allow all families of structures to be identified through the same work stages and procedures, which makes it possible to exclude subjectivity and address the problem in the optimal direction. The methodology of the project thus establishes a major step forward in the construction trades and offers practical assistance to engineers, supervisors, and technical experts in settling construction problems.
Keywords: building, reinforced concrete, expertise, stone structures
Procedia PDF Downloads 336
1126 English Writing Anxiety in Debate Writing among Japanese Senior High School EFL Learners: Sources, Effects and Implication
Authors: Maria Lita Sudo
Abstract:
Debate is an effective tool for cultivating critical thinking skills in English classes. It involves writing evidence-based arguments about a resolution in the form of a constructive speech, which is then attacked and defended in oral discussion. In the process of writing, EFL learners may experience anxiety, an emotional problem that affects writing achievement and cognitive processing. Thus, this study explored the sources and effects of English writing anxiety in the context of debate writing, with a view to providing EFL teachers with pedagogical suggestions for alleviating it. The participants of this study are 95 Japanese senior high school EFL learners and 3 Japanese senior high school English teachers. In selecting the participants, opportunity sampling was employed and consent from the Japanese English teachers was sought. Data were collected through (1) observation, (2) an open-ended questionnaire, and (3) semi-structured interviews. This study revealed that not all teachers of English in this context recognize the existence of English writing anxiety among their students, and that the very nature of debate may itself be a source of English writing anxiety in debate writing. The interviews revealed that English writing anxiety affects students' ability to retrieve L2 vocabulary. Further, this study revealed different sources of writing anxiety in debate writing, which can be grouped into four main categories: (1) L2 linguistic ability-related factors, (2) instruction-related factors, (3) interpersonal factors, and (4) debate-related factors. Based on the findings, recommendations for EFL teachers and EFL learners in managing writing anxiety in debate writing are provided.
Keywords: debate, EFL learners, English writing anxiety, sources
Procedia PDF Downloads 137
1125 Coal Preparation Plant: Technology Overview and New Adaptations
Authors: Amit Kumar Sinha
Abstract:
A coal preparation plant typically operates with multiple beneficiation circuits to process individual size fractions of coal obtained from the mine so that the targeted overall plant efficiency, in terms of yield and ash, is achieved. Conventional coal beneficiation plants in India and overseas generally operate with two methods of processing: coarse beneficiation, with treatment in dense medium cyclones or baths, and fines beneficiation, with treatment in flotation cells. This paper addresses the proven application of an intermediate circuit, alongside the coarse and fines circuits, in the Jamadoba New Coal Preparation Plant of capacity 2 Mt/y, where -0.5 mm +0.25 mm particles are treated in a reflux classifier. Previously, this size fraction was treated directly in the flotation cell, which had operational and metallurgical limitations that are discussed briefly in this paper. The paper also details test work performed on representative samples of TSL coal washeries to determine the top size of the intermediate and fines circuits, and discusses the overlapping process of the intermediate circuit and why it is suitable for beneficiating misplaced particles from the coarse and fines circuits. It further compares the separation efficiency (Ep) of various intermediate circuit process equipment and seeks to validate the use of the reflux classifier over fine-coal dense medium cyclones or spirals. An overview of a modern coal preparation plant treating Indian coal, especially Washery Grade IV coal, with reference to the Jamadoba New Coal Preparation Plant commissioned in 2018, is also outlined, covering the basis of equipment selection, plant profile, application of the reflux classifier in the intermediate circuit, and process design criteria.
Keywords: intermediate circuit, overlapping process, reflux classifier
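The separation efficiency (Ecart probable, Ep) used to compare intermediate circuit equipment is conventionally taken as half the spread between the relative densities at which 75% and 25% of the feed reports to the float (clean coal) product on the partition (Tromp) curve. A minimal sketch under that convention, using hypothetical partition data rather than the paper's measurements:

```python
import numpy as np

def ecart_probable(densities, partition_pct):
    """Estimate Ep = (d25 - d75) / 2 from a partition (Tromp) curve.

    densities     -- relative densities of the fractions (ascending)
    partition_pct -- % of feed at each density reporting to the float
                     (clean coal) product, so it decreases with density
    """
    # np.interp needs ascending x, so flip both arrays.
    d75 = np.interp(75.0, partition_pct[::-1], densities[::-1])
    d25 = np.interp(25.0, partition_pct[::-1], densities[::-1])
    return (d25 - d75) / 2.0

# Idealized partition data (hypothetical, for illustration only)
dens = np.array([1.3, 1.4, 1.5, 1.6, 1.7])
part = np.array([98.0, 90.0, 50.0, 10.0, 2.0])
print(ecart_probable(dens, part))
```

A sharper separation gives a flatter spread between d25 and d75 and hence a smaller Ep, which is the basis on which the reflux classifier is compared against fine-coal DMCs or spirals.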
Procedia PDF Downloads 136
1124 The Impact of Developing an Educational Unit in the Light of Twenty-First Century Skills in Developing Language Skills for Non-Arabic Speakers: A Proposed Program for Application to Students of Educational Series in Regular Schools
Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla
Abstract:
The era of the knowledge explosion in which we live requires us to develop educational curricula quantitatively and qualitatively to adapt to the twenty-first-century skills of critical thinking, problem-solving, communication, cooperation, creativity, and innovation. The process of developing a curriculum is as significant as building it; in fact, developing curricula may be more difficult than building them. Curriculum development includes analyzing needs, setting goals, designing the content and educational materials, creating language programmes, developing teachers, applying the programmes in schools, monitoring and feedback, and then evaluating the language programme resulting from these processes. When we look back at the history of language teaching during the twentieth century, we find that developing the delivery method is the most crucial aspect of change in language teaching doctrines. The concept of a delivery method in teaching is a systematic set of teaching practices based on a specific theory of language acquisition. This is a key consideration, as the process of development must include all the curriculum elements in their comprehensive sense: linguistic and non-linguistic. The various Arabic curricula provide the student with a set of units, each unit consisting of a set of linguistic elements. These elements are often not logically arranged and, more importantly, they neglect essential points while highlighting other, less important ones. Moreover, the educational curricula entail a great deal of monotony in the presentation of content, which makes it hard for the teacher to select adequate content; the teacher often navigates among diverse references to prepare a lesson and hardly finds a suitable one. Similarly, the student often gets bored when learning the Arabic language and fails to make considerable progress in it.
Therefore, the problem is not a lack of curricula; the problem is developing the curriculum, with all its linguistic and non-linguistic elements, in accordance with contemporary challenges and standards for teaching foreign languages. The Arabic library suffers from a lack of references for curriculum development. In this paper, the researcher investigates the elements of development, such as the teacher, content, methods, objectives, evaluation, and activities. Hence, a set of general guidelines in the field of educational development was reached. The paper highlights the need to identify weaknesses in educational curricula, to determine the twenty-first-century skills that must be employed in Arabic education curricula, and to employ foreign language teaching standards in current Arabic curricula. The researcher assumes that the existing series for teaching Arabic to speakers of other languages in regular schools do not address the skills of the twenty-first century, which is what the researcher tries to apply in the proposed unit. The experimental method is the method of this study; it is based on two groups, experimental and control. The development of an educational unit will help build suitable educational series for students of the Arabic language in regular schools, in which twenty-first-century skills and standards for teaching foreign languages will be addressed, making the series more useful and attractive to students.
Keywords: curriculum, development, Arabic language, non-native, skills
Procedia PDF Downloads 84
1123 The Effect of Austenitization Conditioning on the Mechanical Properties of Cr-Mo-V Hot Work Tool Steel with Different Nitrogen Addition
Authors: Iting Chiang, Cheng-Yu Wei, Chin-Teng Kuo, Po-Sheng Hsu, Yo-Lun Yang, Yung-Chang Kang, Chien-Chon Chen, Chih-Yuan Chen
Abstract:
In recent years, it has been reported that microalloying traditional Cr-Mo-V hot work tool steels with nitrogen can achieve better high-temperature mechanical properties, which has led to this metallurgical approach being widely utilized in several commercial advanced hot work tool steels. Although the performance of hot work tool steel can be improved by alloy composition design strategies, the influence of processing parameters on the mechanical properties, especially on the service life of hot work tool steel, is still not fully understood. A longer service life of hot work tool steel decreases the manufacturing cost effectively and has thus become a research hot spot. According to several previous studies, it is generally acknowledged that the service life of hot work tool steels can be increased effectively when the steels possess higher hardness and toughness, because the formation and propagation of microcracks within the steel are then inhibited effectively. Therefore, in the present research, experiments were designed primarily to explore the synergistic effect of nitrogen content and austenitization conditioning on the mechanical properties of hot work tool steels. Regardless of nitrogen content, the results indicated that the hardness of the hot work tool steels increased as the austenitization treatment was executed at higher temperature. On the other hand, an optimum toughness can be achieved when the austenitization treatment is performed within a suitable temperature range. A possible explanation of this metallurgical phenomenon is also proposed and analyzed in the present research.
Keywords: hot work tool steel, Cr-Mo-V, toughness, hardness, TEM
Procedia PDF Downloads 59
1122 Defense Priming from Egg to Larvae in Litopenaeus vannamei with Non-Pathogenic and Pathogenic Bacteria Strains
Authors: Angelica Alvarez-Lee, Sergio Martinez-Diaz, Jose Luis Garcia-Corona, Humberto Lanz-Mendoza
Abstract:
World aquaculture is always looking for improvements to achieve high-yield production while avoiding infection by pathogenic agents. The best way to achieve this is to know the biological model so as to create alternative treatments that can be applied in hatcheries, which results in greater economic gains and improvements in human public health. In the last decade, immunomodulation in shrimp culture with probiotics, organic acids, and different carbon sources has gained great interest, mainly in larval and juvenile stages. Immune priming is associated with a strong protective effect against a later pathogen challenge. This work provides another perspective on immunostimulation from spawning until hatching: the stimulation happens during embryonic development and generates resistance to infection by pathogenic bacteria. Massive spawnings of white shrimp L. vannamei were obtained and placed in experimental units with 700 mL of sterile seawater at 30 °C, salinity of 28 ppm, and continuous aeration, at a density of 8 embryos mL⁻¹. The immunostimulating effect of three dead strains of non-pathogenic bacteria (Escherichia coli, Staphylococcus aureus, and Bacillus subtilis) and of a strain pathogenic to white shrimp (Vibrio parahaemolyticus) was evaluated. The heat-killed strains were adjusted to an optical density of 0.5 at 600 nm and added directly to the seawater of each unit at a ratio of 1/100 (v/v). A control group of embryos without inoculum of dead bacteria was kept under the same physicochemical conditions as the rest of the treatments throughout the experiment and used as reference. The duration of the stimulus was 12 hours; then, the larvae that hatched were collected, counted, and transferred to a new experimental unit (same physicochemical conditions, salinity of 28 ppm) for an infection challenge against the pathogen V. parahaemolyticus, adding directly to the seawater a 1/100 (v/v) amount of the live strain adjusted to an optical density of 0.5 at 600 nm.
Subsequently, 24 h after infection, nauplii survival was evaluated. The results of this work show that, after 24 h, the hatching rates of shrimp embryos immunostimulated with the dead strains of B. subtilis and V. parahaemolyticus are significantly higher compared with the rest of the treatments and the control. Furthermore, survival of L. vannamei after a 24 h infection challenge against the live strain of V. parahaemolyticus is greater (P < 0.05) in larvae immunostimulated during embryonic development with the dead strains of B. subtilis and V. parahaemolyticus, followed by those treated with E. coli. In summary, surface antigens can stimulate the developing cells to promote hatching while permitting normal development, in agreement with the optical observations, and a differential response effect exists between treatments post-infection. This research provides evidence of the immunostimulant effect of dead pathogenic and non-pathogenic bacterial strains on the hatching rate and survival of shrimp L. vannamei during embryonic and larval development. The research continues by evaluating the effect of these dead strains on the expression of genes related to defense priming in larvae of L. vannamei from massive spawnings in hatcheries, before and after the infection challenge against V. parahaemolyticus.
Keywords: immunostimulation, L. vannamei, hatching, survival
Procedia PDF Downloads 142
1121 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System
Authors: Juzhong Tan, William Kerr
Abstract:
Conching is a critical procedure in chocolate processing, in which special flavors are developed and the smooth mouthfeel of the chocolate emerges from particle size reduction of the cocoa mass and other additives. Therefore, determination of the particle size and volatile compound profile of the cocoa mass is important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small and medium-sized chocolate manufacturers. Other alternatives, such as micrometers and microscopy, cannot provide good measurements and yield little information. Volatile compound analysis of cocoa during conching has similar problems due to its high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was inserted into a conching machine and used to monitor the volatile compound profile of chocolate during conching. A model was established by genetic programming that correlates the volatile compound profiles, along with factors including the content of cocoa, sugar, and the temperature during conching, to the particle size of chocolate particles. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at eight conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h). The predictions were compared to laser scattering measurements of the same chocolate samples: 91.3% of the predictions were within ±5% deviation of the laser scattering measurement, and 99.3% were within ±10% deviation.
Keywords: cocoa bean, conching, electronic nose, genetic programming
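Agreement figures such as the ±5% and ±10% rates above can be computed for any prediction set by counting predictions within a relative tolerance of the laser-scattering reference. A minimal sketch with made-up sample values (not the study's data):

```python
def agreement_rate(predicted, measured, tol):
    """Fraction of predictions within +/- tol (relative) of the reference."""
    hits = sum(abs(p - m) <= tol * m for p, m in zip(predicted, measured))
    return hits / len(measured)

# Hypothetical particle sizes in micrometres (illustrative only)
measured  = [45.0, 38.0, 30.0, 25.0, 21.0]
predicted = [46.0, 36.5, 32.5, 24.8, 20.2]
print(agreement_rate(predicted, measured, 0.05))  # within +/-5%
print(agreement_rate(predicted, measured, 0.10))  # within +/-10%
```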
Procedia PDF Downloads 255
1120 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning
Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath
Abstract:
The human brain, among the most complex and mysterious aspects of the body, harbors vast potential for exploration. Unraveling these enigmas, especially within neural perception and cognition, is the realm of neural decoding. This work harnesses advancements in generative AI, particularly in visual computing, to elucidate how the brain comprehends the visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli from Functional Magnetic Resonance Imaging (fMRI) data, which is processed through pre-trained deep-learning models to recreate the stimuli. Introducing a new architecture named LatentNeuroNet, the aim is to achieve the utmost semantic fidelity in stimulus reconstruction. The approach employs a Latent Diffusion Model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating superior quality outputs. This addresses the limitations of prior methods, such as GANs, which are known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the brain's ventral visual cortex region; this extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before it is injected into the denoising process. In conclusion, an architecture is developed that successfully reconstructs the perceived visual stimuli, and this research provides enough evidence to identify the most influential regions of the brain responsible for cognition and perception.
Keywords: BLIP, fMRI, latent diffusion model, neural perception
Procedia PDF Downloads 68
1119 Design of Bacterial Pathogens Identification System Based on Scattering of Laser Beam Light and Classification of Binned Plots
Authors: Mubashir Hussain, Mu Lv, Xiaohan Dong, Zhiyang Li, Bin Liu, Nongyue He
Abstract:
Detection and classification of microbes have a vast range of applications in biomedical engineering, especially in the detection, characterization, and quantification of bacterial contaminants. For the identification of pathogens, different techniques are emerging in the field of biomedical engineering; the latest technology uses light scattering and is capable of identifying different pathogens without any need for biochemical processing. In the Bacterial Pathogens Identification System (BPIS), a laser beam passes through the sample and light scatters off it; an assembly of photodetectors surrounds the sample at different angles to detect the scattered light. The algorithm of the system consists of two parts: (a) library files and (b) a comparator. Library files contain data on known species of bacterial microbes in the form of binned plots, while the comparator compares data from an unknown sample with the library files. From the collected data of an unknown bacterial sample, the highest voltage values are stored in the form of peaks and arranged in 3D histograms to find the frequency of occurrence; the resulting data are compared with the library files of known bacterial species. If the sample data match any library file of a known bacterial species, the sample is identified as that microbe. An experiment was performed to identify three different bacterial species: Enterococcus faecalis, Pseudomonas aeruginosa, and Escherichia coli. Applying the algorithm with library files of the given samples, the results were promising. This system is potentially applicable to several biomedical areas, especially those related to cell morphology.
Keywords: microbial identification, laser scattering, peak identification, binned plots classification
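The comparator step described above can be sketched minimally as a nearest-library-entry search over binned plots. The abstract does not specify the distance metric, so the normalized sum-of-squared-differences below is an illustrative assumption, as are the binned values:

```python
import numpy as np

def best_match(sample_hist, library):
    """Match a binned scattering histogram against library entries.

    Uses sum-of-squared-differences on normalized bins; the actual
    BPIS comparator is not specified in the abstract, so this metric
    is an assumption for illustration.
    """
    s = sample_hist / sample_hist.sum()
    scores = {}
    for species, hist in library.items():
        h = hist / hist.sum()
        scores[species] = float(((s - h) ** 2).sum())
    return min(scores, key=scores.get)   # smallest distance wins

library = {  # hypothetical binned peak-frequency plots
    "E. faecalis":   np.array([5.0, 20.0, 40.0, 25.0, 10.0]),
    "P. aeruginosa": np.array([30.0, 35.0, 20.0, 10.0, 5.0]),
    "E. coli":       np.array([10.0, 15.0, 25.0, 30.0, 20.0]),
}
sample = np.array([9.0, 16.0, 24.0, 31.0, 20.0])
print(best_match(sample, library))
```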
Procedia PDF Downloads 150
1118 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control
Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol
Abstract:
Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex because several biological processes take place in wastewater treatment plants at varying time and spatial scales. A dynamic process model incorporating the complex model for activated sludge kinetics was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both influent and effluent plant data to reconcile and rectify the forecasted results from the BioWin model; the amount of mixed liquor suspended solids in the oxidation ditch, aeration rates, and recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time over extended periods. The lumped dynamic model development process was coupled with Computational Fluid Dynamics (CFD) modeling of key units, such as the oxidation ditches in the plant. Several CFD models that incorporate the nitrification-denitrification kinetics as well as hydrodynamics were developed and are being tested using the ANSYS Fluent software platform. These realistic and verified models, developed using BioWin and ANSYS, were used to plan operating policies and control strategies for the biological wastewater plant in advance, which further allows regulatory compliance at minimum operational cost. These models, with a little tuning, can be used for other biological wastewater treatment plants as well.
The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the discharge limits of the plant. With the help of this model, we were also able to identify the key kinetic and stoichiometric parameters that are most important for modeling biological wastewater treatment plants. Another important finding from this model was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentration of various parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model allowed the abstraction of information such as how the formation of dead zones increases along the length of the oxidation ditches compared to regions near the aerators. These profiles were also very useful in studying mixing patterns, the effect of aerator speed, and the use of baffles, which in turn helps in optimizing the plant performance.
Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics
Procedia PDF Downloads 209
1117 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery
Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa
Abstract:
In order to diminish health risks, it is of major importance to monitor air quality. However, this process carries high costs in physical and human resources. In this context, this research is carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images of Mexico City's Metropolitan Area, mainly from Landsat 8, were used. Using historical PM10 and PM2.5 measurements of the RAMA (Automatic Environmental Monitoring Network of Mexico City) and the processing of the available satellite images, a preliminary model was generated in which it was possible to observe critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas of great interest were identified due to their presumed high concentration of PM: zones with high plant density, bodies of water, and soil without constructions or vegetation. To date, work continues on this line to improve the proposed preliminary model. In addition, a brief analysis was made of six models presented in articles developed in different parts of the world, in order to identify the optimal bands for generating a model suitable for Mexico City. It was found that infrared bands have helped modeling in other cities, but the effectiveness these bands could provide under the geographic and climatic conditions of Mexico City is still being evaluated.
Keywords: air quality, modeling pollution, particulate matter, remote sensing
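A common baseline for this kind of retrieval is an ordinary least-squares regression of station PM measurements on band reflectances. The sketch below uses hypothetical reflectance and PM10 values; the study's actual model and band selection are still preliminary, so everything numeric here is an assumption:

```python
import numpy as np

# Hypothetical training set: rows = scenes, columns = Landsat 8 surface
# reflectances (e.g. bands 2-4); y = ground PM10 from RAMA stations.
X = np.array([[0.10, 0.12, 0.15],
              [0.08, 0.11, 0.13],
              [0.15, 0.18, 0.22],
              [0.12, 0.14, 0.17]])
y = np.array([42.0, 35.0, 68.0, 51.0])

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_pm10(reflectances):
    """Predict PM10 (ug/m3) from band reflectances (illustrative only)."""
    return float(coef[0] + coef[1:] @ np.asarray(reflectances))

print(predict_pm10([0.11, 0.13, 0.16]))
```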
Procedia PDF Downloads 155
1116 Thermodynamics of Water Condensation on an Aqueous Organic-Coated Aerosol Aging via Chemical Mechanism
Authors: Yuri S. Djikaev
Abstract:
A large subset of aqueous aerosols can be initially (immediately upon formation) coated with various organic amphiphilic compounds whereof the hydrophilic moieties are attached to the aqueous aerosol core while the hydrophobic moieties are exposed to the air thus forming a hydrophobic coating thereupon. We study the thermodynamics of water condensation on such an aerosol whereof the hydrophobic organic coating is being concomitantly processed by chemical reactions with atmospheric reactive species. Such processing (chemical aging) enables the initially inert aerosol to serve as a nucleating center for water condensation. The most probable pathway of such aging involves atmospheric hydroxyl radicals that abstract hydrogen atoms from hydrophobic moieties of surface organics (first step), the resulting radicals being quickly oxidized by ubiquitous atmospheric oxygen molecules to produce surface-bound peroxyl radicals (second step). Taking these two reactions into account, we derive an expression for the free energy of formation of an aqueous droplet on an organic-coated aerosol. The model is illustrated by numerical calculations. The results suggest that the formation of aqueous cloud droplets on such aerosols is most likely to occur via Kohler activation rather than via nucleation. The model allows one to determine the threshold parameters necessary for their Kohler activation. Numerical results also corroborate previous suggestions that one can neglect some details of aerosol chemical composition in investigating aerosol effects on climate.
Keywords: aqueous aerosols, organic coating, chemical aging, cloud condensation nuclei, Kohler activation, cloud droplets
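The Kohler activation threshold invoked above follows classical Kohler theory. As a standard textbook sketch (not the paper's own free-energy expression), the saturation ratio over a solution droplet of radius r, and the critical radius and supersaturation obtained from dS/dr = 0, are:

```latex
% Classical Koehler curve (textbook form; notation is not the paper's):
S(r) = \exp\!\left(\frac{A}{r} - \frac{B}{r^{3}}\right),
\qquad
A = \frac{2\sigma M_w}{\rho_w R T},
\qquad
B = \frac{3\, i\, m_s M_w}{4\pi \rho_w M_s},
% sigma: surface tension; M_w, rho_w: molar mass and density of water;
% i: van 't Hoff factor; m_s, M_s: solute mass and molar mass.
% Setting dS/dr = 0 gives the critical (activation) values:
r_c = \sqrt{\frac{3B}{A}},
\qquad
S_c = \exp\!\left(\sqrt{\frac{4A^{3}}{27B}}\right).
```

A droplet that grows past r_c in an environment exceeding S_c activates and grows spontaneously, which is the threshold behavior the paper's model quantifies for chemically aged coatings.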
Procedia PDF Downloads 395
1115 Patterns of Change in Perception of Imagined and Physically Induced Pain over the Course of Repeated Thermal Stimulations
Authors: Boroka Gács, Tibor Szolcsányi, Árpad Csathó
Abstract:
Background: Individuals frequently show habituation to repeated noxious heat. However, given the defensive function of human pain processing, it is reasonable to assume that individuals imagine that they would become increasingly sensitive to repeated thermal pain stimuli. To the best of the authors' knowledge, however, no previous studies have addressed this assumption. Therefore, in the current study, we investigated how healthy human individuals imagine the intensity of repeated thermal pain stimulations and compared this with the intensity ratings given after physically induced thermal pain trials. Methods: Healthy participants (N = 20) gave pain intensity ratings in two conditions: imagined and real thermal pain. In the real pain condition, thermal pain stimuli of two intensities (minimal and moderate pain) were delivered in four consecutive trials. The duration of the peak temperature was 20 s, and stimulation was always delivered to the same location. In each trial, participants rated the pain intensity twice, 5 s and 15 s after the onset of the peak temperature. In the imagined pain condition, participants were subjected to a reference pain stimulus and then asked to imagine and rate the same sequence of stimulations as in the induced pain condition. Results: Ratings of imagined pain and physically induced pain followed opposite courses over repeated stimulation: ratings of imagined pain indicated sensitization, whereas ratings of physically induced pain indicated habituation. The findings were similar for minimal and moderate pain intensities. Conclusions: The findings suggest that, rather than habituating to pain, healthy individuals imagine that they would become increasingly sensitive to repeated thermal pain stimuli.
Keywords: habituation, imagined pain, pain perception, thermal stimulation
Procedia PDF Downloads 237
1114 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing
Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi
Abstract:
This paper presents an approach to using compressed sensing for signal encoding and information transfer within a guided wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on compressed sensing matching pursuit and Quadrature Amplitude Modulation (QAM). After the signal is encoded into binary characters, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steerable capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
Keywords: data compression, ultrasonic communication, guided waves, FEM analysis
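The sparse-coding stage of such a pipeline can be sketched with plain matching pursuit: greedily pick the dictionary atom most correlated with the residual, record its coefficient, and subtract. The random dictionary and 2-sparse test signal below are illustrative stand-ins for the FSAT signal dictionary, not the paper's data:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy sparse coding over unit-norm atoms: at each step, pick the
    atom most correlated with the residual, accumulate its coefficient,
    and subtract its contribution."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual        # correlation with every atom
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# Random unit-norm dictionary and a 2-sparse test signal
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 16))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 0.5 * D[:, 11]
c, r = matching_pursuit(x, D, n_iter=20)
print(np.argsort(np.abs(c))[-2:])             # indices of dominant atoms
```

The sparse coefficient vector `c` (mostly zeros) is what would then be quantized and QAM-mapped for transmission between nodes.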
Procedia PDF Downloads 124
1113 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem
Authors: Ouafa Amira, Jiangshe Zhang
Abstract:
Clustering is an unsupervised machine learning technique; its aim is to extract the data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) clustering is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function taking values in the interval [0, 1]. In FCM clustering, the membership degrees are constrained by the condition that the sum of a data object's memberships over all clusters must equal one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization has become part of the fuzzy c-means clustering technique: it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
Keywords: clustering, fuzzy c-means, regularization, relative entropy
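The unregularized FCM baseline that the paper builds on alternates two closed-form updates: centers as membership-weighted means, and memberships from inverse distances, normalized so each point's memberships sum to one. A minimal sketch on synthetic blobs (the proposed relative entropy term is not included here):

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: alternately minimize
    sum_ij u_ij^m ||x_j - c_i||^2 subject to memberships summing
    to one for each point."""
    rng = np.random.default_rng(seed)
    U = rng.random((n_clusters, len(X)))
    U /= U.sum(axis=0)                          # memberships sum to 1
    for _ in range(n_iter):
        W = U ** m
        C = (W @ X) / W.sum(axis=1, keepdims=True)   # weighted-mean centers
        d = np.linalg.norm(X[None, :, :] - C[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)                # guard against zero distance
        U = d ** (-2.0 / (m - 1.0))             # closed-form membership update
        U /= U.sum(axis=0)
    return U, C

# Two well-separated synthetic blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
U, C = fcm(X, n_clusters=2)
print(np.sort(C[:, 0]))                         # approximate center x-coords
```

The relative entropy regularization of the paper replaces the hard normalization step with a penalized update; this sketch shows only the baseline it modifies.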
Procedia PDF Downloads 2591112 Strain Sensing Seams for Monitoring Body Movement
Authors: Sheilla Atieno Odhiambo, Simona Vasile, Alexandra De Raeve, Ann Schwarz
Abstract:
Strain sensing seams have been developed by integrating conductive sewing threads, using sewing technology, into different seam designs on a fabric typical of sports clothing. The aim is a simple integrated textile strain sensor that can be applied to sports clothing to monitor movements of the user's upper body during sports. Different commercially available conductive sewing threads were used as the bobbin thread in the production of seam sensors with different architectures. Some of the threads are delicate and had to be laid into the seam with as little friction and tension as possible; they could therefore only be sewn in as the bobbin thread, not the needle thread. Stitch types 304, 406, 506, 601, 602, and 605 were produced. The seams were made on a fabric of 80% polyamide 6.6 and 20% elastane. The seams were cycled (stretch-release-stretch) for five cycles and up to 44 cycles following EN ISO 14704-1:2005 (modified), using a tensile instrument, while the changes in seam resistance over time were recorded with an Agilent U1273A multimeter; both measurements were made simultaneously on the same seam sample. Sensing functionality, including gauge factor and reliability, was evaluated on the most promising sensor seams. The results show that the sensor seams made from HC Madeira 40 conductive yarn performed better in seam stitches 304 and 602 than the other combinations of stitch type and conductive sewing thread. The sensing seams 304, 406, and 602 will next be interconnected to our processing and communication unit and integrated into a sports clothing prototype that can track body posture. This research is done within the framework of the project SmartSeam. Keywords: conductive sewing thread, sensing seams, smart seam, sewing technology
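The "gauge factor" evaluated above is conventionally the relative resistance change per unit strain, GF = (ΔR/R₀)/ε. A minimal sketch of that calculation follows; the resistance and elongation values are purely illustrative, not measured data from this study.

```python
def gauge_factor(r0_ohm: float, r_stretched_ohm: float, strain: float) -> float:
    """Gauge factor: relative resistance change divided by applied strain.

    strain is dimensionless elongation, e.g. 0.10 for 10 % stretch.
    """
    return (r_stretched_ohm - r0_ohm) / r0_ohm / strain

# illustrative example: a seam of 120 Ω at rest rising to 150 Ω
# at 10 % elongation
gf = gauge_factor(120.0, 150.0, 0.10)  # -> 2.5
```

Tracking how this value drifts over the stretch-release cycles (five and 44 cycles in the protocol above) is one way to quantify the reliability of a seam sensor.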
Procedia PDF Downloads 190