Search results for: fracture classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2778


1188 Study on the Fabrication and Mechanical Characterization of Pineapple Fiber-Reinforced Unsaturated Polyester Resin Based Composites: Effect of Gamma Irradiation

Authors: Kamrun N. Keya, Nasrin A. Kona, Ruhul A. Khan

Abstract:

Pineapple leaf fiber (PALF) reinforced polypropylene (PP) based composites were fabricated by a conventional compression molding technique. In this investigation, PALF composites were manufactured using different fiber contents, varying from 25% to 50% of the total weight of the composites. To fabricate the PALF/PP composites, untreated and treated fibers were selected. A systematic study was carried out to observe the physical, mechanical and interfacial behavior of the composites. The mechanical properties of the composites, such as tensile, impact and bending properties, were measured precisely. It was found that composites with 45 wt% fiber showed better mechanical properties than the others. The maximum tensile strength (TS) and bending strength (BS) were 65 MPa and 50 MPa respectively, whereas the highest tensile modulus (TM) and bending modulus (BM) were 1.7 GPa and 0.85 GPa respectively. The PALF/PP based composites were irradiated under gamma radiation (source strength 50 kCi Cobalt-60) at various doses (2.5 kGy to 10 kGy; kGy = kilogray, the unit of absorbed radiation dose). The effect of gamma radiation on the composites was also investigated, and it was found that a 5.0 kGy dose yielded better mechanical properties than the other doses. The values of TS, BS, TM and BM of the irradiated (5.0 kGy) composites improved by 19%, 23%, 17% and 25% over the non-irradiated composites. After flexural testing, the fracture surfaces of both untreated and treated composites were studied by scanning electron microscopy (SEM). SEM results showed better fiber-matrix adhesion and interfacial bonding in the treated PALF/PP based composites than in the untreated ones. Water uptake and soil degradation tests of untreated and treated composites were also carried out.

Keywords: PALF, polypropylene, compression molding technique, gamma radiation, mechanical properties, scanning electron microscope

Procedia PDF Downloads 151
1187 Negative Pressure Wound Therapy in Complex Injuries of the Limbs

Authors: Mihail Nagea, Olivera Lupescu, Nicolae Ciurea, Alexandru Dimitriu, Alina Grosu

Abstract:

Introduction: As severe open injuries become more frequent in modern traumatology, threatening not only the integrity of the affected limb but even the life of the patient, new methods designed to cope with the consequences of these traumas have been described. Vacuum therapy is one such method, described as enhancing healing in trauma with extensive soft-tissue injuries, including those with septic complications. Material and methods: The authors prospectively analyzed 15 patients with severe lower limb trauma with a MESS below 6 and considerable soft tissue loss following initial debridement and fracture fixation. The patients needed serial debridements, and vacuum therapy was applied after delayed healing due to the initial severity of the trauma, for an average period of 12 days (7 to 23 days). In 7 cases vacuum therapy was applied for septic complications. Results: Within the study group there were no local complications; secondary debridements were performed for all the patients and the vacuum system was re-installed after these debridements. No amputations were needed. Medical records were reviewed in order to compare the outcomes of the patients: hospital stay, anti-microbial therapy, and time to healing of the bone and soft tissues (there was no standard group for comparison), and the results showed considerable improvements in patient outcomes. Conclusion: Vacuum therapy improves healing of the soft tissues, including infected ones; hospital stay and the number of necessary secondary procedures are reduced. It is therefore considered a valuable support in treating limb trauma with severe soft tissue injuries.

Keywords: complex injuries, negative pressure, open fractures, wound therapy

Procedia PDF Downloads 295
1186 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term for a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem that has been addressed by very few previous studies. These studies analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). This data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), so that context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). Regarding the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
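The idea that the same utterance can receive different toxicity judgements under different contexts can be sketched with a deliberately toy stand-in. The bag-of-words encoder, the toxicity lexicon, and the softening-context heuristic below are all invented for illustration and bear no relation to the trained neural models described in the abstract; they only mimic the hierarchical shape of the decision (encode each utterance, then let the context modulate the target-level label).

```python
def encode_utterance(text):
    # Stub encoder: a bag-of-words set; a real model would use a neural encoder.
    return set(text.lower().split())

TOXIC_MARKERS = {"idiot", "stupid"}          # hypothetical overt-toxicity lexicon
SOFTENING_CONTEXT = {"joking", "kidding"}    # hypothetical context cue

def classify_in_context(target, context_thread):
    # Hierarchical view: encode each prior tweet separately, then let the
    # combined context representation modulate the target-level decision.
    target_feats = encode_utterance(target)
    context_feats = (set().union(*(encode_utterance(t) for t in context_thread))
                     if context_thread else set())
    overt = bool(target_feats & TOXIC_MARKERS)
    softened = bool(context_feats & SOFTENING_CONTEXT)
    return "toxic" if overt and not softened else "non-toxic"

print(classify_in_context("you are an idiot", []))                       # no context
print(classify_in_context("you are an idiot", ["just kidding around"]))  # context softens
```

The same target string flips label depending on the thread, which is exactly the kind of covert, context-dependent case the proposed data augmentation aims to generate.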

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 171
1185 Fabrication and Characterization of Ceramic Matrix Composite

Authors: Yahya Asanoglu, Celaletdin Ergun

Abstract:

Ceramic-matrix composites (CMCs) have significant prominence in various engineering applications because of their heat resistance combined with an ability to withstand brittle, catastrophic failure. In this study, specific raw materials were chosen with the aim of obtaining a CMC material suitable for high-temperature dielectric applications. The CMC material will be manufactured through the polymer infiltration and pyrolysis (PIP) method. During manufacturing, vacuum infiltration and autoclaving will be applied in order to decrease porosity and obtain higher mechanical properties, although this advantage leads to a decrease in the electrical performance of the material. Adjusting the time and temperature parameters of pyrolysis produces significant differences in the properties of the resulting material. The mechanical and thermal properties will be investigated, in addition to measurement of the dielectric constant and tangent loss values within the Ku-band spectrum (12 to 18 GHz). Also, XRD and TGA/PTA analyses will be employed to confirm the transition of the precursor to ceramic phases and to detect critical transition temperatures. Additionally, SEM analysis of the fracture surfaces will be performed to identify failure mechanisms, such as fiber pull-out and crack deflection, which lead to ductility and toughness in the material. In this research, the cost-effectiveness and applicability of the PIP method in the manufacture of CMC materials will be demonstrated, while the pyrolysis time, temperature and number of cycles are optimized experimentally for specific materials. Several resins will also be shown to be potential raw materials for CMC radome and antenna applications. This research is distinguished from previous related papers in that combinations of different precursors and fabrics are tested to identify the unique pros and cons of each combination. In this way, it is both an experimental summary of previous work with unique PIP parameters and a guide to the manufacture of CMC radomes and antennas.

Keywords: CMC, PIP, precursor, quartz

Procedia PDF Downloads 160
1184 Analytical Study of Data Mining Techniques for Software Quality Assurance

Authors: Mariam Bibi, Rubab Mehboob, Mehreen Sirshar

Abstract:

Satisfying customer requirements is the ultimate goal of producing or developing any product, and the quality of the product is judged on the basis of the level of customer satisfaction. Different techniques reported in this survey enhance product quality through software defect prediction and by locating missing software requirements. Some mining techniques have been proposed to assess individual performance indicators in collaborative environments in order to reduce errors at the individual level. The basic intention is to produce a product with zero or few defects, thereby achieving the best possible quality. In this survey, techniques such as genetic algorithms, artificial neural networks, classification and clustering techniques, and decision trees are studied. The analysis shows that these techniques have contributed much to the improvement and enhancement of product quality.

Keywords: data mining, defect prediction, missing requirements, software quality

Procedia PDF Downloads 469
1183 Identifying Promoters and Their Types Based on a Two-Layer Approach

Authors: Bin Liu

Abstract:

The prokaryotic promoter, consisting of two short DNA sequences located at the -35 and -10 positions, is responsible for controlling the initiation of gene expression. Different types of promoters have different functions, yet their consensus sequences are similar; moreover, consensus sequences may differ even within the same promoter type, which poses difficulties for promoter identification. Unfortunately, all existing computational methods treat promoter identification as a binary classification task and can only identify whether a query sequence belongs to a specific promoter type. It is therefore desirable to develop computational methods that effectively identify both promoters and their types. Here, a two-layer predictor is proposed to deal with this problem. The first layer predicts whether a given sequence is a promoter, and the second layer predicts the type of any sequence judged to be a promoter. We also analyze the importance of features and sequence conservation in two respects: promoter identification and promoter type identification. To the best of our knowledge, this is the first computational predictor to detect both promoters and their types.
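As a structural illustration only, the two-layer scheme can be sketched with stub scorers standing in for the trained models (the keywords mention random forests); the motifs and type labels below are simplified assumptions, not the predictor's actual features.

```python
def is_promoter(seq):
    # Layer 1 stub: a trained binary classifier would go here.
    # As a toy stand-in, flag sequences containing an approximate -10 box.
    return "TATAAT" in seq

def promoter_type(seq):
    # Layer 2 stub: only reached for sequences judged to be promoters.
    # A toy consensus lookup stands in for the trained multi-class model.
    consensus = {"sigma70": "TTGACA", "sigma54": "TGGCAC"}  # hypothetical motifs
    for ptype, motif in consensus.items():
        if motif in seq:
            return ptype
    return "unknown"

def two_layer_predict(seq):
    # The two layers compose: reject non-promoters first, then assign a type.
    if not is_promoter(seq):
        return "non-promoter"
    return promoter_type(seq)

print(two_layer_predict("AATTGACAGCTATAATGG"))  # → sigma70
print(two_layer_predict("GGGGCCCC"))            # → non-promoter
```

The point of the structure is that the second layer never sees sequences the first layer has already rejected, which is what distinguishes this from a flat multi-class classifier.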

Keywords: promoter, promoter type, random forest, sequence information

Procedia PDF Downloads 185
1182 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method

Authors: Lee Yan Nian

Abstract:

A safety investigation differs from an administrative investigation in that the former is conducted by an independent agency, and its purpose is to prevent future accidents, not to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority; characteristically, enforcement actions such as administrative penalties were usually imposed on the persons or units involved in an occurrence. On October 21, 2018, following a Taiwan Railway accident that caused 18 fatalities and injured another 267, the decision was quickly made to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway and highway occurrences. The objective of this study is to assess the effectiveness of the safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the TTSB's classification, Taiwan railway occurrences can be categorized into derailment, fire, signal passed at danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminology in the CFAS is equivalent to the Human Factors Analysis and Classification System (HFACS) terminology, except for “Technical Events”, which was added to classify causal factors resulting from mechanical failure. Accordingly, a Bayesian network structure for each occurrence category is established from the causal factors identified in the CFAS. In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from their frequency of occurrence in the investigation reports, while the conditional probability table of each parent node is determined from domain experts’ experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward prediction and backward diagnostic capabilities. Finally, the established Bayesian network for derailment is assessed using investigation reports of the same accident investigated by the TTSB and by the local supervisory authority respectively. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors. A safety investigation can identify not only the unsafe acts of individuals but also the in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigations and administrative investigations, so that effective intervention strategies in the associated areas can be better addressed for safety improvement and future accident prevention.
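The forward-prediction and backward-diagnosis queries on such a network can be sketched on a minimal three-node chain (organizational influence → unsafe act → derailment). The probabilities below are invented placeholders, not the report-derived priors or expert-elicited CPTs used in the study.

```python
# Prior estimated from factor frequencies in reports (hypothetical number).
p_org = 0.3                                   # P(organizational influence present)
# CPTs elicited from domain experts (hypothetical values).
p_act_given_org = {True: 0.7, False: 0.2}     # P(unsafe act | org influence)
p_derail_given_act = {True: 0.6, False: 0.05} # P(derailment | unsafe act)

def p_derailment():
    # Forward prediction: marginalize over both parent variables.
    total = 0.0
    for org in (True, False):
        p_o = p_org if org else 1 - p_org
        for act in (True, False):
            p_a = p_act_given_org[org] if act else 1 - p_act_given_org[org]
            total += p_o * p_a * p_derail_given_act[act]
    return total

def p_org_given_derailment():
    # Backward diagnosis via Bayes' rule: P(org | derailment).
    joint = 0.0
    for act in (True, False):
        p_a = p_act_given_org[True] if act else 1 - p_act_given_org[True]
        joint += p_org * p_a * p_derail_given_act[act]
    return joint / p_derailment()

print(round(p_derailment(), 4))           # marginal probability of derailment
print(round(p_org_given_derailment(), 4)  # posterior on the organizational factor
      )
```

Observing a derailment raises the posterior on the organizational factor above its 0.3 prior, which is the kind of backward-diagnostic behaviour the study evaluates.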

Keywords: administrative investigation, bayesian network, causal factor analysis system, safety investigation

Procedia PDF Downloads 126
1181 Exploration of Industrial Symbiosis Opportunities with an Energy Perspective

Authors: Selman Cagman

Abstract:

A detailed analysis was made within an organized industrial zone (OIZ) that has 1165 production facilities, covering the manufacture of furniture, fabricated metal products (machinery and equipment), food products, plastic and rubber products, machinery and equipment, non-metallic mineral products, electrical equipment, textile products, and wood and cork products. In this OIZ, a field study was carried out on facilities chosen to represent the sectoral distribution of the whole OIZ: 207 facilities were included in the site visits, and a 17-question survey was conducted with each of them to assess the inputs, outputs and waste amounts of their manufacturing processes. The survey results identify the wastes coming from these facilities as MDF/particleboard and chipboard particles, textile, food, foam rubber, sludge (treatment sludge, phosphate-paint sludge, etc.), plastic, paper and packaging, scrap metal (aluminum shavings, steel shavings, iron scrap, profile scrap, etc.), slag (coal slag), ceramic fracture, and ash from the fluidized bed. As a result, five industrial symbiosis projects were established through this study. One is a 2,840 kW Integrated Biomass Based Waste Incineration-Energy Production Facility running on 35,000 tons/year of MDF particle and chipboard waste. Another is a biogas plant fed with 225 tons/year of whey, 100 tons/year of sesame husk, 40 tons/year of burnt wafer dough, and 2,000 tons/year of biscuit waste. The investment and operational costs of these two plants are given in detail; the payback time is almost 4 years for the 2,840 kW plant and around 6 years for the biogas plant.
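Payback times of this kind follow from a simple (undiscounted) payback calculation. The cost and revenue figures below are hypothetical, chosen only so the toy example lands near the roughly 4-year result reported for the incineration plant; the abstract does not disclose the actual figures.

```python
def simple_payback_years(capex, annual_revenue, annual_opex):
    """Simple (undiscounted) payback period: capital cost over annual net cash flow."""
    net = annual_revenue - annual_opex
    if net <= 0:
        raise ValueError("project never pays back")
    return capex / net

# Hypothetical figures, invented to illustrate a ~4-year payback.
years = simple_payback_years(capex=8_000_000,
                             annual_revenue=2_500_000,
                             annual_opex=500_000)
print(round(years, 1))  # → 4.0
```

A fuller appraisal would discount the cash flows, but the simple form is enough to see how the two plants' payback periods compare.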

Keywords: industrial symbiosis, energy, biogas, waste to incineration

Procedia PDF Downloads 107
1180 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology

Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik

Abstract:

Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing in modern packaging. While these plastics serve vital functions, their persistence in the environment after disposal poses significant threats to ecosystems. Addressing this issue necessitates several approaches, one of which is the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited to uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized only when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Effective sorting technologies are therefore essential to increase composting rates for these materials and diminish the risk of contaminating recycling streams. This study leverages hyperspectral imaging (HSI) technology coupled with machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants such as polyethylene terephthalate (PET), polypropylene (PP), low-density polyethylene (LDPE) and high-density polyethylene (HDPE), and biodegradable alternatives such as polybutylene adipate terephthalate (PBAT), polylactic acid (PLA), and polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and Decision Tree, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving 100% accuracy on the training and validation datasets, while accuracy on the testing dataset exceeded 80%. Implementing this sorting technology within recycling and composting facilities could significantly raise recycling and composting rates, helping to establish a circular economy for plastics and offering a viable means of mitigating plastic pollution.
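As a much-simplified illustration of how a spectral classifier assigns a pixel to a plastic type (not the PLS-DA/SVM/CNN models evaluated in the study), a nearest-centroid rule can compare a pixel's reflectance spectrum against reference signatures. The band values below are invented and do not correspond to real NIR spectra.

```python
# Toy reference spectra: mean reflectance per wavelength band, invented values
# standing in for measured hyperspectral signatures of two polymer classes.
reference = {
    "PET": [0.82, 0.40, 0.55, 0.30],
    "PLA": [0.35, 0.75, 0.25, 0.60],
}

def classify_pixel(spectrum):
    # Nearest-centroid: assign the class whose reference spectrum is closest
    # in Euclidean distance (a crude stand-in for the trained models).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(reference, key=lambda cls: dist(spectrum, reference[cls]))

print(classify_pixel([0.80, 0.42, 0.50, 0.33]))  # close to the PET signature
print(classify_pixel([0.33, 0.70, 0.30, 0.62]))  # close to the PLA signature
```

Contamination shifts measured spectra away from the clean references, which is why the study validates on contaminated samples rather than only on pristine ones.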

Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms

Procedia PDF Downloads 82
1179 The Evaluation for Interfacial Adhesion between SOFC and Metal Adhesive in the High Temperature Environment

Authors: Sang Koo Jeon, Seung Hoon Nahm, Oh Heon Kwon

Abstract:

The unit cells of a solid oxide fuel cell (SOFC) must be stacked in several layers to obtain high power. Most researchers have been concerned with the performance of the stacked SOFC rather than its structural stability, and have been especially interested in design for reducing electrical loss and improving efficiency. Consequently, stacked SOFCs able to produce high electrical power were developed, and related parts such as the manifold, gas seal and bipolar plate were designed to optimize the stack. However, the unit cell of the SOFC was simply layered on the interconnector without adhesion, and hydrogen and oxygen were injected at the interfacial layer at high temperature. Under operating conditions, the interfacial layer can thus be one of the weak points of the stacked SOFC, so an evaluation of its structural safety against failure is essential. In this study, the interfacial adhesion between the SOFC and a metal adhesive was estimated in a high-temperature environment. The metal adhesive is used to connect the unit cell of the SOFC firmly to the interconnector and to provide electrical conductivity between them. A four-point bending test was performed to measure the interfacial adhesion. The unit cell of the SOFC and a SiO2 wafer were diced and then attached with the metal adhesive; the SiO2 wafer had a center notch to initiate a crack from the notch tip. A modified stereomicroscope combined with a CCD camera and a length-measurement system was used to observe the fracture behavior. The interfacial adhesion was additionally evaluated under high-temperature conditions, because the metal adhesive is affected by high temperature: the specimen was exposed in a furnace for several hours and the interfacial adhesion was then evaluated. Finally, the interfacial adhesion energy was quantitatively determined and compared across conditions.

Keywords: solid oxide fuel cell (SOFC), metal adhesive, adhesion, high temperature

Procedia PDF Downloads 521
1178 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they can provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract the relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracy varies between encoding methods by up to approximately 15%, with the Fourier transform obtaining the best results across the evaluated machine learning algorithms. These findings provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
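The two baseline encodings named above, one-hot encoding and k-mer counting, can be sketched in a few lines (illustrative only; the study's normalization step and transform-based encodings are not reproduced here).

```python
def one_hot(seq):
    # Map each base to a 4-dimensional indicator vector (A, C, G, T order assumed).
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [table[base] for base in seq]

def kmer_counts(seq, k=2):
    # Count overlapping k-mers; the resulting vector is what a classifier consumes.
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

print(one_hot("ACGT"))
print(kmer_counts("ACGTAC", k=2))  # {'AC': 2, 'CG': 1, 'GT': 1, 'TA': 1}
```

One-hot encoding preserves position at the cost of a length-dependent representation, while k-mer counts give a fixed-size vector that discards order; that trade-off is part of why the encoding choice moves accuracy as much as the abstract reports.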

Keywords: DNA encoding, machine learning, Fourier transform, wavelet transform

Procedia PDF Downloads 28
1177 Egg Yolk Peptide Stimulated Osteogenic Gene Expression

Authors: Hye Kyung Kim, Myung-Gyou Kim, Kang-Hyun Leem

Abstract:

Postmenopausal osteoporosis is characterized by low bone density, which leads to increased bone fragility and greater susceptibility to fracture. Current treatments for osteoporosis are dominated by drugs that inhibit bone resorption, although these also suppress bone formation, which may contribute to the pathogenesis of osteonecrosis. To restore extensive bone loss, there is a great need for anabolic treatments that induce osteoblasts to build new bone. Pre-osteoblastic cells first produce proteins of the extracellular matrix, including type I collagen, then successively produce alkaline phosphatase (ALP) and osteocalcin during differentiation into osteoblasts; finally, osteoblasts deposit calcium. The present study investigated the effects of egg yolk peptide (EYP) on osteogenic activities and bone matrix gene expression in human osteoblastic MG-63 cells. The effects of EYP on cell proliferation, ALP activity, collagen synthesis, and mineralization were measured. The expression of osteogenic genes including COL1A1 (collagen, type I, alpha 1), ALP, BGLAP (osteocalcin), and SPP1 (secreted phosphoprotein 1, osteopontin) was measured by quantitative real-time PCR. EYP dose-dependently increased MG-63 cell proliferation, ALP activity, collagen synthesis, and calcium deposition. Furthermore, COL1A1, ALP, and SPP1 gene expression was increased by EYP treatment. These results suggest that EYP treatment enhances osteogenic activities and increases the expression of bone matrix osteogenic genes, providing a mechanistic explanation for the bone-strengthening effects of EYP.

Keywords: egg yolk peptide, osteoblastic MG-63 cells, alkaline phosphatase, collagen synthesis, osteogenic genes, COL1A1, osteocalcin, osteopontin

Procedia PDF Downloads 388
1176 Facial Emotion Recognition with Convolutional Neural Network Based Architecture

Authors: Koray U. Erbas

Abstract:

Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increases, it becomes possible to represent more complex relationships with automatically extracted features. Nowadays, Deep Neural Networks (DNNs) are widely used in computer vision problems such as classification, object detection, segmentation, and image editing. In this work, the facial emotion recognition task is performed by a proposed Convolutional Neural Network (CNN)-based DNN architecture using the FER2013 dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated, and ablation study results for the pooling layer, dropout and batch normalization are presented.
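The convolution, non-linearity and pooling building blocks that such a CNN stacks can be illustrated in miniature. The 5x5 "image" and 2x2 kernel below are arbitrary toy values, not FER2013 data or the proposed architecture; real layers would also learn the kernel weights rather than fix them.

```python
def conv2d_valid(img, kernel):
    # 2-D valid convolution (really cross-correlation, as in most DL libraries).
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def relu(fmap):
    # Element-wise non-linearity.
    return [[max(0, v) for v in row] for row in fmap]

def max_pool2(fmap):
    # 2x2 max pooling with stride 2, halving each spatial dimension.
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[1, 2, 0, 1, 3],
       [0, 1, 2, 3, 1],
       [1, 0, 1, 2, 0],
       [2, 1, 0, 1, 2],
       [0, 3, 1, 0, 1]]
edge = [[1, 0], [0, -1]]  # a tiny hypothetical difference kernel
print(max_pool2(relu(conv2d_valid(img, edge))))  # → [[0, 3], [0, 0]]
```

Kernel size and pooling choices, two of the hyperparameters the abstract ablates, correspond directly to the `kernel` shape and the pooling window here.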

Keywords: convolutional neural network, deep learning, deep learning based FER, facial emotion recognition

Procedia PDF Downloads 275
1175 Threat Analysis: A Technical Review on Risk Assessment and Management of National Testing Service (NTS)

Authors: Beenish Urooj, Ubaid Ullah, Sidra Riasat

Abstract:

The National Testing Service-Pakistan (NTS) is an agency that conducts student assessment examinations. In this paper, we present a security model for the NTS organization. The model describes security countermeasures for a better defense against certain types of breaches and malware, and provides a security roadmap to help the organization pursue its goals while maintaining security standards and policies. We cover multiple aspects of securing the organization's environment, introducing its processes, architecture, data classification, auditing approaches, survey responses, data handling, and risk training and awareness. The primary contribution is the risk survey, based on a maturity model, which assesses employee training and knowledge of the risks in the company's activities.

Keywords: NTS, risk assessment, threat factors, security, services

Procedia PDF Downloads 71
1174 Development and Characterization of Sandwich Bio-Composites Based on Short Alfa Fiber and Jute Fabric

Authors: Amine Rezzoug, Selsabil Rokia Laraba, Mourad Ancer, Said Abdi

Abstract:

Composite materials are taking center stage in many fields thanks to their mechanical characteristics and ease of preparation, and environmental constraints have led to the development of composites with natural reinforcements. The sandwich structure was chosen in this work because it offers good flexural properties at low density. The development of these materials is tied to a strategy of energy saving and environmental protection. The present work studies the development and characterization of sandwich composites based on hybrid laminates with natural reinforcements (Alfa and jute); a metallic fabric was introduced into the composite in order to strike a compromise between weight and properties. Different reinforcement configurations (jute, metallic fabric) were used to develop laminates serving as thin facings for the sandwich materials, while the core was an epoxy matrix reinforced with short Alfa fibers; a chemical treatment with sodium hydroxide was carried out to improve the adhesion of the Alfa fibers. The mechanical characterization of the materials was performed by tensile and bending tests to highlight the influence of the jute and Alfa. After testing, the fracture surfaces were observed by scanning electron microscopy (SEM), and optical microscopy allowed us to calculate the degree of porosity and observe the morphology of the individual layers. Laminates based on jute fabric showed better results in both tensile and bending tests than those with metallic fabric (100%, 65%). The sandwich panels were also characterized by bending tests. The results show that this composite has sufficient properties for possibly replacing conventional composite materials when environmental factors are considered.

Keywords: bending test, bio-composites, sandwiches, tensile test

Procedia PDF Downloads 435
1173 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long, acquired with a minimum of 24 electrodes, in which neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification.
One of the differences among these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced into the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented using each of them. The efficiency of the raw signal varied between 43% and 84%. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of Wavelet Transform features varied between 57% and 81%, while the morphological descriptors presented efficiency values between 62% and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
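As a concrete illustration of how these input-stimulus types differ, the minimal sketch below derives three of the five from a single epoch: the raw signal, the FFT magnitude spectrum, and a small set of morphological descriptors. The specific descriptor choices (peak-to-peak amplitude, maximum slope, RMS energy) and the sampling rate are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def eeg_input_stimuli(epoch, fs=256):
    """Derive three of the five input-stimulus types from a 1-D EEG epoch.
    A minimal sketch; the descriptor set is an assumed, simplified one."""
    raw = epoch                                   # 1) raw EEG signal
    spectrum = np.abs(np.fft.rfft(epoch))         # 2) FFT magnitude spectrum
    descriptors = np.array([                      # 3) morphological descriptors
        epoch.max() - epoch.min(),                #    peak-to-peak amplitude
        np.abs(np.diff(epoch)).max() * fs,        #    maximum absolute slope
        np.sqrt(np.mean(epoch ** 2)),             #    RMS energy
    ])
    return raw, spectrum, descriptors

# Synthetic 1-second test epoch: a 10 Hz sinusoid sampled at 256 Hz.
fs = 256
t = np.arange(fs) / fs
raw, spectrum, descriptors = eeg_input_stimuli(np.sin(2 * np.pi * 10 * t), fs)
```

For a 1-second epoch the spectrum's bins are 1 Hz apart, so the dominant bin of the synthetic signal falls at index 10; STFT spectrograms and Wavelet features would add a time axis on top of this purely spectral view.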

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 530
1172 Machine Learning Approach for Lateralization of Temporal Lobe Epilepsy

Authors: Samira-Sadat JamaliDinan, Haidar Almohri, Mohammad-Reza Nazem-Zadeh

Abstract:

Lateralization of temporal lobe epilepsy (TLE) is very important for positive surgical outcomes. We propose a machine learning framework to identify the epileptogenic hemisphere in TLE cases using magnetoencephalography (MEG) coherence source imaging (CSI) and diffusion tensor imaging (DTI). Unlike most studies, which use classification algorithms, we propose an effective clustering approach to distinguish between normal and TLE cases. We apply the well-known Minkowski weighted K-Means (MWK-Means) technique as the clustering framework. To overcome the poor-initialization problem of K-Means, we use particle swarm optimization (PSO) to select the initial cluster centroids before applying MWK-Means. We demonstrate that this approach improves results on a benchmark data set compared to K-Means and MWK-Means used independently.
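The two-stage idea (search for good initial centroids, then run weighted K-Means) can be sketched as below. This is a simplified stand-in: the PSO step is replaced by a crude best-of-candidates search (no particle velocities), and the per-feature weight learning of full MWK-Means is omitted, leaving only the Minkowski-p distance.

```python
import numpy as np

rng = np.random.default_rng(0)

def minkowski_kmeans(X, centroids, p=2, n_iter=20):
    """Lloyd iterations under a Minkowski-p distance. The per-feature
    weight updates of full MWK-Means are omitted in this sketch."""
    for _ in range(n_iter):
        d = (np.abs(X[:, None, :] - centroids[None, :, :]) ** p).sum(axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([X[labels == k].mean(axis=0) if (labels == k).any()
                              else centroids[k] for k in range(len(centroids))])
    return labels, centroids

def best_candidate_init(X, k, n_candidates=10):
    """Stand-in for the PSO initialization: evaluate several random centroid
    sets and keep the one with the lowest within-cluster cost."""
    best, best_cost = None, np.inf
    for _ in range(n_candidates):
        cand = X[rng.choice(len(X), size=k, replace=False)].copy()
        labels, fitted = minkowski_kmeans(X, cand, n_iter=5)
        cost = sum(((X[labels == j] - fitted[j]) ** 2).sum() for j in range(k))
        if cost < best_cost:
            best, best_cost = fitted, cost
    return best

# Two well-separated synthetic groups standing in for normal vs. TLE cases.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, _ = minkowski_kmeans(X, best_candidate_init(X, 2))
```

The benefit of the initialization search is that a single unlucky random start (both centroids in one group) no longer determines the final clustering.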

Keywords: temporal lobe epilepsy, machine learning, clustering, magnetoencephalography

Procedia PDF Downloads 157
1171 Enhancing Fall Detection Accuracy with a Transfer Learning-Aided Transformer Model Using Computer Vision

Authors: Sheldon McCall, Miao Yu, Liyun Gong, Shigang Yue, Stefanos Kollias

Abstract:

Falls are a significant health concern for older adults globally, and prompt identification is critical to providing necessary healthcare support. Our study proposes a new fall detection method using computer vision based on modern deep learning techniques. Our approach involves training a transformer model on a large 2D pose dataset for general action recognition, followed by transfer learning. Specifically, we freeze the first few layers of the trained transformer model and train only the last two layers for fall detection. Our experimental results demonstrate that the proposed method outperforms both classical machine learning and deep learning approaches in fall/non-fall classification. Overall, our study suggests that the proposed methodology could be a valuable tool for identifying falls.
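The freeze-the-backbone idea can be illustrated framework-independently. In the sketch below, a fixed random feature map plays the role of the frozen pretrained layers, and only a small logistic-regression head is trained, mirroring "train only the last layers". All sizes, data, and labels are synthetic stand-ins, not the study's model or dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "backbone": a fixed random feature map standing in for the
# pretrained action-recognition layers (never updated during training).
W_frozen = rng.normal(size=(34, 16))

def backbone(x):
    return np.tanh(x @ W_frozen)               # frozen forward pass only

X = rng.normal(size=(200, 34))                 # synthetic 2D-pose descriptors
w_true = rng.normal(size=16)
y = (backbone(X) @ w_true > 0).astype(float)   # synthetic, learnable labels

# Trainable "head": a logistic-regression layer for fall / non-fall.
w_head = np.zeros(16)
F = backbone(X)                                # features computed once: backbone is frozen
for _ in range(300):                           # gradient descent updates ONLY the head
    p = 1.0 / (1.0 + np.exp(-F @ w_head))
    w_head -= 0.5 * F.T @ (p - y) / len(y)

accuracy = ((F @ w_head > 0) == (y > 0.5)).mean()
```

Because the backbone never changes, its features can be precomputed once, which is also why fine-tuning only the last layers is cheap in practice.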

Keywords: healthcare, fall detection, transformer, transfer learning

Procedia PDF Downloads 150
1170 Multimodal Characterization of Emotion within Multimedia Space

Authors: Dayo Samuel Banjo, Connice Trimmingham, Niloofar Yousefi, Nitin Agarwal

Abstract:

Technological advancement and its omnipresent connectivity have pushed humans past the boundaries and limitations of a computer screen, physical state, or geographical location. It has provided a wealth of avenues that facilitate human-computer interaction that were once inconceivable, such as audio and body-language detection. Given the complex modalities of emotion, it becomes vital to study human-computer interaction, as it is the starting point for a thorough understanding of the emotional state of users and, in the context of social networks, of the producers of multimodal information. This study first confirms the higher classification accuracy found in multimodal emotion detection systems compared to unimodal solutions. Second, it explores the characterization of multimedia content based on the emotions it expresses and the coherence of emotion across different modalities, utilizing deep learning models to classify emotion in each modality.

Keywords: affective computing, deep learning, emotion recognition, multimodal

Procedia PDF Downloads 160
1169 Intelligent Grading System of Apple Using Neural Network Arbitration

Authors: Ebenezer Obaloluwa Olaniyi

Abstract:

In this paper, an intelligent system has been designed to grade apples as either defective or healthy for use in food processing. The work is divided into two phases. In the first phase, image processing techniques were employed to extract the necessary features of the apple. These techniques include grayscale conversion and segmentation, in which a threshold value is chosen to separate the foreground of the images from the background. Edge detection was also employed to bring out the features in the images. In the second phase, the extracted features were fed into a neural network for classification, separating defective apples from healthy ones. In this phase, the network was trained with backpropagation and tested in feed-forward operation. The recognition rate obtained shows that our system is more accurate and faster than previous work.
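The first-phase pipeline can be sketched in a few lines: grayscale conversion, threshold segmentation, and a gradient-based edge map. The specific operators below (luminance weights, a fixed threshold, gradient magnitude) are generic stand-ins for the paper's exact techniques, and the image is a toy stand-in for an apple photograph.

```python
import numpy as np

def preprocess(rgb, threshold=0.5, edge_min=0.1):
    """Sketch of the pre-classification phase: grayscale conversion,
    threshold segmentation, and edge detection (generic stand-ins)."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # luminance grayscale
    mask = gray > threshold                         # foreground vs. background
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > edge_min             # gradient-magnitude edges
    return gray, mask, edges

# Synthetic 8x8 "apple" image: a bright square on a dark background.
img = np.zeros((8, 8, 3))
img[2:6, 2:6] = 0.9
gray, mask, edges = preprocess(img)
```

The segmentation mask and edge map would then be summarized into feature vectors for the second-phase neural network classifier.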

Keywords: image processing, neural network, apple, intelligent system

Procedia PDF Downloads 399
1168 Continuous Improvement Programme as a Strategy for Technological Innovation in Developing Nations. Nigeria as a Case Study

Authors: Sefiu Adebowale Adewumi

Abstract:

A continuous improvement programme (CIP) adopts an approach that improves organizational performance in small incremental steps over time. In this approach, it is not the size of each step that is important, but the likelihood that the improvements will be ongoing. Many companies in developing nations are now complementing continuous improvement with innovation, which is the successful exploitation of new ideas. The focus areas of CIP varied with the size of the organizations and with their generic classification. Product quality was the prevalent focus in the manufacturing industry, while manpower training and retraining and marketing strategy were emphasized for improvement in the service, transport and supply industries. However, a focus on innovation in raw materials, processes and methods is needed, because these are the critical factors that influence product quality in the manufacturing industries.

Keywords: continuous improvement programme, developing countries, generic classifications, technological innovation

Procedia PDF Downloads 190
1167 Compression Strength of Treated Fine-Grained Soils with Epoxy or Cement

Authors: M. Mlhem

Abstract:

Geotechnical engineers face many problematic soils during construction and can either replace these soils with more appropriate ones or attempt to improve the engineering properties of the soil through a suitable stabilization technique. In most cases, improving soils is more environmentally friendly, easier and more economical than the alternatives. Soil stabilization is applied by introducing a cementing agent or by injecting a substance to fill the pore volume. Chemical stabilizers are divided into two groups: traditional agents such as cement or lime, and non-traditional agents such as polymers. This paper studies the effect of epoxy additives on the compression strength of four types of soil and then compares it with the effect of cement on the compression strength of the same soils. Overall, the epoxy additives are more effective in increasing the strength of different types of soils regardless of their classification. On the other hand, there was no clear relation between the studied parameters (liquid limit, percentage passing the No. 200 sieve, unit weight) and the strength of the samples for the different types of soils.

Keywords: additives, clay, compression strength, epoxy, stabilization

Procedia PDF Downloads 128
1166 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water use in agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessing plant functional traits (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring change, and mapping irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitation between irrigations. Combining remote sensing (RS), the Internet of Things (IoT), artificial intelligence (AI), and machine learning algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review, structured by surveying about 100 recent research studies, that analyzes varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based variable rate application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral bands for optimizing classification techniques, and to analyze proximal optical sensors (POSs) for monitoring change.
The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), as controlled by soil moisture levels and crop sensitivity to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the reasons for, and magnitudes of, the errors reported by recent studies employing different approaches in the three proposed parts. Finally, as an overall conclusion, the review decomposes the different approaches into optimized indices, calibration methods for the sensors, thresholding and prediction models prone to error, and improvements in classification accuracy for mapping change.

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 71
1165 Challenges and Opportunities: One Stop Processing for the Automation of Indonesian Large-Scale Topographic Base Map Using Airborne LiDAR Data

Authors: Elyta Widyaningrum

Abstract:

LiDAR data acquisition has been recognized as one of the fastest solutions for providing the base data for topographic mapping in Indonesia. The challenge of accelerating the provision of large-scale topographic base maps, which serve as the basis for development planning, creates the opportunity to implement an automated scheme in the map production process. One-stop processing will also accelerate map provision, especially in conforming to the Indonesian fundamental spatial data catalog derived from ISO 19110 and in supporting geospatial database integration. Thus, automated LiDAR classification, DTM generation and feature extraction will be conducted in a single GIS software environment to form all layers of the topographic base maps. The quality of the automated topographic base map will be assessed and analyzed in terms of its completeness, correctness, contiguity, consistency and possible customization.
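One step of such an automated pipeline, turning ground-classified LiDAR returns into a DTM raster, can be sketched as below. The lowest-return-per-cell rule, the cell size, and the toy coordinates are illustrative assumptions, not the production workflow described in the abstract.

```python
import numpy as np

# Grid ground-classified LiDAR returns into a DTM by keeping the lowest
# elevation per cell (the lowest return approximates the bare terrain).
points = np.array([[0.2, 0.3, 10.1],    # x, y, z of ground-classified returns
                   [0.7, 0.4, 10.5],
                   [1.3, 0.8, 11.0],
                   [1.6, 0.2, 10.8]])
cell = 1.0                               # illustrative cell size in map units
ix = (points[:, 0] // cell).astype(int)
iy = (points[:, 1] // cell).astype(int)
dtm = np.full((ix.max() + 1, iy.max() + 1), np.nan)
for i, j, z in zip(ix, iy, points[:, 2]):
    if np.isnan(dtm[i, j]) or z < dtm[i, j]:
        dtm[i, j] = z
```

In a one-stop workflow, this raster step would sit between the automated point classification and the downstream feature extraction, all inside the same GIS environment.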

Keywords: automation, GIS environment, LiDAR processing, map quality

Procedia PDF Downloads 369
1164 Human Errors in IT Services, HFACS Model in Root Cause Categorization

Authors: Kari Saarelainen, Marko Jantti

Abstract:

Trending the root causes of IT service incidents and problems is an important part of proactive problem management and service improvement. Human-error-related root causes are an important category in IT service management as well, although their proportion among root causes is smaller than in other industries. The research problem in this study is: how should the root causes of incidents related to human errors be categorized in an ITSM organization to effectively support service improvement? Categorization based on IT service management processes and on the Human Factors Analysis and Classification System (HFACS) taxonomy was examined in a case study. HFACS is widely used for human error root cause categorization across many industries. Combining these two categorization models in a two-dimensional matrix was found to be effective, yet impractical for daily work.
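The two-dimensional matrix can be sketched as a simple cross-tabulation of incident root causes by ITSM process and HFACS level. The process names and incident records below are illustrative placeholders (the HFACS level names follow the standard taxonomy), not the case study's data.

```python
from collections import Counter

# Each incident's root cause tagged on two axes: ITSM process, HFACS level.
incidents = [
    ("Change Management", "Unsafe Acts"),
    ("Change Management", "Preconditions for Unsafe Acts"),
    ("Incident Management", "Unsafe Acts"),
    ("Release Management", "Unsafe Supervision"),
    ("Change Management", "Unsafe Acts"),
]
matrix = Counter(incidents)                    # (process, level) -> count
top_cell, top_count = matrix.most_common(1)[0]
```

Trending then reduces to watching which matrix cells grow over time; the impracticality noted in the abstract comes from having to assign two taxonomy labels to every incident in daily work.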

Keywords: IT service management, ITIL, incident, problem, HFACS, swiss cheese model

Procedia PDF Downloads 490
1163 Function Approximation with Radial Basis Function Neural Networks via FIR Filter

Authors: Kyu Chul Lee, Sung Hyun Yoo, Choon Ki Ahn, Myo Taeg Lim

Abstract:

Recent experimental evidence has shown that, thanks to fast convergence and good accuracy, neural network training via the extended Kalman filter (EKF) method is widely applied. However, in the presence of uncertain system dynamics or modeling error, the performance of the method is unreliable. To overcome this problem, this paper proposes a new finite impulse response (FIR) filter based learning algorithm to train radial basis function neural networks (RBFN) for nonlinear function approximation. Compared to the EKF training method, the proposed FIR filter training method is more robust to such conditions. Furthermore, the number of centers is considered, since it affects the performance of the approximation.
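The RBFN structure itself, and the role of the number of centers, can be sketched with a batch least-squares fit of the output weights. This is a stand-in for the recursive EKF/FIR training compared in the abstract: the basis-function design matrix is the same, only the weight-estimation method differs.

```python
import numpy as np

def rbf_design(x, centers, width=0.5):
    """Gaussian radial-basis activations for 1-D inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

# Approximate sin(x) with an RBFN; output weights fit by batch least
# squares (a stand-in for the recursive EKF/FIR training discussed above).
x = np.linspace(-3, 3, 100)
y = np.sin(x)
centers = np.linspace(-3, 3, 12)     # the number of centers matters, as noted
Phi = rbf_design(x, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
max_error = np.abs(Phi @ w - y).max()
```

With too few centers the fit underresolves the target; with the 12 used here the worst-case error over the grid is already small.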

Keywords: extended Kalman filter, classification problem, radial basis function networks (RBFN), finite impulse response (FIR) filter

Procedia PDF Downloads 458
1162 Using Machine Learning to Monitor the Condition of the Cutting Edge during Milling Hardened Steel

Authors: Pawel Twardowski, Maciej Tabaszewski, Jakub Czyżycki

Abstract:

The main goal of this work was to use machine learning to predict cutting-edge wear. The research was carried out while milling hardened steel with sintered carbide cutters at various cutting speeds. During the tests, cutting-edge wear was measured and vibration acceleration signals were recorded. Appropriate measures were determined from the vibration signals and served as input data in the machine learning process. Two approaches were used in this work. The first involved a two-state classification of the cutting edge: suitable or unfit for further work. The second approach predicted the cutting-edge state from the vibration signals. The research results show that appropriate use of machine learning algorithms gives excellent results for monitoring the cutting edge during the process.
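The first approach, two-state classification from vibration measures, can be sketched as below. The measure set (RMS, peak, crest factor), the synthetic signals, and the nearest-centroid classifier are all illustrative assumptions standing in for the study's actual measures, data, and algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def vibration_measures(signal):
    """Scalar measures from a vibration-acceleration signal; the exact
    measure set used in the study is assumed here."""
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.abs(signal).max()
    return np.array([rms, peak, peak / rms])     # RMS, peak, crest factor

# Synthetic signals: edge wear is assumed to raise vibration energy.
sharp = np.stack([vibration_measures(rng.normal(0, 1.0, 1024)) for _ in range(20)])
worn = np.stack([vibration_measures(rng.normal(0, 2.5, 1024)) for _ in range(20)])
X = np.vstack([sharp, worn])
y = np.array([0] * 20 + [1] * 20)                # 0 = suitable, 1 = unfit

# Two-state classification with a minimal nearest-centroid classifier.
c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

The second approach in the abstract would replace the binary label with a continuous wear value and the classifier with a regression model over the same measures.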

Keywords: milling of hardened steel, tool wear, vibrations, machine learning

Procedia PDF Downloads 60
1161 The Need for Interdisciplinary Approach in Studying Archaeology: An Evolving Cultural Science

Authors: Inalegwu Stephany Akipu

Abstract:

Archaeology, being the study of man's past using the materials he left behind, has been argued by some scholars to belong among the sciences, while others are of the opinion that it does not deserve the status of a 'science'. However divergent scholars' opinions may be on the classification of archaeology as a science or as one of the humanities, the discipline has, no doubt, greatly aided in shaping the history of man's past. Through the different stages the discipline has passed, it has encountered some challenges. This paper, therefore, attempts to highlight the need to include branches of other disciplines when using archaeology to reconstruct man's history. The objective, of course, is to add to the existing body of knowledge, but specifically to expose the incomparable importance of archaeology as a discipline and to place it on such a high scale that it will not be relegated to the background, as is done in some Nigerian universities. The paper attempts a clarification of some conceptual terms and discusses the developmental stages of archaeology. It further describes the present state of the discipline and concludes with the disciplines that need to be incorporated into the use of archaeology, an evolving cultural science, to achieve the aforementioned interdisciplinary approach.

Keywords: archaeology, cultural, evolution, interdisciplinary, science

Procedia PDF Downloads 331
1160 Water Quality Assessment of Owu Falls for Water Use Classification

Authors: Modupe O. Jimoh

Abstract:

Waterfalls create an ambient environment for tourism and relaxation. They are also potential sources of water supply. Owu waterfall, located in Isin Local Government Area, Kwara State, Nigeria, is the highest waterfall in the West African region, yet none of its potential uses has been fully exploited. Water samples were taken from two sections of the fall and analyzed for various water quality parameters. The results obtained include pH (6.71 ± 0.1), biochemical oxygen demand (4.2 ± 0.5 mg/l), chemical oxygen demand (3.07 ± 0.01 mg/l), dissolved oxygen (6.59 ± 0.6 mg/l), turbidity (4.43 ± 0.11 NTU), total dissolved solids (8.2 ± 0.09 mg/l), total suspended solids (18.25 ± 0.5 mg/l), chloride ion (0.48 ± 0.08 mg/l), calcium ion (0.82 ± 0.02 mg/l), magnesium ion (0.63 ± 0.03 mg/l) and nitrate ion (1.25 ± 0.01 mg/l). The results were compared to the World Health Organization's standard for drinking water and the Nigerian standard for drinking water. From the comparison, it can be deduced that, owing to the biochemical oxygen demand value, the water is not suitable for drinking unless it undergoes treatment. However, it is suitable for other classes of water use.
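The screening step, checking each measured parameter against a drinking-water limit, can be sketched as below. The limit values are illustrative placeholders only, not the official WHO or Nigerian standard figures, which should be substituted in a real assessment.

```python
# Flag measured parameters that exceed drinking-water limits.
# Limit values below are illustrative placeholders, not official figures.
measured = {"pH": 6.71, "BOD (mg/l)": 4.2, "turbidity (NTU)": 4.43, "TDS (mg/l)": 8.2}
limits = {"BOD (mg/l)": 3.0, "turbidity (NTU)": 5.0, "TDS (mg/l)": 500.0}
exceeded = [name for name, value in measured.items()
            if name in limits and value > limits[name]]
```

Under these placeholder limits, only the BOD value is flagged, mirroring the abstract's conclusion that BOD alone makes the untreated water unsuitable for drinking.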

Keywords: Owu falls, waterfall, water quality, water quality parameters, water use

Procedia PDF Downloads 181
1159 The Planning Criteria of Block-Unit Redevelopment to Improve Residential Environment: Focused on Redevelopment Project in Seoul

Authors: Hong-Nam Choi, Hyeong-Wook Song, Sungwan Hong, Hong-Kyu Kim

Abstract:

In Korea, the elements that decide the quality of the residential environment are not only diverse but also show deviation. However, people do not consider these elements and instead settle for a uniform style of residential environment that focuses on the construction-driven development of apartment housing and business-based plans. Recently, block-unit redevelopment has become the standout alternative to standardized redevelopment projects, but construction becomes inefficient because of indefinite planning criteria. This research therefore analyzes and categorizes the development methods and legal grounds of redevelopment project districts, their plan determinants and applicable standards. The purpose of this study is to provide a basis for a compatibility analysis of the planning standards to be developed in the future.

Keywords: shape restrictions, improvement of regulation, diversity of residential environment, classification of redevelopment project, planning criteria of redevelopment, special architectural district (SAD)

Procedia PDF Downloads 485