Search results for: conventionally manufacturing techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8336

6446 Effects of Auxetic Antibacterial Zwitterion Carboxylate and Sulfonate Copolymer Hydrogels for Diabetic Wound Healing Application

Authors: Udayakumar Veerabagu, Franck Quero

Abstract:

Zwitterionic polymers, which bear an equimolar number of homogeneously distributed anionic and cationic groups along their polymer chains, have generally been viewed as a new class of antimicrobial and non-fouling materials. They offer broad versatility for chemical modification and hence great freedom for accurate molecular design. This study explores the effectiveness of auxetic zwitterion carboxylate/sulfonate hydrogels in a diabetes-induced mouse model. A series of silver-doped auxetic zwitterion carboxylate/sulfonate/vinylaniline copolymer hydrogels was designed and fabricated by 3D printing. The zwitterion monomers were characterized by FT-IR and NMR techniques. The effects of changing the monomers and of different Ag loading ratios over the zwitterion on the antimicrobial properties and biocompatibility of the final hydrogel materials will be investigated in detail. The synthesized auxetic hydrogels were characterized using a wide range of techniques to help establish the relationship between the molecular-level and macroscopic properties of these materials, including mechanical properties, antibacterial activity, biocompatibility, and wound healing ability. The comparative studies and results of this work provide new insights and guide the choice of better auxetic structured materials for a broad spectrum of wound healing applications in animal models. We expect this approach to provide a versatile and robust platform for biomaterial design that could lead to promising treatments for wound healing applications.

Keywords: auxetic, zwitterion, carboxylate, sulfonate, polymer, wound healing

Procedia PDF Downloads 124
6445 A Grey-Box Text Attack Framework Using Explainable AI

Authors: Esther Chiramal, Kelvin Soh Boon Kai

Abstract:

Explainable AI is a strategy used to interpret complex black-box model predictions in human-interpretable language. It provides the evidence required to deploy trustworthy and reliable AI systems. On the other hand, however, it also opens the door to locating possible vulnerabilities in an AI model. Traditional adversarial text attacks use word substitution, data augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and impractical, as they can be easily detected by humans, e.g., changing the word "Poor" to "Rich". We propose a simple yet effective grey-box cum black-box approach that does not require knowledge of the model, using a set of surrogate Transformer/BERT models to perform the attack with explainable AI techniques. As Transformers are the current state-of-the-art models for almost all Natural Language Processing (NLP) tasks, an attack generated on BERT1 is transferable to BERT2. This transferability is made possible by the attention mechanism in the transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how transformers learn by attacking several surrogate transformer variants, each based on a different architecture. We demonstrate that this approach is highly effective at generating semantically sound sentences, by changing as little as one word, that are not detectable by humans while still fooling other BERT models.
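
As a rough illustration of the surrogate-model idea described above, the sketch below scores word importance on one surrogate sentiment model with a simple leave-one-out explainability proxy, substitutes the most influential word, and checks whether the perturbation transfers to a second model. The model names, the synonym substitution, and the leave-one-out scoring are assumptions for the sketch, not the authors' actual pipeline.

```python
# Minimal sketch of a grey-box word-importance attack using surrogate models.
# Model names and the substitution are illustrative assumptions.
from transformers import pipeline

surrogate = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")
victim = pipeline("sentiment-analysis",
                  model="textattack/bert-base-uncased-SST-2")

def word_importance(sentence):
    """Leave-one-out importance: drop each word, measure the score change."""
    base = surrogate(sentence)[0]
    words = sentence.split()
    scores = []
    for i in range(len(words)):
        probe = " ".join(words[:i] + words[i + 1:])
        out = surrogate(probe)[0]
        # A label flip counts as maximal importance for that word.
        drop = base["score"] - out["score"] if out["label"] == base["label"] else base["score"]
        scores.append((drop, i))
    return max(scores)  # (importance, index) of the most influential word

sentence = "the film was a quiet, moving triumph"
_, idx = word_importance(sentence)
words = sentence.split()
words[idx] = "success"          # hypothetical single-word substitution
adversarial = " ".join(words)

# Check whether the attack crafted on the surrogate transfers to the victim.
print(surrogate(adversarial)[0], victim(adversarial)[0])
```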

Keywords: BERT, explainable AI, Grey-box text attack, transformer

Procedia PDF Downloads 123
6444 The Clustering of Multiple Sclerosis Subgroups through L2 Norm Multifractal Denoising Technique

Authors: Yeliz Karaca, Rana Karabudak

Abstract:

Multifractal denoising techniques are used to identify significant attributes by removing noise from a dataset. Magnetic resonance imaging (MRI) is the most sensitive method for identifying chronic disorders of the nervous system such as Multiple Sclerosis (MS). MRI and Expanded Disability Status Scale (EDSS) data belonging to 120 individuals with one of the MS subgroups (Relapsing Remitting MS (RRMS), Secondary Progressive MS (SPMS), Primary Progressive MS (PPMS)) as well as 19 healthy individuals in the control group were used in this study. The study comprised the following stages: (i) the L2 Norm Multifractal Denoising technique, one of the multifractal techniques, was applied to the MS data (MRI and EDSS), yielding a new dataset; (ii) this new dataset was fed to the K-Means and Fuzzy C-Means (FCM) clustering algorithms, which are among the unsupervised methods, and their clustering performances were compared; (iii) excellent performance was obtained in identifying significant attributes in the MS dataset through the multifractal denoising (L2 Norm) technique using the K-Means and FCM algorithms on the MS subgroups and the control group of healthy individuals. According to the clustering results on the MS subgroups, successful clustering was achieved with both K-Means and FCM when the L2 norm multifractal denoising technique was applied to the MS dataset. Clustering performance was more successful with the denoised dataset (L2_Norm MS Data Set), in which significant attributes were obtained by applying the L2 Norm denoising technique.
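
A minimal sketch of stage (ii) follows, comparing K-Means with a plain-NumPy Fuzzy C-Means on a denoised feature matrix. The synthetic data and the placeholder denoising step are assumptions, since the L2-norm multifractal denoising itself is beyond a short example.

```python
# Illustrative comparison of K-Means and Fuzzy C-Means on a denoised feature
# matrix. The placeholder stands in for the L2-norm multifractal denoising;
# data, cluster count, and parameters are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Four synthetic groups, mimicking RRMS/SPMS/PPMS plus a control group.
X = np.vstack([rng.normal(m, 0.6, size=(40, 5)) for m in (0.0, 3.0, 6.0, 9.0)])
X_denoised = X  # placeholder for the L2-norm multifractal denoising output

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain NumPy Fuzzy C-Means: returns centers and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

km_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_denoised)
_, U = fuzzy_c_means(X_denoised, c=4)
fcm_labels = U.argmax(axis=1)
print("K-Means sizes:", np.bincount(km_labels), "FCM sizes:", np.bincount(fcm_labels))
```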

Keywords: clinical decision support, clustering algorithms, multiple sclerosis, multifractal techniques

Procedia PDF Downloads 152
6443 Antibacterial Zwitterion Carboxylate and Sulfonate Copolymer Auxetic Hydrogels for Diabetic Wound Healing Application

Authors: Udayakumar Veerabagu, Franck Quero

Abstract:

Zwitterion carboxylate and sulfonate polymers, which bear an equimolar number of homogeneously distributed anionic and cationic groups along their polymer chains, have generally been viewed as a new class of antimicrobial and non-fouling materials. They offer broad versatility for chemical modification and hence great freedom for accurate molecular design. This study explores the effectiveness of auxetic zwitterion carboxylate/sulfonate hydrogels in a diabetes-induced mouse model. A series of silver-doped auxetic zwitterion carboxylate/sulfonate/vinylaniline copolymer hydrogels was designed and fabricated by 3D printing. The zwitterion monomers were characterized by FT-IR and NMR techniques. The effects of changing the monomers and of different Ag loading ratios over the zwitterion on the antimicrobial properties and biocompatibility of the final hydrogel materials will be investigated in detail. The synthesized auxetic hydrogels were characterized using a wide range of techniques to help establish the relationship between the molecular-level and macroscopic properties of these materials, including mechanical properties, antibacterial activity, biocompatibility, and wound healing ability. The comparative studies and results of this work provide new insights and guide the choice of better auxetic structured materials for a broad spectrum of wound healing applications in animal models. We expect this approach to provide a versatile and robust platform for biomaterial design that could lead to promising treatments for wound healing applications.

Keywords: auxetic, zwitterion, carboxylate, sulfonate, polymer, wound healing

Procedia PDF Downloads 137
6442 Pellegrini-Stieda Syndrome: A Physical Medicine and Rehabilitation Approach

Authors: Pedro Ferraz-Gameiro

Abstract:

Introduction: The Pellegrini-Stieda lesion is the result of post-traumatic calcification and/or ossification of the medial collateral ligament (MCL) of the knee. When this calcification is accompanied by gonalgia and limitation of knee flexion, it is called Pellegrini-Stieda syndrome. The pathogenesis is probably the calcification of a post-traumatic hematoma at least three weeks after the initial trauma, or secondary to repetitive microtrauma. On anteroposterior radiographs, a Pellegrini-Stieda lesion appears as a linear vertical ossification or calcification of the proximal portion of the MCL, usually near the medial femoral condyle. Patients with Pellegrini-Stieda syndrome present knee pain associated with loss of range of motion. Treatment is usually conservative, with analgesic and anti-inflammatory drugs, either systemic or intra-articular. Physical medicine and rehabilitation techniques associated with shock wave therapy can be a way to reduce pain and inflammation. Patients who maintain instability with significant limitation of knee mobility may require surgical excision. Methods: Research was done using PubMed Central with the search term "Pellegrini-Stieda syndrome". Discussion/conclusion: Medical treatment is the rule, with initial rest, anti-inflammatory drugs, and physiotherapy. If left untreated, this ossification can potentially form a significant bone mass, which can compromise the range of motion of the knee. Physical medicine and rehabilitation techniques associated with shock wave therapy are a way to reduce pain and inflammation.

Keywords: knee, Pellegrini-Stieda syndrome, rehabilitation, shock waves therapy

Procedia PDF Downloads 118
6441 Optimum Design of Helical Gear System on Basis of Maximum Power Transmission Capability

Authors: Yasaman Esfandiari

Abstract:

Mechanical engineering has always dealt with the amplification of input power in power trains. One way to achieve this goal is to use gears to change the amplitude and direction of torque and speed. However, the gears should be optimally designed to best achieve these objectives. In this study, helical gear systems are optimized to achieve maximum power transmission capability. Material selection, space restrictions, available manufacturing facilities, the probability of tooth breakage, and tooth wear are taken into account, and the governing equations are derived. Finally, MATLAB code was written to solve the optimization problem, and the results were verified.
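
A heavily simplified sketch of such an optimization is given below: it maximizes transmissible power over module and face width subject to a Lewis bending-stress limit and space restrictions. All parameter values and the single-constraint model are assumptions for illustration; the study's full formulation also accounts for tooth wear and material selection, and used MATLAB rather than Python.

```python
# Illustrative, simplified gear optimization: maximize transmitted power
# subject to a Lewis bending-stress limit and space restrictions.
# All numbers are assumed; not the paper's actual model.
import numpy as np
from scipy.optimize import minimize

z, rpm = 20, 1500              # number of teeth and shaft speed (assumed)
Y = 0.32                       # Lewis form factor (assumed)
sigma_allow = 200e6            # allowable bending stress, Pa
d_max, b_max = 0.20, 0.05      # space restrictions: pitch diameter, face width (m)

def power(x):                  # x = [module m (m), face width b (m)]
    m, b = x
    d = m * z                               # pitch diameter
    v = np.pi * d * rpm / 60.0              # pitch-line velocity
    Wt = sigma_allow * b * m * Y            # max tangential load at stress limit
    return Wt * v                           # transmissible power, W

res = minimize(lambda x: -power(x), x0=[0.003, 0.02],
               bounds=[(0.001, d_max / z), (0.005, b_max)])
m_opt, b_opt = res.x
print(f"module = {m_opt*1000:.1f} mm, face width = {b_opt*1000:.1f} mm, "
      f"P_max = {power(res.x)/1000:.1f} kW")
```

As expected for this simplified model, the space restrictions bind at the optimum, which mirrors the abstract's point that geometry constraints drive the design.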

Keywords: design, gears, Matlab, optimization

Procedia PDF Downloads 231
6440 Synthesis, Structural, Spectroscopic and Nonlinear Optical Properties of New Picolinate Complex of Manganese (II) Ion

Authors: Ömer Tamer, Davut Avcı, Yusuf Atalay

Abstract:

A novel picolinate complex of the manganese(II) ion, [Mn(pic)2] [pic: picolinate or 2-pyridinecarboxylate], was prepared and fully characterized by single-crystal X-ray structure determination. The manganese(II) complex was characterized by FT-IR, FT-Raman, and UV-Vis spectroscopic techniques. The C=O, C=N, and C=C stretching vibrations were found to be strong and simultaneously active in the IR and Raman spectra. To support these experimental techniques, density functional theory (DFT) calculations were performed with Gaussian 09W. Although supramolecular interactions have some influence on the molecular geometry in the solid-state phase, the calculated data show that the predicted geometries reproduce the structural parameters. Molecular modeling and the calculations of the IR, Raman, and UV-Vis spectra were performed at the DFT level. The nonlinear optical (NLO) properties of the synthesized complex were evaluated by determining the dipole moment (µ), polarizability (α), and hyperpolarizability (β). The obtained results demonstrated that the manganese(II) complex is a good candidate for an NLO material. The stability of the molecule arising from hyperconjugative interactions and charge delocalization was analyzed using natural bond orbital (NBO) analysis. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO), also known as the frontier molecular orbitals, were simulated, and the obtained energy gap confirmed that charge transfer occurs within the manganese(II) complex. The molecular electrostatic potential (MEP) of the synthesized manganese(II) complex displays the electrophilic and nucleophilic regions: the most negative region is located over the carboxyl O atoms, while the positive region is located over the H atoms.

Keywords: DFT, picolinate, IR, Raman, nonlinear optic

Procedia PDF Downloads 480
6439 Modern Pilgrimage Narratives and India’s Heterogeneity

Authors: Alan Johnson

Abstract:

This paper focuses on modern pilgrimage narratives about sites affiliated with Indian religious expressions located both within and outside India. The paper uses a multidisciplinary approach to examine poetry, personal essays, and online attestations of pilgrimage to illustrate how non-religious ideas coexist with outwardly religious ones, exemplifying a characteristically Indian form of syncretism that pre-dates Western ideas of pluralism. The paper argues that the syncretism on display in these modern creative works refutes the current exclusionary vision of India as a primordially Hindu-nationalist realm. A crucial premise of this argument is that narrative's intrinsic heteroglossia, so evident in India's historically rich variety of stories and symbols, belies this reactionary version of Hindu nationalism. Equally important to this argument, therefore, is the vibrancy of Hindu sites outside India, such as the Batu Caves temple complex in Kuala Lumpur, Malaysia. The literary texts examined in this paper include, first, Arun Kolatkar's famous 1976 collection of poems, titled Jejuri, about a visit to the pilgrimage site of the same name in Maharashtra. Here, the modern, secularized visitor from Bombay (Mumbai) contemplates the effect of the temple complex on himself and on the other, more worshipful visitors. Kolatkar's modernist poems reflect the narrator's typically modern-Indian ambivalence toward holy ruins, for although they do not evoke a conventionally religious feeling in him, they nevertheless possess an aura of timelessness that questions the narrator's time-conscious sensibility. The paper bookends Kolatkar's Jejuri with considerations of an early-twentieth-century text, online accounts by visitors to the Batu Caves, and a recent, more conventional Hindu account of pilgrimage. For example, the pioneering graphic artist Mukul Chandra Dey published My Pilgrimages to Ajanta and Bagh in 1917, in which he devotes an entire chapter to the life of the Buddha as a means of illustrating the layering of stories that is a characteristic feature of sacred sites in India. In a different but still syncretic register, Jawaharlal Nehru, India's first prime minister and a committed secularist, proffers India's ancient pilgrimage network as a template for national unity in his classic 1946 autobiography The Discovery of India. Narrative is the perfect vehicle for highlighting this layering of sensibilities, for a single text can juxtapose the pilgrim-narrator's description with that of a far older pilgrimage, a juxtaposition that establishes an imaginative connection between otherwise distanced actors, and between them and the reader.

Keywords: India, literature, narrative, syncretism

Procedia PDF Downloads 143
6438 Scientific and Technical Basis for the Application of Textile Structures in Glass Using Pate De Verre Technique

Authors: Walaa Hamed Mohamed Hamza

Abstract:

Textile structures are the ways in which the warp and weft threads are interlaced on the loom to form the woven fabric. Different methods of interlacing the warp and weft produce different textile structures, which differ from each other in their surface appearance, including the so-called simple textile structures. Textile structures are the basis of woven fabric, through which aesthetic values can be achieved in the textile industry by weaving the warp threads with the weft at varying degrees of interlacing, which may reach total control of one of the two thread sets over the other. Hence the idea of how art and design can be created from different textile structures using the modern pate de verre technique, in the creation of designs suitable for glass products employed in interior architecture. The research problem: textile structures, in general, have a significant impact on the appearance of fabrics in terms of form and aesthetics; how can we benefit from the characteristics of different textile structures in glass designs with different artistic values? The research achieves its goal by investing simple textile structures in innovative artistic designs using the pate de verre technique, as well as by using the designs resulting from the textile structures in exterior architecture to add various aesthetic values. The importance of the research lies in the revival of heritage using ancient techniques, in the synergy between different fields of applied arts such as glass and textiles, and in the study of the different and diverse effects resulting from each textile structure and the possibility of their use in various designs in interior architecture. The research demonstrates that, by investing in simple textile structures, innovative artistic designs produced using the pate de verre technique can be used in interior architecture.

Keywords: glass, interior architecture, pate de verre, textile structures

Procedia PDF Downloads 277
6437 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Considering the ever-increasing population and the industrialization of humankind's developmental trend, we are no longer able to detect the toxins produced in food products using traditional techniques, because the isolation time for food products is not cost-effective and, in most cases, the precision of practical techniques such as bacterial cultivation suffers from operator errors or errors in the mixtures used. Hence, with the advent of nanotechnology, the design of selective and smart sensors is one of the greatest industrial revolutions in the quality control of food products, able to identify the quantity and toxicity of bacteria within a few minutes and with very high precision. Methods and Materials: In this technique, a sensor based on the attachment of a bacterial antibody to nanoparticles was used. As the absorption basis for the recognition of the bacterial toxin, medium-sized silica nanoparticles of 10 nm (Notrino brand), in the form of a solid powder, were utilized. The suspension produced from the agent-linked nanosilica conjugated to the bacterial antibody was then placed near samples of distilled water contaminated with Staphylococcus aureus toxin at a density of 10⁻³, so that if any toxin existed in the sample, a connection between the toxin antigen and the antibody would form. Finally, the light absorption related to the binding of the antigen to the particle-attached antibody was measured using spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was also used as a control. The accuracy of the test was monitored by using serial dilutions (10⁻⁶) of an overnight cell culture of Staphylococcus spp. bacteria (OD600: 0.02 ≈ 10⁷ cells). It showed that the sensitivity of PCR is 10 bacteria per ml of cells within a few hours. Results: The results indicate that the sensor detects down to a density of 10⁻⁴. Additionally, the sensitivity of the sensors was examined after 60 days: the sensor gave confirmatory results up to day 56, after which its sensitivity started to decrease. Conclusions: The advantages of practical nanobiosensors over conventional methods such as culture and biotechnological methods (e.g., the polymerase chain reaction) are accuracy, sensitivity, and specificity. They also reduce the detection time from hours to 30 minutes.

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 372
6436 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
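
A minimal sketch of the text side of such a pipeline is shown below: TF-IDF features extracted from report text feed a Random Forest classifier. The four inline reports and labels are invented for illustration; in the study's setup, image-derived features would be concatenated with these text features before classification.

```python
# Minimal sketch: TF-IDF features from radiology report text feeding a
# Random Forest. The tiny inline dataset is invented for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

reports = [
    "heart size normal lungs are clear no acute disease",
    "large right pleural effusion with adjacent atelectasis",
    "no focal consolidation pneumothorax or effusion",
    "diffuse bilateral opacities concerning for edema",
]
labels = ["normal", "abnormal", "normal", "abnormal"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(reports, labels)
print(model.predict(["clear lungs no effusion or consolidation"]))
```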

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 21
6435 Effects of Waist-to-Hip Ratio and Visceral Fat Measurements Improvement on Offshore Petrochemical Company Shift Employees' Work Efficiency

Authors: Essam Amerian

Abstract:

The aim of this study was to investigate the effects of improving waist-to-hip ratio (WHR) and visceral fat components on the health of shift workers in an offshore petrochemical company. A total of 100 male shift workers participated in the study, with an average age of 40.5 years and an average BMI of 28.2 kg/m². The study employed a randomized controlled trial design, with participants assigned to either an intervention group or a control group. The intervention group received a 12-week program that included dietary counseling, physical activity recommendations, and stress management techniques. The control group received no intervention. The outcomes measured were changes in WHR, visceral fat components, blood pressure, and lipid profile. The results showed that the intervention group had a statistically significant improvement in WHR (p<0.001) and visceral fat components (p<0.001) compared to the control group. Furthermore, there were statistically significant improvements in systolic blood pressure (p=0.015) and total cholesterol (p=0.034) in the intervention group compared to the control group. These findings suggest that implementing a 12-week program that includes dietary counseling, physical activity recommendations, and stress management techniques can effectively improve WHR, visceral fat components, and cardiovascular health among shift workers in an offshore petrochemical company.

Keywords: body composition, waist-hip-ratio, visceral fat, shift worker, work efficiency

Procedia PDF Downloads 63
6434 Different Processing Methods to Obtain a Carbon Composite Element for Cycling

Authors: Maria Fonseca, Ana Branco, Joao Graca, Rui Mendes, Pedro Mimoso

Abstract:

The present work focuses on the production of a carbon composite element for cycling through two different techniques, namely blow-molding and high-pressure resin transfer moulding (HP-RTM). The main objective of this work is to compare both processes for producing carbon composite elements for the cycling industry. It is well known that carbon composite components for cycling are produced mainly through blow-molding; however, this technique depends strongly on manual labour, resulting in a time-consuming production process. Comparatively, HP-RTM offers a more automated process, which should lead to higher production rates. Nevertheless, the elements produced through both techniques must be compared in order to assess whether the final products comply with the required standards of the industry. The main difference between these techniques lies in the material used. Blow-molding uses carbon prepreg (carbon fibres pre-impregnated with a resin system), and the material is laid up by hand, piece by piece, on a mould or a hard male tool. After that, the material is cured at high temperature. In the HP-RTM technique, on the other hand, dry carbon fibres are placed in a mould, and resin is then injected at high pressure. After research into the best material systems (prepregs and braids) and suppliers, an element similar to a handlebar was designed and constructed. The next step was to perform FEM simulations in order to determine the best layup of the composite material. The simulations were done for the prepreg material, and the obtained layup was transposed to the braids. The selected material for the blow-molding technique was a prepreg with T700 carbon fibre (24K) and an epoxy resin system. For HP-RTM, carbon fibre elastic UD tubes and ±45° braids were used, with both 3K and 6K filaments per tow, and the resin system was likewise an epoxy. After the simulations for the prepreg material, the optimized layup was [45°, -45°, 45°, -45°, 0°, 0°]. For HP-RTM, the transposed layup was [±45° (6K); 0° (6K); partial ±45° (6K); partial ±45° (6K); ±45° (3K); ±45° (3K)]. The mechanical tests showed that both elements can withstand the maximum load (in this case, 1000 N); however, the one produced through blow-molding can support higher loads (≈1300 N against ≈1100 N for HP-RTM). Regarding the fibre volume fraction (FVF), the HP-RTM element has a slightly higher value (>61% compared to 59% for the blow-molding technique). Optical microscopy showed that both elements have a low void content. In conclusion, the elements produced using HP-RTM are comparable to those produced through blow-molding, both in mechanical testing and in visual aspect. Nevertheless, there is still room for improvement in the HP-RTM elements, since the layup of the braids and UD tubes could be further optimized.

Keywords: HP-RTM, carbon composites, cycling, FEM

Procedia PDF Downloads 117
6433 Wastewater Treatment by Modified Bentonite

Authors: Mecabih Zohra

Abstract:

Water is an important element of many manufacturing processes that use large amounts of chemical substances, and industrial discharge is likely to contaminate the water returned to rivers. These contaminants can be high in suspended solids and chemical oxygen demand (COD). In this study, urban wastewater of the city of Sidi Bel Abbes (Algeria) was treated by adsorption using modified bentonite from Magnia (Algeria), with batch experiments conducted to investigate its equilibrium characteristics and kinetics. The purified bentonite was characterized by CEC, XRF, BET, FTIR, XRD, SEM, and ²⁷Al NMR spectroscopy. The results showed that the removal of suspended solids exceeded 98.47% and COD removal reached 99.52%. Regarding sorption efficiency, the maximum COD sorption capacities (qm) calculated using the Langmuir model were 156.23, 64.47, and 17.19 mg/g, respectively, over a pH range of 4 to 9.
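
As an illustration of how the reported qm values are obtained, the sketch below fits the Langmuir isotherm q = qm·KL·Ce/(1 + KL·Ce) to batch equilibrium data with nonlinear least squares; the data points are invented for illustration, not the study's measurements.

```python
# Hedged sketch of fitting the Langmuir isotherm to batch adsorption data;
# the (Ce, qe) points below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm * KL * Ce / (1 + KL * Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])     # equilibrium conc., mg/L
qe = np.array([35.0, 60.0, 100.0, 125.0, 143.0, 150.0])  # uptake, mg/g

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[150.0, 0.05])
print(f"qm = {qm:.2f} mg/g, KL = {KL:.4f} L/mg")
```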

Keywords: adsorption, bentonite, COD, wastewater

Procedia PDF Downloads 64
6431 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed based on photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 tall tree species (>17 m) at the individual tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77). With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100%, Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OAs of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
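
A minimal sketch of the variable-selection-plus-SVM step follows, evaluated with overall accuracy and Kappa as in the study. The random feature matrix and the RFE-based selection are stand-ins for the 313 WorldView-3/LiDAR variables and the study's actual selection procedure.

```python
# Sketch: shrink the variable set with recursive feature elimination, then
# tune an SVM and report OA and Kappa. The random data are a placeholder
# for the object-level spectral/LiDAR variables.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 313))          # 313 variables per segmented crown
y = rng.integers(0, 11, size=600)        # 11 tree species (dummy labels)

# Reduce 313 variables to 16 with a linear-SVM ranking, as in the study.
X_sel = RFE(SVC(kernel="linear"), n_features_to_select=16, step=25).fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
svm = GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                   {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01]}, cv=3)
svm.fit(X_tr, y_tr)
pred = svm.predict(X_te)
print("OA:", accuracy_score(y_te, pred), "Kappa:", cohen_kappa_score(y_te, pred))
```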

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 119
6430 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics

Authors: L. Freeborn

Abstract:

Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.

Keywords: neuroimaging studies, research design, second language acquisition, task validity

Procedia PDF Downloads 118
6429 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS-command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
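
The sketch below illustrates the path-context idea on a toy scale: root-to-leaf AST node-type paths become bag-of-paths features for a classifier. Python's ast module stands in for the study's Java/C++ front end, a Random Forest stands in for Code2Vec, and the two snippets (one shell-injection-prone) are invented; it demonstrates the representation, not the paper's model.

```python
# Simplified stand-in for AST path contexts: parse functions, collect
# root-to-leaf node-type paths, and train a classifier on them.
import ast
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

def path_contexts(source):
    """Return root-to-leaf type paths, e.g. 'Module>FunctionDef>Call>Name'."""
    paths = []
    def walk(node, prefix):
        prefix = prefix + [type(node).__name__]
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append(">".join(prefix))
        for child in children:
            walk(child, prefix)
    walk(ast.parse(source), [])
    return " ".join(paths)

vulnerable = "import os\ndef run(cmd):\n    os.system('ping ' + cmd)\n"
safe = "import subprocess\ndef run(cmd):\n    subprocess.run(['ping', cmd])\n"

X = [path_contexts(vulnerable), path_contexts(safe)]
y = [1, 0]  # 1 = OS-command-injection-prone, 0 = safe

clf = make_pipeline(CountVectorizer(token_pattern=r"\S+"),
                    RandomForestClassifier(random_state=0)).fit(X, y)
probe = "import os\ndef ping(host):\n    os.system('ping -c1 ' + host)\n"
print(clf.predict([path_contexts(probe)]))
```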

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 92
6428 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work, we use the discrete Proper Orthogonal Decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, both in large-scale structures (aeronautical structures) and in nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. When using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-POD transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
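
A minimal sketch of the discrete POD on a snapshot database follows: stack the bending and twist fields, subtract the mean, and take the SVD; the left singular vectors are the POD modes, and the split of a mode's energy between the two fields measures coupling strength. The synthetic two-field "simulation" is an assumption standing in for finite element output.

```python
# Minimal sketch of discrete POD extracting dominant coupled modes from a
# snapshot database; the synthetic data replace finite element output.
import numpy as np

# Synthetic snapshots: bending w(x,t) and twist phi(x,t) share a frequency,
# mimicking bending-torsion coupling in a thin-walled beam.
x = np.linspace(0.0, 1.0, 100)
t = np.linspace(0.0, 2.0, 400)
w = np.outer(np.sin(np.pi * x), np.cos(8 * np.pi * t))
phi = 0.3 * np.outer(np.sin(np.pi * x), np.cos(8 * np.pi * t)) \
    + 0.1 * np.outer(np.sin(2 * np.pi * x), np.cos(20 * np.pi * t))

snapshots = np.vstack([w, phi])            # stack both fields: (200, 400)
snapshots -= snapshots.mean(axis=1, keepdims=True)

# POD modes are the left singular vectors; energies come from singular values.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first 3 modes:", energy[:3].round(4))

# Coupling strength: how much of mode 1 lives in each field.
mode1 = U[:, 0]
print("bending share:", np.sum(mode1[:100]**2), "twist share:", np.sum(mode1[100:]**2))
```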

Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams

Procedia PDF Downloads 407
6427 Effect of Post Circuit Resistance Exercise Glucose Feeding on Energy and Hormonal Indexes in Plasma and Lymphocyte in Free-Style Wrestlers

Authors: Miesam Golzadeh Gangraj, Younes Parvasi, Mohammad Ghasemi, Ahmad Abdi, Saeid Fazelifar

Abstract:

The purpose of this study was to determine the effect of glucose feeding on energy and hormonal indexes in plasma and lymphocytes immediately after wrestling-based techniques circuit exercise (WBTCE) in young male freestyle wrestlers. Sixteen wrestlers (weight = 75.45 ± 12.92 kg, age = 22.29 ± 0.90 years, BMI = 26.23 ± 2.64 kg/m²) were randomly divided into two groups: control (water) and glucose (2 g per kg body weight). Blood samples were obtained before exercise, immediately after exercise, and at 90 minutes of the post-exercise recovery period. Glucose (2 g/kg of body weight, 1W/5V) and water (equal volumes) solutions were given immediately after the second blood sampling. Data were analyzed using a repeated-measures ANOVA and a suitable post hoc test (LSD). A significant decrease was observed in lymphocyte glycogen immediately after exercise (P < 0.001). In the experimental group, lymphocyte glycogen concentration increased (P < 0.028) relative to the control group at 90 min post-exercise. Plasma glucose concentrations increased in both groups immediately after exercise (P < 0.05). Plasma insulin concentrations in both groups decreased immediately after exercise, but at 90 min after exercise insulin was significantly increased only in the glucose group (P < 0.001). Our results suggest that the WBTCE protocol can affect cellular energy sources and hormonal responses. Furthermore, glucose consumption can increase lymphocyte glycogen and provide better energy availability within the cell.

Keywords: glucose feeding, lymphocyte, wrestling-based techniques circuit exercise

Procedia PDF Downloads 257
6426 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development of unconventional ship construction and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents modeling and analysis techniques for ship structures using the FE method under complex boundary conditions, which are difficult to analyze using the existing rules of ship classification societies. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal loads, linear static loads, dynamic loads, and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo, in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in assessing the dynamic stability of the ship. The FE method has yielded better techniques for calculating the natural frequencies and mode shapes of ship structures so as to avoid resonance, both globally and locally. Over the past few years, there has been considerable progress towards ideal design in the ship industry, solving complex engineering problems by employing the data stored in FE models. This paper provides an overview of ship modeling methodology for FE analysis and its general application. The historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
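
As a small illustration of the vibration-analysis step, the sketch below assembles Euler-Bernoulli beam element mass and stiffness matrices for a cantilever (a crude stand-in for a hull girder) and solves the generalized eigenvalue problem for the natural frequencies. The section properties are assumed.

```python
# Natural frequencies of a cantilever Euler-Bernoulli beam from assembled FE
# mass and stiffness matrices. Properties are assumed, for illustration only.
import numpy as np
from scipy.linalg import eigh

E, I, rho, A, L, n = 210e9, 1e-4, 7850.0, 1e-2, 10.0, 20   # steel beam, 20 elements
le = L / n
Ke = E * I / le**3 * np.array([[12, 6*le, -12, 6*le],
                               [6*le, 4*le**2, -6*le, 2*le**2],
                               [-12, -6*le, 12, -6*le],
                               [6*le, 2*le**2, -6*le, 4*le**2]])
Me = rho * A * le / 420 * np.array([[156, 22*le, 54, -13*le],
                                    [22*le, 4*le**2, 13*le, -3*le**2],
                                    [54, 13*le, 156, -22*le],
                                    [-13*le, -3*le**2, -22*le, 4*le**2]])

ndof = 2 * (n + 1)                       # 2 DOFs per node: deflection, rotation
K, M = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
for e in range(n):                       # assemble element matrices
    idx = slice(2 * e, 2 * e + 4)
    K[idx, idx] += Ke
    M[idx, idx] += Me

K, M = K[2:, 2:], M[2:, 2:]              # clamp the first node (cantilever)
vals = eigh(K, M, eigvals_only=True)     # generalized eigenvalue problem
print("first three natural frequencies (Hz):", np.sqrt(vals[:3]) / (2 * np.pi))
```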

Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis

Procedia PDF Downloads 123
6425 Modelling Conceptual Quantities Using Support Vector Machines

Authors: Ka C. Lam, Oluwafunmibi S. Idowu

Abstract:

Uncertainty in cost is a major factor affecting the performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early pre-design cost estimates. Hence, the aim of the current research is the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used to construct conceptual quantity models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. The R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were found to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
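
A sketch of one model variant follows: support vector regression predicting a conceptual concrete quantity from early design variables, with bootstrap resampling to gauge the spread of the prediction. The study used R; this is a Python rendering for consistency with the other examples here, and all data are invented.

```python
# SVR + bootstrap sketch for a conceptual quantity model; variables mirror
# those named in the abstract, but values and coefficients are invented.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 80
X = np.column_stack([rng.uniform(200, 2000, n),    # building footprint, m^2
                     rng.uniform(2, 10, n),        # gross floor loading, kPa
                     rng.uniform(100, 400, n)])    # soil bearing pressure, kPa
y = 0.08 * X[:, 0] + 12 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 10, n)

new_design = np.array([[900.0, 5.0, 250.0]])
preds = []
for _ in range(200):                               # bootstrap resamples
    idx = rng.integers(0, n, n)
    model = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=1.0))
    model.fit(X[idx], y[idx])
    preds.append(model.predict(new_design)[0])
preds = np.array(preds)
print(f"concrete quantity: {preds.mean():.1f} m^3 "
      f"(95% interval {np.percentile(preds, 2.5):.1f}-{np.percentile(preds, 97.5):.1f})")
```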

Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression

Procedia PDF Downloads 199
6424 Study on Clarification of the Core Technology in a Monozukuri Company

Authors: Nishiyama Toshiaki, Tadayuki Kyountani, Nguyen Huu Phuc, Shigeyuki Haruyama, Oke Oktavianty

Abstract:

It is important to clarify a company's core technology in the product development process in order to strengthen its power in providing technology that meets customer requirements. The QFD method is adopted to clarify the core technology by identifying the high-level element technologies that are related to the voice of the customer and offer the most delightful features for the customer. AHP is used to determine the importance of the evaluating factors. A case study using this approach was conducted in a Japanese monozukuri company (meaning a manufacturing company) to clarify its core technology based on customer requirements.
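
A minimal sketch of the AHP step follows: importance weights for three evaluating factors are derived from a pairwise comparison matrix via its principal eigenvector, with a consistency check. The matrix entries are invented for illustration.

```python
# AHP sketch: priority weights from a pairwise comparison matrix.
import numpy as np

# A[i, j] = how much more important factor i is than factor j (Saaty scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                   # priority weights

n = A.shape[0]
CI = (vals[k].real - n) / (n - 1)              # consistency index
CR = CI / 0.58                                 # random index RI = 0.58 for n = 3
print("weights:", w.round(3), "consistency ratio:", round(CR, 3))
```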

Keywords: core technology, QFD, voices of customer, analysis procedure

Procedia PDF Downloads 369
6423 Sustainable Production of Tin Oxide Nanoparticles: Exploring Synthesis Techniques, Formation Mechanisms, and Versatile Applications

Authors: Yemane Tadesse Gebreslassie, Henok Gidey Gebretnsae

Abstract:

Nanotechnology has emerged as a highly promising field of research with wide-ranging applications across various scientific disciplines. In recent years, tin oxide has garnered significant attention due to its intriguing properties, particularly when synthesized in the nanoscale range. While numerous physical and chemical methods exist for producing tin oxide nanoparticles, these approaches tend to be costly, energy-intensive, and involve the use of toxic chemicals. Given the growing concerns regarding human health and environmental impact, there has been a shift towards developing cost-effective and environmentally friendly processes for tin oxide nanoparticle synthesis. Green synthesis methods utilizing biological entities such as plant extracts, bacteria, and natural biomolecules have shown promise in successfully producing tin oxide nanoparticles. However, scaling up the production to an industrial level using green synthesis approaches remains challenging due to the complexity of biological substrates, which hinders the elucidation of reaction mechanisms and formation processes. Thus, this review aims to provide an overview of the various sources of biological entities and methodologies employed in the green synthesis of tin oxide nanoparticles, as well as their impact on nanoparticle properties. Furthermore, this research delves into the strides made in comprehending the mechanisms behind the formation of nanoparticles as documented in existing literature. It also sheds light on the array of analytical techniques employed to investigate and elucidate the characteristics of these minuscule particles.

Keywords: nanotechnology, tin oxide, green synthesis, formation mechanisms

Procedia PDF Downloads 35
6422 Role of ICT and Wage Inequality in Organization

Authors: Shoji Katagiri

Abstract:

This study deals with wage inequality in organizations and shows the relationship between ICT and wages in organizations. To do so, we incorporate ICT factors in organizations into our model. The ICT factors are the efficiencies of Enterprise Resource Planning (ERP), Computer-Assisted Design/Computer-Assisted Manufacturing (CAD/CAM), and NETWORK. Improvement of these ICT factors decreases the learning cost of solving problems pertaining to the hierarchy in an organization. The improvement of NETWORK increases wage inequality among workers and decreases it among managers and entrepreneurs. The improvements of CAD/CAM and ERP increase wage inequality within all types of agents and partially increase it between agents in the hierarchy.

Keywords: endogenous economic growth, ICT, inequality, capital accumulation

Procedia PDF Downloads 247
6421 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation now gives more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate with velocity varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is the most dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion at the bottom side of the pipeline.

Keywords: computational fluid dynamics (CFD), erosion, slurry transportation, k-ε Model

Procedia PDF Downloads 395
6420 Fabrication and Characterisation of Additive Manufactured Ti-6Al-4V Parts by Laser Powder Bed Fusion Technique

Authors: Norica Godja, Andreas Schindel, Luka Payrits, Zsolt Pasztor, Bálint Hegedüs, Petr Homola, Jan Horňas, Jiří Běhal, Roman Ruzek, Martin Holzleitner, Sascha Senck

Abstract:

In order to reduce fuel consumption and CO₂ emissions in the aviation sector, innovative solutions are being sought to reduce the weight of aircraft, including additive manufacturing (AM). Of particular importance are the excellent mechanical properties that are required for aircraft structures. Ti6Al4V alloys, with their high mechanical properties relative to weight, can reduce the weight of aircraft structures compared to structures made of steel and aluminium. Currently, conventional processes such as casting and CNC machining are used to obtain the desired structures, resulting in high raw material removal, which in turn leads to higher costs and environmental impact. Additive manufacturing (AM) offers advantages in terms of weight, lead time, design, and functionality, and enables the realisation of alternative geometric shapes with high mechanical properties. However, there are currently technological shortcomings that have prevented AM from being approved for structural components with high safety requirements. An assessment of damage tolerance for AM parts is required, and quality control needs to be improved. Pores and other defects cannot currently be completely avoided, but they should be kept to a minimum during manufacture. The mechanical properties of the manufactured parts can be further improved by various treatments. The influence of different treatment methods (heat treatment, CNC milling, electropolishing, chemical polishing) and operating parameters was investigated by scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM/EDX), X-ray diffraction (XRD), electron backscatter diffraction (EBSD), and measurements with a focused ion beam (FIB), taking into account surface roughness, possible anomalies in the chemical composition of the surface, and possible cracks. The results of the characterisation of the as-built and treated samples are discussed and presented in this paper. These results were generated within the framework of the 3TANIUM project, which is financed by the EU under contract number 101007830.

Keywords: Ti6Al4V alloys, laser powder bed fusion, damage tolerance, heat treatment, electropolishing, potential cracking

Procedia PDF Downloads 68
6419 Design and Performance Improvement of Three-Dimensional Optical Code Division Multiple Access Networks with NAND Detection Technique

Authors: Satyasen Panda, Urmila Bhanja

Abstract:

In this paper, we present and analyze three-dimensional (3-D) wavelength/time/space code matrices for optical code division multiple access (OCDMA) networks with the NAND subtraction detection technique. The 3-D codes are constructed by integrating a two-dimensional modified quadratic congruence (MQC) code with a one-dimensional modified prime (MP) code. The respective encoders and decoders were designed using fiber Bragg gratings and optical delay lines to minimize the bit error rate (BER). The performance analysis of the 3-D OCDMA system is based on measurements of the signal-to-noise ratio (SNR), BER, and eye diagram for different numbers of simultaneous users. Various types of noise and multiple access interference (MAI) effects were also considered in the analysis. The results obtained with the NAND detection technique were compared with those obtained with the OR and AND subtraction techniques. The comparison proved that the NAND detection technique with the 3-D MQC/MP code can accommodate a larger number of simultaneous users over longer fiber distances with minimum BER, as compared to the OR and AND subtraction techniques. The received optical power was also measured at various BER levels to analyze the effect of attenuation.

Keywords: Cross Correlation (CC), Three dimensional Optical Code Division Multiple Access (3-D OCDMA), Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA), Multiple Access Interference (MAI), Phase Induced Intensity Noise (PIIN), Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code

Procedia PDF Downloads 401
6418 Improvement of Sleep Quality Through Manual and Non-Pharmacological Treatment

Authors: Andreas Aceranti, Sergio Romanò, Simonetta Vernocchi, Silvia Arnaboldi, Emilio Mazza

Abstract:

As a result of the SARS-CoV-2 pandemic, the incidence of thymism (mood) disorders has significantly increased, and patients are often reluctant to take drugs aimed at stabilizing mood. In order to provide an alternative approach to drug therapies, we designed a study to evaluate the possibility of improving the quality of life of these subjects through osteopathic treatment. Patients were divided into visceral and fascial manual treatment groups, with the aim of increasing serotonin levels and stimulating the vagus nerve through validated techniques. The results were evaluated through the administration of targeted questionnaires assessing quality of life, mood, sleep, and intestinal functioning. At a first endpoint we found, in patients undergoing fascial treatment, an increase in quality of life and sleep: they report a decrease in the number of nocturnal awakenings, a reduction in the time taken to fall asleep, and greater rest upon waking. In contrast, patients undergoing visceral treatment, as well as those included in the control group, did not show significant improvements. Patients in the fascial group reported an improvement in thymism and subjective quality of life, with a generalized improvement in function. Although the study is still ongoing, based on the results of the first endpoint we can hypothesize that fascial stimulation of the vagus nerve with manual and osteopathic techniques may be a valid alternative to pharmacological treatments for mood and sleep disorders.

Keywords: osteopathy, insomnia, nocturnal awakening, thymism

Procedia PDF Downloads 70
6417 Satellite Data to Understand Changes in Carbon Dioxide for Surface Mining and Green Zone

Authors: Carla Palencia-Aguilar

Abstract:

In order to attain the 2050 zero-emissions goal, it is necessary to know how carbon dioxide changes over time, from emissions to attenuation, in the mining industry versus green zones, in order to establish realistic goals and redirect efforts to reduce greenhouse effects. Two methods were used to compute the amount of CO2 (in tons) in specific mining zones in Colombia. The former used NPP from MODIS MOD17A3HGF from 2000 to 2021. The latter used MODIS MYD021KM bands 33 to 36, with a maximum of 644 data points distributed over 7 sites corresponding to surface mineral mining of coal, nickel, iron, and limestone. The selected green zones were located in the proximity of the studied sites, but further than 1 km away to avoid information overlap. The year 2012 was selected for method 2 to compare the results with data provided by the Colombian government and to determine the range of values. Some data were compared with 2022 MODIS energy values and converted to ktons of CO2 using the EPA Greenhouse Gas Equivalencies Calculator. The results showed that nickel mining was the least polluting, with 81 kton of CO2 eq. on average and a maximum of 102 kton of CO2 eq. per year, with green zones attenuating carbon dioxide by 103 kton of CO2 on average and 125 kton maximum per year over the last 22 years. After nickel came coal, with an average of 152 kton of CO2 per year and a maximum of 188 kton, values very similar to those of the adjacent green zones, with average and maximum values of 157 and 190 kton of CO2, respectively. Iron had results similar to 3 limestone sites, with average values of 287 kton of CO2 for mining and 310 kton for green zones, and maximum values of 310 kton for iron mining and 356 kton for green zones. One of the limestone sites exceeded the others, with an average value of 441 kton per year and a maximum of 490 kton per year, even though it had higher attenuation by green zones than a nearby limestone site (3.5 km apart): 371 kton versus 281 kton on average, and a maximum of 416 kton versus 323 kton. Such vegetation contribution is not enough, meaning that the manufacturing process should be improved at this most polluting site. Comparing bands 33 to 36 for 2012 and 2022 from January to August shows that, on average, the ktons of CO2 were similar for mining sites and green zones, indicating an average yearly balance between carbon dioxide emissions and attenuation. However, efforts to improve manufacturing processes are needed to overcome the carbon dioxide effects, especially during emission peaks, because the surrounding vegetation cannot fully attenuate them.
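
As a small worked example of the unit conversion behind the first method, annual NPP (kg C/m²/yr, as in MOD17A3HGF) over an area converts to ktons of CO2 via the 44/12 molecular-weight ratio; the NPP value and area below are assumptions for illustration.

```python
# NPP-to-CO2 conversion sketch: annual NPP over an area becomes kton of CO2
# fixed per year. The NPP value and zone area are assumed.
npp_kg_c_per_m2 = 0.9          # annual NPP for a green zone, kg C/m^2/yr (assumed)
area_m2 = 25.0e6               # 25 km^2 zone (assumed)
C_TO_CO2 = 44.0 / 12.0         # molecular weights: CO2 = 44 g/mol, C = 12 g/mol

tonnes_c = npp_kg_c_per_m2 * area_m2 / 1000.0
kton_co2 = tonnes_c * C_TO_CO2 / 1000.0
print(f"{kton_co2:.1f} kton CO2 fixed per year")   # ~82.5 kton, consistent in scale
```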

Keywords: carbon dioxide, MODIS, surface mining, vegetation

Procedia PDF Downloads 87