Search results for: axial flux induction machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4631

701 Numerical Simulation of the Production of Ceramic Pigments Using Microwave Radiation: An Energy Efficiency Study Towards the Decarbonization of the Pigment Sector

Authors: Pedro A. V. Ramos, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Global warming mitigation is one of the main challenges of this century: the net balance of greenhouse gas (GHG) emissions must be null or negative by 2050. Industry electrification is one of the main paths to achieving carbon neutrality within the goals of the Paris Agreement. Microwave heating is becoming a popular industrial heating mechanism not only because it produces no direct GHG emissions but also because of its rapid, volumetric, and efficient heating. In the present study, a mathematical model is used to simulate the production of two ceramic pigments by microwave heating at high temperatures (above 1200 degrees Celsius). The two pigments studied were the yellow (Pr, Zr)SiO₂ and the brown (Ti, Sb, Cr)O₂. The chemical conversion of reactants into products was included in the model by using the kinetic triplet obtained with the model-fitting method and experimental data available in the literature. The coupling between the electromagnetic, thermal, and chemical interfaces was also included. The simulations were computed in COMSOL Multiphysics. The geometry includes a moving plunger that allows matching the cavity impedance and thus maximizing the electromagnetic efficiency. To accomplish this goal, a MATLAB controller was developed to automatically search for the position of the moving plunger that guarantees maximum efficiency. The power is automatically and permanently adjusted during the transient simulation to impose a stationary regime and total conversion, the two requisites of every converged solution. Both 2D and 3D geometries were used, and a parametric study regarding the axial bed velocity and the heat transfer coefficient at the boundaries was performed. Moreover, a Verification and Validation study was carried out by comparing the conversion profiles obtained numerically with the experimental data available in the literature; the numerical uncertainty was also estimated to attest to the results' reliability.
The results show that the model-fitting method employed in this work is a suitable tool to predict the chemical conversion of reactants into the pigment, showing excellent agreement between the numerical results and the experimental data. Moreover, it was demonstrated that higher velocities lead to higher thermal efficiencies and thus lower energy consumption during the process. This work concludes that the electromagnetic heating of materials having a high loss tangent and low thermal conductivity, like ceramic materials, may be a challenge due to the presence of hot spots, which may jeopardize the product quality or even the experimental apparatus. The MATLAB controller increased the electromagnetic efficiency by 25%, and a global efficiency of 54% was obtained for the titanate brown pigment. This work shows that electromagnetic heating will be a key technology in the decarbonization of the ceramic sector, as reductions of up to 98% in the specific GHG emissions were obtained when compared to the conventional process. Furthermore, numerical simulations appear to be a suitable technique for the design and optimization of microwave applicators, showing high agreement with experimental data.
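The controller's search for the plunger position that maximizes electromagnetic efficiency is, at heart, a one-dimensional optimization. A minimal sketch of such a search in Python (the efficiency curve below is an invented surrogate with a peak at a hypothetical 42 mm; the paper's controller is written in MATLAB and evaluates the real COMSOL model):

```python
def golden_section_max(f, lo, hi, tol=1e-6):
    """Locate the argmax of a unimodal function f on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2  # inverse golden ratio
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            a = c  # the maximum lies in [c, b]
        else:
            b = d  # the maximum lies in [a, d]
    return (a + b) / 2

def efficiency(x_mm):
    # Hypothetical surrogate: efficiency peaks when the plunger sits
    # 42 mm from the cavity wall (illustrative only).
    return 1.0 - ((x_mm - 42.0) / 60.0) ** 2

best = golden_section_max(efficiency, 0.0, 120.0)
```

In the actual setup, each function evaluation would be one electromagnetic solve, so an evaluation-frugal bracketing search of this kind is a natural choice.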

Keywords: automatic impedance matching, ceramic pigments, efficiency maximization, high-temperature microwave heating, input power control, numerical simulation

Procedia PDF Downloads 139
700 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method

Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat

Abstract:

Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, texture, etc. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical method of bridging this semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. First, images are segmented using a maximum-variance criterion and the Firefly algorithm, a swarm-based approach with high convergence speed and low computational cost, to search for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning, offering high precision with low complexity. Experiments were performed on the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
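The maximum-variance thresholding criterion used in the segmentation step is the one optimized by Otsu's classic method; a minimal single-threshold sketch in Python, with exhaustive search standing in for the Firefly optimizer (the bimodal histogram is illustrative):

```python
def otsu_threshold(hist):
    """Pick the threshold t maximizing between-class variance,
    which is equivalent to minimizing intra-class variance."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = 0.0   # weight (pixel count) of the lower class
    sum0 = 0.0  # intensity sum of the lower class
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0
        w1 = total - w0
        m1 = (total * total_mean - sum0) / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Toy bimodal histogram: clusters around bins 2 and 12.
hist = [0, 5, 20, 5] + [0] * 7 + [5, 20, 5, 0, 0]
t = otsu_threshold(hist)  # lands between the two clusters
```

For multiple thresholds the search space grows combinatorially, which is why the paper turns to a swarm optimizer instead of exhaustive enumeration.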

Keywords: feature extraction, feature selection, image annotation, classification

Procedia PDF Downloads 586
699 Investigating the Determinants and Growth of Financial Technology Depth of Penetration among the Heterogeneous Africa Economies

Authors: Tochukwu Timothy Okoli, Devi Datt Tewari

Abstract:

The high rate of Fintech adoption has not translated into greater financial inclusion and development in Africa. This problem is attributed to poor Fintech diversification and usefulness on the continent, a concept referred to in this study as the Fintech depth of penetration. The study therefore assessed its determinants and growth process in a panel of three emerging, twenty-four frontier, and five fragile African economies, disaggregated with dummies over the period 2004-2018 to allow for heterogeneity between groups. The System Generalized Method of Moments (GMM) technique reveals that the average depth of mobile banking and automated teller machine (ATM) usage is a dynamic heterogeneous process. Moreover, users' previous experience/compatibility, trialability/income, and financial development were the major factors that raise its usefulness, whereas perceived risk, financial openness, and the inflation rate significantly limit it. The growth rates of mobile banking, ATM, and internet banking in 2018 were, on average, 41.82, 0.4, and 20.8 per cent higher, respectively, than their 2004 averages. These higher averages after the 2009 financial crisis suggest that countries resort to Fintech as a risk-mitigating tool. This study therefore recommends greater Fintech diversification through improved literacy, institutional development, financial liberalization, and continuous innovation.

Keywords: depth of fintech, emerging Africa, financial technology, internet banking, mobile banking

Procedia PDF Downloads 131
698 Stress Concentration Trend for Combined Loading Conditions

Authors: Aderet M. Pantierer, Shmuel Pantierer, Raphael Cordina, Yougashwar Budhoo

Abstract:

Stress concentration occurs when there is an abrupt change in the geometry of a mechanical part under loading. These changes in geometry can include holes, notches, or cracks within the component, and they create larger stresses within the part. This maximum stress is difficult to measure, as it occurs directly at the point of minimum area, and strain gauges capable of resolving stresses over such minute areas have yet to be developed. Therefore, a stress concentration factor must be utilized. The stress concentration factor is a dimensionless parameter calculated solely from the geometry of a part. The factor is multiplied by the nominal, or average, stress of the component, which can be found analytically or experimentally. Stress concentration graphs exist for common loading conditions and geometrical configurations to aid in the determination of the maximum stress a part can withstand; these graphs were developed from historical experimental data. This project seeks to verify a stress concentration graph for combined loading conditions. The aforementioned graph was developed using CATIA finite element analysis software, and the results of this analysis will be validated through further testing. The 3D-modeled parts will be subjected to further finite element analysis using Patran-Nastran software, and the finite element models will then be verified by testing physical specimens in a tensile testing machine. Once the data is validated, the unique stress concentration graph will be submitted for publication so it can aid engineers in future projects.
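The relationship described above, maximum stress equals the stress concentration factor times the nominal stress, can be illustrated with a short sketch (the numbers are hypothetical; Kt of about 3.0 is the textbook value for a small circular hole in a wide plate under uniaxial tension):

```python
def max_stress(kt, force_n, net_area_mm2):
    """sigma_max = Kt * sigma_nominal, with the nominal (average)
    stress taken on the net cross-section at the discontinuity."""
    sigma_nom = force_n / net_area_mm2  # N / mm^2 = MPa
    return kt * sigma_nom

# Illustrative numbers: 10 kN of tension over a 200 mm^2 net section,
# with the theoretical Kt = 3.0 for a small hole in a wide plate.
sigma = max_stress(3.0, 10_000.0, 200.0)  # 150 MPa peak stress
```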

Keywords: stress concentration, finite element analysis, finite element models, combined loading

Procedia PDF Downloads 444
697 Low- and High-Temperature Methods of CNTs Synthesis for Medicine

Authors: Grzegorz Raniszewski, Zbigniew Kolacinski, Lukasz Szymanski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza

Abstract:

One of the most promising areas for carbon nanotube (CNT) application is medicine, and one of the most devastating diseases is cancer. Carbon nanotubes may be used as carriers of a slowly released drug, and it is possible to use electromagnetic waves to destroy cancer cells via CNTs. In our research, we focused on thermal ablation by ferromagnetic carbon nanotubes (Fe-CNTs). In cancer cell hyperthermia, functionalized carbon nanotubes are exposed to a radio frequency electromagnetic field. Properly functionalized Fe-CNTs attach to the cancer cells. Heat generated in the nanoparticles connected to the nanotubes warms the nanotubes and then the target tissue. When the temperature in the tumor tissue exceeds 316 K, necrosis of the cancer cells may be observed. Several techniques can be used for Fe-CNT synthesis. In our work, we use high-temperature methods based on arc discharge. The low-temperature systems are microwave plasma-assisted chemical vapor deposition (MPCVD) and hybrid physical-chemical vapor deposition (HPCVD). In the arc discharge system, the plasma reactor works with a He pressure of up to 0.5 atm. The electric arc burns between two graphite rods; carbon vapors move from the anode through a short arc column and form CNTs, which can be collected either from the reactor walls or from the cathode deposit. This method is suitable for the production of multi-wall and single-wall CNTs. The disadvantages of high-temperature methods are low purity, short lengths, random sizes, and multi-directional distribution. In the MPCVD system, plasma is generated in a waveguide connected to the microwave generator. The plasma flux, containing carbon and ferromagnetic elements, then flows into a quartz tube. Additional resistance heating can be applied to increase the reaction effectiveness and efficiency. CNT nucleation occurs on the quartz tube walls; substrates can also be used to improve carbon nanotube growth.
The HPCVD system involves both the chemical decomposition of carbon-containing gases and the vaporization of a solid or liquid catalyst source. In this system, a tube furnace is applied: a mixture of working and carbon-containing gases flows through a quartz tube placed inside the furnace, and ferrocene vapors can be used as a catalyst. Fe-CNTs may then be collected either from the quartz tube walls or from substrates. Low-temperature methods are characterized by a higher-purity product. Moreover, the carbon nanotubes from the tested CVD systems were partially filled with iron. Regardless of the method of Fe-CNT synthesis, the final product always needs to be purified for applications in medicine. The simplest method of purification is oxidation of the amorphous carbon. Carbon nanotubes dedicated to cancer cell thermal ablation additionally need to be treated with acids to amplify defects on the CNT surface, which facilitates biofunctionalization. The application of ferromagnetic nanotubes for cancer treatment is a promising method of fighting cancer in the next decade. Acknowledgment: The research work has been financed from the budget of science as research project No. PBS2/A5/31/2013.

Keywords: arc discharge, cancer, carbon nanotubes, CVD, thermal ablation

Procedia PDF Downloads 450
696 Gut Microbial Dynamics in a Mouse Model of Inflammation-Linked Carcinogenesis as a Result of Diet Supplementation with Specific Mushroom Extracts

Authors: Alvarez M., Chapela M. J., Balboa E., Rubianes D., Sinde E., Fernandez de Ana C., Rodríguez-Blanco A.

Abstract:

The gut microbiota plays an important role in colorectal cancer development, as gut inflammation can contribute to it; however, this role is still not fully understood, and tools able to prevent this progression are yet to be developed. The main objective of this study was to monitor the effects of a mushroom extract formulation on the gut microbial community composition of an azoxymethane (AOM)/dextran sodium sulfate (DSS) mouse model of inflammation-linked carcinogenesis. For the in vivo study, 41 adult male mice of the C57BL/6 strain were obtained. In 36 of them, a state of colon carcinogenesis was induced by a single intraperitoneal administration of AOM at a dose of 12.5 mg/kg; the control group animals received the same volume of 0.9% saline instead. DSS is an extremely toxic sulfated polysaccharide that causes chronic inflammation of the colon mucosa, favoring the appearance of severe colitis and the production of AOM-induced tumors. AOM/DSS induction is an interesting platform for chemopreventive intervention studies. Here, the model was used to monitor gut microbiota changes as a result of supplementation with a specific mushroom extract formulation previously shown to have prebiotic activity. The animals were divided into three groups: (i) cancer + mushroom extract formulation experimental group, which received the MicoDigest2.0 mushroom extract formulation developed by Hifas da Terra S.L., dissolved in drinking water at an estimated concentration of 100 mg/ml; (ii) cancer control group, which received normal water without any treatment; and (iii) healthy control group, animals in which cancer was not induced and which received no treatment in drinking water.
This treatment was maintained for a period of 3 months, after which the animals were sacrificed to obtain tissues that were subsequently analyzed to verify the effects of the mushroom extract formulation. A microbiological analysis was carried out to compare the microbial communities present in the intestines of the mice belonging to each of the study groups. For this, massive sequencing by molecular analysis of the 16S gene was used (Ion Torrent technology). Initially, DNA was extracted and metagenomic libraries were prepared using the 16S Metagenomics kit, always following the manufacturer's instructions. This kit amplifies 7 of the 9 hypervariable regions of the 16S gene, which are then sequenced. Finally, the data obtained were compared with a database that makes it possible to determine the degree of similarity of the sequences obtained to a wide range of bacterial genomes. The results showed that, similarly to certain natural compounds that prevent colorectal tumorigenesis, the mushroom formulation enriched the Firmicutes and Proteobacteria phyla and depleted Bacteroidetes. Therefore, it was demonstrated that consumption of the mushroom extract formulation developed could promote the recovery of the microbial balance that is disrupted in the mouse model of carcinogenesis. More preclinical and clinical studies are needed to validate this promising approach.

Keywords: carcinogenesis, microbiota, mushroom extracts, inflammation

Procedia PDF Downloads 150
695 Prediction of Distillation Curve and Reid Vapor Pressure of Dual-Alcohol Gasoline Blends Using Artificial Neural Network for the Determination of Fuel Performance

Authors: Leonard D. Agana, Wendell Ace Dela Cruz, Arjan C. Lingaya, Bonifacio T. Doma Jr.

Abstract:

The purpose of this paper is to predict fuel performance parameters, which include the drivability index (DI), vapor lock index (VLI), and vapor lock potential, using the distillation curve and Reid vapor pressure (RVP) of dual alcohol-gasoline fuel blends. The distillation curve and Reid vapor pressure were predicted using artificial neural networks (ANNs) with macroscopic properties such as boiling points, RVP, and molecular weights as the input layer. The ANN consists of 5 hidden layers and was trained using Bayesian regularization. The training mean square error (MSE) and R-value for the RVP ANN are 91.4113 and 0.9151, respectively, while the training MSE and R-value for the distillation curve are 33.4867 and 0.9927. Fuel performance analysis of the dual alcohol-gasoline blends indicated that highly volatile gasoline blended with dual alcohols results in fuel blends that do not comply with the D4814 standard. Mixtures of low-volatility gasoline and 10% methanol or 10% ethanol can still be blended with up to 10% C3 and C4 alcohols. Gasoline of intermediate volatility containing 10% methanol or 10% ethanol can still be blended with C3 and C4 alcohols that have low RVPs, such as 1-propanol, 1-butanol, 2-butanol, and i-butanol. Biography: Graduate School of Chemical, Biological, and Materials Engineering and Sciences, Mapua University, Muralla St., Intramuros, Manila, 1002, Philippines
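As an illustration of the drivability index mentioned above, one commonly quoted ASTM D4814 form combines the 10%, 50%, and 90% distillation temperatures with an ethanol correction; the coefficients and the correction term here are quoted from memory and should be checked against the current standard, and all input values are hypothetical:

```python
def drivability_index(t10_f, t50_f, t90_f, ethanol_vol_pct=0.0):
    """One commonly quoted form of the ASTM D4814 drivability index,
    with distillation temperatures in degrees Fahrenheit. The 1.33
    ethanol correction coefficient is an assumption of this sketch."""
    return 1.5 * t10_f + 3.0 * t50_f + 1.0 * t90_f + 1.33 * ethanol_vol_pct

# Hypothetical distillation points for a gasoline blend.
di = drivability_index(130.0, 220.0, 340.0)  # lower DI = better cold drivability
```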

Keywords: dual alcohol-gasoline blends, distillation curve, machine learning, reid vapor pressure

Procedia PDF Downloads 103
694 Procedural Protocol for Dual Energy Computed Tomography (DECT) Inversion

Authors: Rezvan Ravanfar Haghighi, S. Chatterjee, Pratik Kumar, V. C. Vani, Priya Jagia, Sanjiv Sharma, Susama Rani Mandal, R. Lakshmy

Abstract:

Dual energy computed tomography (DECT) notes the HU(V) values for a sample at two different voltages, V = V1, V2, and thus obtains the electron density (ρe) and effective atomic number (Zeff) of the substance. In the present paper, we aim to obtain a numerical algorithm by which (ρe, Zeff) can be obtained from the HU(100) and HU(140) data, where V = 100, 140 kVp. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. To develop the inversion algorithm for low-Zeff materials, as is the case with non-calcified coronary artery plaque, we prepared aqueous samples whose calculated values of (ρe, Zeff) lie in the ranges 2.65×10²³ ≤ ρe ≤ 3.64×10²³ per cc and 6.80 ≤ Zeff ≤ 8.90. We filled the phantom with these known samples and experimentally determined HU(100) and HU(140) for the same pixels. Knowing that the HU(V) values are related to the attenuation coefficient of the system, we present an algorithm by which (ρe, Zeff) is calibrated with respect to (HU(100), HU(140)). The calibration is done with a known set of 20 samples; its accuracy is checked with a different set of 23 known samples. We find that the calibration gives ρe with an accuracy of ±4%, while Zeff is found within ±1% of the actual value, with 95% confidence. In this inversion method, the (ρe, Zeff) of the scanned sample can be found by eliminating the effects of the CT machine and by ensuring that the determinations of the two unknowns (ρe, Zeff) do not interfere with each other. This algorithm can thus be used to predict the chemical characteristics (ρe, Zeff) of unknown scanned materials at the 95% confidence level by inversion of the DECT data.
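The calibration of (ρe, Zeff) against (HU(100), HU(140)) can be sketched as a least-squares surface fit over the known samples. The snippet below fits a linear model to synthetic, exactly linear data as a stand-in; the paper's actual calibration functions are not specified, so the linear form is purely an assumption for illustration:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            for col in range(i, 4):
                m[r][col] -= f * m[i][col]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (m[i][3] - sum(m[i][c] * x[c] for c in range(i + 1, 3))) / m[i][i]
    return x

def fit_plane(h100, h140, target):
    """Least-squares fit of target ~ c0 + c1*HU(100) + c2*HU(140)
    via the normal equations (X^T X) c = X^T y."""
    rows = [[1.0, a, b] for a, b in zip(h100, h140)]
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(rows, target)) for i in range(3)]
    return solve3(XtX, Xty)
```

Fitting one such surface per unknown is one way to keep the two determinations from interfering with each other, as the abstract requires.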

Keywords: chemical composition, dual-energy computed tomography, inversion algorithm

Procedia PDF Downloads 438
693 Study of Nucleation and Growth Processes of Ettringite in Supersaturated Diluted Solutions

Authors: E. Poupelloz, S. Gauffinet

Abstract:

Ettringite, Ca₆Al₂(SO₄)₃(OH)₁₂·26H₂O, is one of the major hydrates formed during cement hydration. In Portland cement, ettringite forms from the reaction between tricalcium aluminate Ca₃Al₂O₆ and calcium sulfate. Ettringite is also present in calcium sulfoaluminate cement, in which it is the major hydrate, formed by the reaction between yeelimite Ca₄(AlO₂)₆SO₄ and calcium sulfate. Numerous results on the formation of ettringite are available in the literature, even if some issues are still under discussion. However, almost all published work on ettringite was done on cementitious systems, and in cement, hydration reactions are very complex, result from dissolution-precipitation processes, and are subject to various interactions. Understanding the formation process of one phase alone, here ettringite, is the first step toward understanding the much more complex reactions happening in cement, and this study is crucial for the comprehension of early cement hydration and physical behavior. Indeed, the formation of hydrates, in particular ettringite, influences the rheological properties of the cement paste and the need for admixtures. To make progress toward understanding the existing phenomena, a specific study of the nucleation and growth processes of ettringite was conducted. First, ettringite nucleation was studied in ionic aqueous solutions under controlled but varied experimental conditions: different supersaturation degrees (β), different pH values, and the presence of exogenous ions. Through induction time measurements, the interfacial energies (γ) between ettringite crystals and the solution were determined. The growth of ettringite in supersaturated solutions was also studied through chain crystallization reactions. Specific BET surface area measurements and scanning electron microscopy observations suggested that the growth process is favored over the nucleation process when ettringite crystals are initially present in a solution with a low supersaturation degree.
The influence of stirring on ettringite formation was also investigated. It was observed that the intensity and nature of stirring have a strong influence on the size of the ettringite needles formed: needle lengths vary from less than 10 µm, depending on the stirring, to almost 100 µm without any stirring. In all the previously mentioned experiments, the initially present ions are consumed to form ettringite, so that the supersaturation degree with regard to ettringite decreases over time. To avoid this phenomenon, a device was used that compensates for the drop in ion concentrations by adding more solution, thereby keeping the ionic concentrations constant. This constant β recreates the conditions at the beginning of cement paste hydration, when the dissolution of solid reagents compensates for the consumption of ions to form hydrates. This device allowed the determination of the ettringite precipitation rate as a function of the supersaturation degree β. Taking samples at different times during ettringite precipitation and performing BET measurements allowed the determination of the interfacial growth rate of ettringite in m²/s. This work will lead to a better understanding and control of ettringite formation alone and thus during cement hydration. This study will also ultimately define the impact of the ettringite formation process on the rheology of cement pastes at an early age, which is a crucial parameter from a practical point of view.
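Induction-time measurements are commonly converted into interfacial energies via classical nucleation theory, where ln(t_ind) is linear in 1/(ln β)². The sketch below fits that slope and inverts it for γ; it assumes homogeneous nucleation, t_ind proportional to 1/J, and no kinetic corrections, and it is a generic textbook treatment rather than the paper's exact procedure:

```python
import math

def interfacial_energy(betas, induction_times_s, temp_k, molec_vol_m3):
    """Classical nucleation theory sketch: ln(t_ind) is linear in
    1/(ln beta)^2 with slope 16*pi*gamma^3*v^2 / (3*k^3*T^3).
    Fit the slope by ordinary least squares, then invert for gamma."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    xs = [1.0 / math.log(b) ** 2 for b in betas]
    ys = [math.log(t) for t in induction_times_s]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    gamma_cubed = (3.0 * slope * k ** 3 * temp_k ** 3
                   / (16.0 * math.pi * molec_vol_m3 ** 2))
    return gamma_cubed ** (1.0 / 3.0)  # J/m^2
```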

Keywords: cement hydration, ettringite, morphology of crystals, nucleation-growth process

Procedia PDF Downloads 129
692 Hand Gesture Interface for PC Control and SMS Notification Using MEMS Sensors

Authors: Keerthana E., Lohithya S., Harshavardhini K. S., Saranya G., Suganthi S.

Abstract:

In an epoch of expanding human-machine interaction, the development of innovative interfaces that bridge the gap between physical gestures and digital control has gained significant momentum. This study introduces a distinct solution that leverages a combination of MEMS (Micro-Electro-Mechanical Systems) sensors, an Arduino Mega microcontroller, and a PC to create a hand gesture interface for PC control and SMS notification. The core of the system is an ADXL335 MEMS accelerometer sensor integrated with an Arduino Mega, which communicates with a PC via a USB cable. The ADXL335 provides real-time acceleration data, which is processed by the Arduino to detect specific hand gestures. These gestures, such as left, right, up, down, or custom patterns, are interpreted by the Arduino, and corresponding actions are triggered. In the context of SMS notifications, when a gesture indicative of a new SMS is recognized, the Arduino relays this information to the PC through the serial connection. The PC application, designed to monitor the Arduino's serial port, displays these SMS notifications in the serial monitor. This study offers an engaging and interactive means of interfacing with a PC by translating hand gestures into meaningful actions, opening up opportunities for intuitive computer control. Furthermore, the integration of SMS notifications adds a practical dimension to the system, notifying users of incoming messages as they interact with their computers. The use of MEMS sensors, Arduino, and serial communication serves as a promising foundation for expanding the capabilities of gesture-based control systems.
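The gesture-interpretation step can be sketched as a simple thresholding of the ADXL335 tilt readings; the axis orientation, units (g), and threshold below are assumptions for illustration, since the study does not publish its Arduino firmware:

```python
def classify_gesture(ax, ay, threshold=0.5):
    """Map a smoothed 3-axis accelerometer tilt reading (g units)
    to one of four directional gestures, or 'idle' inside the
    dead zone. Axis orientation is assumed, not taken from the paper."""
    if abs(ax) < threshold and abs(ay) < threshold:
        return "idle"
    if abs(ax) >= abs(ay):
        return "right" if ax > 0 else "left"
    return "up" if ay > 0 else "down"

gesture = classify_gesture(0.8, 0.1)  # a strong positive-x tilt
```

On the real device, this decision would run on the Arduino, which would then write the gesture token over the serial link for the PC application to act on.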

Keywords: hand gestures, multiple cables, serial communication, sms notification

Procedia PDF Downloads 71
691 Visual Speech Perception of Arabic Emphatics

Authors: Maha Saliba Foster

Abstract:

Speech perception has been recognized as a bi-sensory process involving the auditory and visual channels. Compared to the auditory modality, the contribution of the visual signal to speech perception is not very well understood. Studying how the visual modality affects speech recognition can have pedagogical implications in second language learning, as well as clinical application in speech therapy. The current investigation explores the potential effect of speech visual cues on the perception of Arabic emphatics (AEs). The corpus consists of 36 minimal pairs each containing two contrasting consonants, an AE versus a non-emphatic (NE). Movies of four Lebanese speakers were edited to allow perceivers to have partial view of facial regions: lips only, lips-cheeks, lips-chin, lips-cheeks-chin, lips-cheeks-chin-neck. In the absence of any auditory information and relying solely on visual speech, perceivers were above chance at correctly identifying AEs or NEs across vowel contexts; moreover, the models were able to predict the probability of perceivers’ accuracy in identifying some of the COIs produced by certain speakers; additionally, results showed an overlap between the measurements selected by the computer and those selected by human perceivers. The lack of significant face effect on the perception of AEs seems to point to the lips, present in all of the videos, as the most important and often sufficient facial feature for emphasis recognition. Future investigations will aim at refining the analyses of visual cues used by perceivers by using Principal Component Analysis and including time evolution of facial feature measurements.

Keywords: Arabic emphatics, machine learning, speech perception, visual speech perception

Procedia PDF Downloads 307
690 A Five-Year Experience of Intensity Modulated Radiotherapy in Nasopharyngeal Carcinomas in Tunisia

Authors: Omar Nouri, Wafa Mnejja, Fatma Dhouib, Syrine Zouari, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Leila Farhat, Nejla Fourati, Jamel Daoud

Abstract:

Purpose and Objective: The intensity-modulated radiation therapy (IMRT) technique, associated with induction chemotherapy (IC) and/or concomitant chemotherapy (CC), is currently the recommended treatment modality for nasopharyngeal carcinomas (NPC). The aim of this study was to evaluate the therapeutic results and the patterns of relapse with this treatment protocol. Material and methods: A retrospective monocentric study of 145 patients with NPC treated between June 2016 and July 2021. All patients received IMRT with a simultaneous integrated boost (SIB) of 33 daily fractions at a dose of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume, and 54 Gy for the low-risk volume. The high-risk volume dose was 66.5 Gy in children. Survival analysis was performed according to the Kaplan-Meier method, and the log-rank test was used to compare factors that may influence survival. Results: The median age was 48 years (11-80), with a sex ratio of 2.9. One hundred and twenty tumors (82.7%) were classified as stage III-IV according to the 2017 UICC TNM classification. Ten patients (6.9%) were metastatic at diagnosis. One hundred and thirty-five patients (93.1%) received IC, 104 of which (77%) were TPF-based (taxanes, cisplatin, and 5-fluorouracil). One hundred and thirty-eight patients (95.2%) received CC, mostly cisplatin (134 cases, 97%). After a median follow-up of 50 months [22-82], 46 patients (31.7%) had a relapse: 12 (8.2%) experienced local and/or regional relapse after a median of 18 months [6-43], 29 (20%) experienced distant relapse after a median of 9 months [2-24], and 5 patients (3.4%) had both. Thirty-five patients (24.1%) died, including 5 (3.4%) from a cause other than their cancer. Three-year overall survival (OS), cancer-specific survival, disease-free survival, metastasis-free survival, and loco-regional relapse-free survival were 78.1%, 81.3%, 67.8%, 74.5%, and 88.1%, respectively. Anatomo-clinical factors predicting OS were age > 50 years (88.7 vs. 70.5%; p=0.004), diabetes history (81.2 vs. 66.7%; p=0.027), UICC N classification (100 vs. 95 vs. 77.5 vs. 68.8% for N0, N1, N2, and N3, respectively; p=0.008), the practice of a lymph node biopsy (84.2 vs. 57%; p=0.05), and UICC TNM stage (93.8 vs. 73.6% for stages I-II vs. III-IV, respectively; p=0.044). Therapeutic factors predicting OS were the number of CC courses (fewer than 4 courses: 65.8 vs. 86%; p=0.03; fewer than 5 courses: 71.5 vs. 89%; p=0.041), weight loss > 10% during treatment (84.1 vs. 60.9%; p=0.021), and a total cumulative cisplatin dose (IC plus CC) < 380 mg/m² (64.4 vs. 87.6%; p=0.003). Radiotherapy delay and total duration did not significantly affect OS. No grade 3-4 late side effects were noted in the 127 evaluable patients (87.6%). The most common toxicity was dry mouth, which was grade 2 in 47 cases (37%) and grade 1 in 55 cases (43.3%). Conclusion: IMRT for nasopharyngeal carcinoma has granted patients a high loco-regional control rate over the last five years. However, distant relapses remain frequent and condition the prognosis. We identified several anatomo-clinical and therapeutic prognostic factors; therefore, high-risk patients require a more aggressive therapeutic approach, such as radiotherapy dose escalation or the addition of adjuvant chemotherapy.
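The Kaplan-Meier product-limit estimator used for the survival analysis can be sketched in a few lines. This is a generic implementation on made-up follow-up data, not the study's statistical software:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up durations (e.g. months); events: 1 = death
    observed, 0 = censored. Returns (t, S(t)) at each event time,
    with ties handled by counting deaths before censorings at t."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = censored = 0
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk  # product-limit step
            curve.append((t, s))
        n_at_risk -= deaths + censored
    return curve

# Made-up cohort: deaths at months 1, 2, 3; censoring at 2 and 4.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
```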

Keywords: therapeutic results, prognostic factors, intensity-modulated radiotherapy, nasopharyngeal carcinoma

Procedia PDF Downloads 65
689 Tomato-Weed Classification by RetinaNet One-Step Neural Network

Authors: Dionisio Andujar, Juan López-Correa, Hugo Moreno, Angela Ri

Abstract:

The increased number of weeds in tomato crops greatly lowers yields, and weed identification using machine learning is important for carrying out site-specific control. The latest advances in computer vision are a powerful tool to face this problem. The analysis of RGB (red, green, blue) images through artificial neural networks has developed rapidly in the past few years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial corn fields, and the classification system was tested there. The procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection. The results showed agreements higher than 95%. The system will provide the input for an online spraying system. Thus, this work plays an important role in site-specific weed management by reducing herbicide use in a single step.
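The mAP metric used for validation is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal IoU sketch, assuming boxes in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes
    given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1/7: two 2x2 boxes sharing a 1x1 corner
```

A detection typically counts as a true positive when its IoU with a ground-truth box of the same species exceeds a chosen threshold (0.5 is a common default), and mAP averages the resulting precision-recall curves over classes.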

Keywords: deep learning, object detection, cnn, tomato, weeds

Procedia PDF Downloads 106
688 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations

Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay

Abstract:

Monitoring of tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although there are numerous statistical models and artificial intelligence techniques available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, the task of monitoring wear on the tool cutting edges is carried out by an operator who performs a manual inspection, causing undesirable stoppages of machine tools and consequently incurring costs from lost productivity. The present study is concerned with the development of a flute tracking system to segment signals related to each physical flute of a three-flute cutter used in an end milling operation. The purpose of the system is to monitor the cutting condition for individual flutes separately in order to determine their progressive wear rates and to predict imminent tool failure. The results of this study clearly show that signals associated with each flute can be effectively segmented using the proposed flute tracking system. Furthermore, the results illustrate that by segmenting the sensor signal by flute it is possible to investigate the wear in each physical cutting edge of the cutting tool. These findings are significant in that they facilitate online condition monitoring of a cutting tool for each specific flute without the need for operators or engineers to perform manual inspections of the tool.
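One plausible way to segment a sensor signal by flute, assuming a fixed number of samples per spindle revolution and evenly spaced flutes (both are our assumptions; the paper's actual tracking method is not specified in the abstract):

```python
import numpy as np

def segment_by_flute(signal, samples_per_rev, n_flutes=3):
    """Split a sensor signal into per-flute segments by spindle angle.

    Each revolution is divided into n_flutes equal angular sectors, and
    every sample is assigned to the flute cutting in its sector."""
    samples_per_flute = samples_per_rev / n_flutes
    idx = np.arange(len(signal))
    flute_id = (idx // samples_per_flute).astype(int) % n_flutes
    return [signal[flute_id == f] for f in range(n_flutes)]
```

Wear features (e.g. RMS per segment) can then be tracked independently for each flute.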

Keywords: machining, milling operation, tool condition monitoring, tool wear prediction

Procedia PDF Downloads 303
687 A Deep Learning Approach to Online Social Network Account Compromisation

Authors: Edward K. Boahen, Brunel E. Bouya-Moko, Changda Wang

Abstract:

The major threat to online social network (OSN) users is account compromisation. Spammers now spread malicious messages by exploiting the trust relationship established between account owners and their friends. The challenge for service providers in detecting a compromised account is validating the trusted relationship established between the account owners, their friends, and the spammers. Another challenge is the increase in human interaction required for feature selection. Available research on supervised (machine) learning has limitations with feature selection and with accounts that cannot be profiled, such as application programming interfaces (APIs). Therefore, this paper discusses the various behaviours of OSN users and the current approaches to detecting a compromised OSN account, emphasizing their limitations and challenges. We propose a deep learning approach that addresses and resolves the constraints faced by previous schemes. We detail our proposed optimized nonsymmetric deep auto-encoder (OPT_NDAE) for unsupervised feature learning, which reduces the level of human interaction required in the selection and extraction of features. We evaluated our proposed classifier using the NSL-KDD and KDDCUP'99 datasets in a graphical-user-interface-enabled Weka application. The results obtained indicate that our proposed approach outperformed most of the traditional schemes in OSN compromised account detection, with an accuracy rate of 99.86%.
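The OPT_NDAE architecture itself is not specified in the abstract; a minimal NumPy sketch of the underlying idea, unsupervised feature learning by reconstruction with a single hidden layer (the architecture, learning rate, and epoch count below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def train_autoencoder(X, n_hidden=4, lr=0.05, epochs=2000, seed=0):
    """Tiny autoencoder: sigmoid encoder, linear decoder, trained by
    plain gradient descent on mean squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W1 = rng.normal(0, 0.1, (n, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n)); b2 = np.zeros(n)
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # encoder activations
        R = H @ W2 + b2                            # linear reconstruction
        err = R - X
        dW2 = H.T @ err / len(X); db2 = err.mean(0)
        dH = err @ W2.T * H * (1 - H)              # backprop through sigmoid
        dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
    codes = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # learned features
    return codes, float((err ** 2).mean())
```

The learned codes would then feed a downstream classifier, replacing manual feature selection.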

Keywords: computer security, network security, online social network, account compromisation

Procedia PDF Downloads 119
686 Scenarios of Digitalization and Energy Efficiency in the Building Sector in Brazil: 2050 Horizon

Authors: Maria Fatima Almeida, Rodrigo Calili, George Soares, João Krause, Myrthes Marcele Dos Santos, Anna Carolina Suzano E. Silva, Marcos Alexandre Da

Abstract:

In Brazil, the building sector accounts for 1/6 of energy consumption and 50% of electricity consumption. This complex sector, with several driving actors, plays an essential role in the country's economy. Currently, digitalization readiness in this sector is still low, mainly due to high investment costs and the difficulty of estimating the benefits of digital technologies in buildings. Nevertheless, the potential contribution of digitalization to increasing energy efficiency in the Brazilian building sector has been pointed out as relevant in the political and sectoral contexts, in both the medium- and long-term horizons. To contribute to the debate on the possible evolving trajectories of digitalization in the building sector in Brazil and to support the formulation or revision of current public policies and managerial decisions, three future scenarios were created to anticipate the potential energy efficiency gains due to digitalization by 2050. This work presents these scenarios as a basis for foresighting the potential energy efficiency in this sector according to different digitalization paces (slow, moderate, or fast) in the 2050 horizon. A methodological approach was proposed to create alternative prospective scenarios, combining the Global Business Network (GBN) and the Laboratory for Investigation in Prospective Strategy and Organisation (LIPSOR) methods.
This approach consists of seven steps: (i) definition of the question to be foresighted and the time horizon to be considered (2050); (ii) definition and classification of a set of key variables, using prospective structural analysis; (iii) identification of the main actors with an active role in the digital and energy spheres; (iv) characterization of the current situation (2021) and identification of the main uncertainties considered critical in the development of alternative future scenarios; (v) scanning of possible futures using morphological analysis; (vi) selection and description of the most likely scenarios; and (vii) foresighting the potential energy efficiency in each of the three scenarios, namely slow, moderate, and fast digitalization. Each scenario begins with a core logic and then encompasses potentially related elements, including potential energy efficiency. The first scenario refers to digitalization at a slow pace, with induction by the government limited to public buildings. In the second scenario, digitalization is implemented at a moderate pace, induced by the government in public, commercial, and service buildings, through regulation integrating digitalization and energy efficiency mechanisms. Finally, in the third scenario, digitalization in the building sector is implemented at a fast pace and is strongly induced by the government, but with broad participation of private investment and accelerated adoption of digital technologies. As a result of the slow pace of digitalization in the sector, the potential for energy efficiency stands at levels below 10% of the total of 161 TWh by 2050. In the moderate digitalization scenario, the potential reaches 20 to 30% of the total 161 TWh by 2050. Furthermore, in the fast digitalization scenario, it reaches 30 to 40% of the total 161 TWh by 2050.
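Step (v), scanning possible futures by morphological analysis, amounts to enumerating every combination of uncertainty states (the "morphological box") and then filtering inconsistent ones. A sketch, where the dimensions and states are hypothetical stand-ins, not the study's actual variables:

```python
from itertools import product

# Hypothetical key uncertainties; the study derives its own set from
# prospective structural analysis.
uncertainties = {
    "government_induction": ["public only", "public+commercial", "all sectors"],
    "private_investment": ["low", "high"],
    "technology_adoption": ["slow", "fast"],
}

def morphological_space(dimensions):
    """Enumerate every combination of uncertainty states; experts would
    then discard internally inconsistent combinations."""
    keys = list(dimensions)
    return [dict(zip(keys, combo)) for combo in product(*dimensions.values())]
```

The three retained scenarios correspond to consistent, contrasting points in this space.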

Keywords: building digitalization, energy efficiency, scenario building, prospective structural analysis, morphological analysis

Procedia PDF Downloads 116
685 The Curvature of Bending Analysis and Motion of Soft Robotic Fingers by Full 3D Printing with MC-Cells Technique for Hand Rehabilitation

Authors: Chaiyawat Musikapan, Ratchatin Chancharoen, Saknan Bongsebandhu-Phubhakdi

Abstract:

In recent years, soft robotic fingers have been used to support patients who have survived neurological diseases that result in muscular disorders and neural network damage, such as stroke and Parkinson's disease, as well as inflammatory conditions such as De Quervain syndrome and trigger finger. Hand function is essential for manipulating objects in activities of daily living (ADL). In this work, we propose a soft actuator model manufactured by full 3D printing from a single material, without a molding process. Furthermore, we designed the model with a technique of multi cavitation cells (MC-Cells). We then demonstrated the bending curvature, fluidic pressure, and force generated by the model for assistive finger flexion and hand grasping. The soft actuators were also characterized mathematically from the chord length and arc length. In addition, an adaptive push-button switch machine was used to measure force in our experiment. Consequently, we evaluated biomechanical efficiency by the range of motion (ROM) at the metacarpophalangeal (MCP), proximal interphalangeal (PIP), and distal interphalangeal (DIP) joints. Finally, the model exhibited a consistent relationship between fluidic pressure, force, and ROM for assisting finger flexion and hand grasping.
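Characterizing bending from chord and arc length follows from circular-arc geometry: s = Rθ and c = 2R sin(θ/2), so c/s = sin(θ/2)/(θ/2). A sketch solving for the curvature κ = 1/R by bisection (the circular-arc assumption is ours, though it is the usual model for such measurements):

```python
import math

def curvature_from_chord_arc(chord, arc):
    """Curvature (1/R) of a circular arc from chord and arc length.

    Solves chord/arc = sin(theta/2)/(theta/2) for the subtended angle
    theta by bisection, then returns kappa = theta / arc."""
    if not 0 < chord <= arc:
        raise ValueError("need 0 < chord <= arc")
    if math.isclose(chord, arc):
        return 0.0                      # straight segment: infinite radius
    ratio = chord / arc
    lo, hi = 1e-9, 2 * math.pi          # bracket theta in (0, 2*pi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # sin(x)/x decreases with x, so too large a value means theta too small
        if math.sin(mid / 2) / (mid / 2) > ratio:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    return theta / arc
```

For a semicircular bend (chord 2R, arc πR) this recovers κ = 1/R exactly.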

Keywords: biomechanics efficiency, curvature bending, hand functional assistance, multi cavitation cells (MC-Cells), range of motion (ROM)

Procedia PDF Downloads 262
684 Investor Sentiment and Satisfaction in Automated Investment: A Sentimental Analysis of Robo-Advisor Platforms

Authors: Vertika Goswami, Gargi Sharma

Abstract:

The rapid evolution of fintech has led to the rise of robo-advisor platforms that utilize artificial intelligence (AI) and machine learning to offer personalized investment solutions efficiently and cost-effectively. This research paper conducts a comprehensive sentiment analysis of investor experiences with these platforms, employing natural language processing (NLP) and sentiment classification techniques. The study investigates investor perceptions, engagement, and satisfaction, identifying key drivers of positive sentiment such as clear communication, low fees, consistent returns, and robust security. Conversely, negative sentiment is linked to issues like inconsistent performance, hidden fees, poor customer support, and a lack of transparency. The analysis reveals that addressing these pain points—through improved transparency, enhanced customer service, and ongoing technological advancements—can significantly boost investor trust and satisfaction. This paper contributes valuable insights into the fields of behavioral finance and fintech innovation, offering actionable recommendations for stakeholders, practitioners, and policymakers. Future research should explore the long-term impact of these factors on investor loyalty, the role of emerging technologies, and the effects of ethical investment choices and regulatory compliance on investor sentiment.
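The abstract does not specify the sentiment classification pipeline; a lexicon-based scorer illustrates the simplest form such review analysis can take (the word lists below are hypothetical, chosen to echo the drivers the study identifies):

```python
# Illustrative lexicons only; a production system would use a trained model.
POSITIVE = {"transparent", "consistent", "secure", "helpful", "reliable"}
NEGATIVE = {"hidden", "inconsistent", "poor", "opaque", "unresponsive"}

def sentiment(review):
    """Score a review in [-1, 1]: (positive - negative) / matched words."""
    words = review.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Aggregating such scores over many reviews gives the per-platform sentiment profile the study analyzes.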

Keywords: artificial intelligence in finance, automated investment, financial technology, investor satisfaction, investor sentiment, robo-advisors, sentimental analysis

Procedia PDF Downloads 21
683 InAs/GaSb Superlattice Photodiode Array ns-Response

Authors: Utpal Das, Sona Das

Abstract:

InAs/GaSb type-II superlattice (T2SL) mid-wave infrared (MWIR) focal plane arrays (FPAs) have recently seen rapid development. However, in small-pixel-size, large-format FPAs, the occurrence of high mesa-sidewall surface leakage current is a major constraint necessitating proper surface passivation. A simple pixel isolation technique for InAs/GaSb T2SL detector arrays without conventional mesa etching has been proposed, isolating the pixels by forming a more resistive, higher-band-gap material from the SL in the inter-pixel region. Here, a single-step femtosecond (fs) laser anneal of the inter-pixel T2SL regions has been used to increase the band gap between the pixels by QW intermixing and hence increase the isolation between pixels. The p-i-n photodiode structure used here consists of a 506nm, (10 monolayer {ML}) InAs:Si (1x10¹⁸cm⁻³)/(10ML) GaSb SL as the bottom n-contact layer grown on an n-type GaSb substrate. The undoped absorber layer consists of 1.3µm, (10ML)InAs/(10ML)GaSb SL. The top p-contact layer is a 63nm, (10ML)InAs:Be(1x10¹⁸cm⁻³)/(10ML)GaSb T2SL. In order to improve carrier transport, a 126nm graded-doped (10ML)InAs/(10ML)GaSb SL layer was added between the absorber and each contact layer. A 775nm 150fs laser at a fluence of ~6mJ/cm² is used to expose the array, with the pixel regions masked by a Ti(200nm)-Au(300nm) cap. Here, in the inter-pixel regions, the p+ layer has been reactive ion etched (RIE) using CH₄+H₂ chemistry and removed before fs-laser exposure. The fs-laser anneal isolation improvement in 200-400μm pixels, due to spatially selective quantum well intermixing giving a blue shift of ~70meV in the inter-pixel regions, is confirmed by FTIR measurements. Dark currents are measured between two adjacent pixels with the Ti(200nm)-Au(300nm) caps used as contacts. The T2SL quality in the active photodiode regions masked by the Ti-Au cap is hardly affected and retains the original quality of the detector.
Although fs-laser annealing of p+-only-etched p-i-n T2SL diodes shows a reduction in the reverse dark current, no significant improvement is noticeable in fully RIE-etched mesa structures. Hence, for fabrication of a 128x128 array of 8μm square pixels at 10µm pitch, SU8 polymer isolation after RIE pixel delineation has been used. X-n+ row contacts and Y-p+ column contacts have been used to measure the optical response of the individual pixels. The photo-response of these 8μm and other 200μm pixels under 2ns optical pulse excitation from an optical parametric oscillator (OPO) shows peak responsivities of ~0.03A/W and 0.2mA/W, respectively, at λ~3.7μm. The temporal response of this detector array shows a fast ~10ns response followed by a typical slow decay with ringing, attributed to impedance mismatch of the connecting co-axial cables. In conclusion, response times of a few ns have been measured in 8µm pixels of a 128x128 array. Although fs-laser annealing has been found useful in increasing the inter-pixel isolation in InAs/GaSb T2SL arrays by QW intermixing, it has not been found suitable for passivation of fully RIE-etched mesa structures with vertical walls on InAs/GaSb T2SL.

Keywords: band-gap blue-shift, fs-laser-anneal, InAs/GaSb T2SL, Inter-pixel isolation, ns-Response, photodiode array

Procedia PDF Downloads 153
682 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

Leaning on the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. This paper provides an application of a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed on the object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation that affect the feature recognition rate. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
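The NCC criterion function at the core of the matcher can be sketched in NumPy (a brute-force, single-channel version; the paper's OBIA integration, height partitioning, and morphological filtering are not reproduced):

```python
import numpy as np

def ncc(image, template):
    """Normalized cross-correlation map of template over image (valid
    positions only). Scores lie in [-1, 1]; peaks near 1 mark matches."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            out[i, j] = 0.0 if denom == 0 else (wz * t).sum() / denom
    return out
```

Mean-subtraction and normalization make the score invariant to local brightness and contrast, which is why NCC holds up under the varying lighting conditions reported.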

Keywords: algorithm, LiDAR, object recognition, OBIA

Procedia PDF Downloads 246
681 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: the solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: our findings include prediction models and show some non-intuitive results: the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
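Effect screening from a two-level design can be sketched as follows (the coded -1/+1 factor convention and a main-effects-only linear model are standard DoE assumptions; the study's actual fractional design with replications is richer):

```python
import numpy as np

def screen_effects(levels, response):
    """Fit y = b0 + sum(b_k * x_k) to a two-level design with factors
    coded -1/+1; returns the intercept and per-factor effect sizes,
    which can then be ranked by magnitude for screening."""
    X = np.column_stack([np.ones(len(levels)), levels])
    coef, *_ = np.linalg.lstsq(X, response, rcond=None)
    return coef[0], coef[1:]
```

Factors whose fitted effects are near zero (here, memory and disks for execution time) are screened out of the final predictor models.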

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 120
680 Twitter Ego Networks and the Capital Markets: A Social Network Analysis Perspective of Market Reactions to Earnings Announcement Events

Authors: Gregory D. Saxton

Abstract:

Networks are everywhere: lunch ties among co-workers, golfing partnerships among employees, interlocking board-of-director connections, Facebook friendship ties, etc. Each network varies in terms of its structure: its size, how inter-connected network members are, and the prevalence of sub-groups and cliques. At the same time, within any given network, some members will occupy a more important, more central position on account of their greater number of connections or their capacity as "bridges" connecting members of different network cliques. The logic of network structure and position is at the heart of what is known as social network analysis, and this paper applies this logic to the study of the stock market. Using an array of data analytics and machine learning tools, this study examines 17 million Twitter messages discussing the stocks of the firms in the S&P 1500 index in 2018. Each of these 1,500 stocks has a distinct Twitter discussion network that varies in terms of core network characteristics such as size, density, influence, norms and values, level of activity, and embedded resources. The study's core proposition is that the ultimate effect of any market-relevant information is contingent on the characteristics of the network through which it flows. To test this proposition, the study operationalizes each of the core network characteristics and examines their influence on market reactions to 2018 quarterly earnings announcement events.
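Core characteristics such as size, density, and centrality can be operationalized directly from a tie list (a simple undirected graph without repeated ties is assumed; the study's full measure set is broader):

```python
def network_stats(edges):
    """Size, density, and degree centrality of an undirected network
    given as a list of (a, b) ties, each tie listed once."""
    nodes = {n for e in edges for n in e}
    n = len(nodes)
    degree = {v: 0 for v in nodes}
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    # density: observed ties over the n*(n-1)/2 possible ties
    density = 0.0 if n < 2 else 2 * len(edges) / (n * (n - 1))
    # degree centrality: ties normalized by the n-1 possible partners
    centrality = {v: d / (n - 1) for v, d in degree.items()}
    return n, density, centrality
```

Computed per stock discussion network, such measures become the moderating variables in the earnings-reaction tests.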

Keywords: data analytics, investor-to-investor communication, social network analysis, Twitter

Procedia PDF Downloads 123
679 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for uncertainty evaluation of steel sample content using the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to the model identification of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), according to the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for steel Mn (wt%), Cr (wt%), Ni (wt%), and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applied to construct an accurate computing procedure for uncertainty measurement.
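The paper's extended Kalman filter identifies a multi-factor process model; a scalar sketch of the underlying predict/update cycle (a random-walk state model is assumed, and the q, r values are illustrative, not calibration figures from the paper):

```python
def kalman_1d(measurements, q=1e-4, r=0.05, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a slowly drifting quantity.

    q: process noise variance (random-walk drift per step),
    r: measurement noise variance. Returns the filtered estimates."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: state unchanged, variance grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the innovation z - x
        p = (1 - k) * p           # posterior variance shrinks
        estimates.append(x)
    return estimates
```

The posterior variance p tracked by the filter is one ingredient of the uncertainty budget; the MCMC step then characterizes the full posterior distribution rather than just its variance.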

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 286
678 Artificial Neural Network in Ultra-High Precision Grinding of Borosilicate-Crown Glass

Authors: Goodness Onwuka, Khaled Abou-El-Hossein

Abstract:

Borosilicate-crown (BK7) glass has found broad application in the optic and automotive industries, and the growing demand for nanometric surface finishes is becoming a necessity in such applications. Thus, it has become paramount to optimize the parameters influencing the surface roughness of this precision lens. The research was carried out on a 4-axis Nanoform 250 precision lathe machine with an ultra-high precision grinding spindle. The experiment varied the machining parameters of feed rate, wheel speed, and depth of cut at three levels in different combinations using a Box-Behnken design of experiment, and the resulting surface roughness values were measured using a Taylor Hobson Dimension XL optical profiler. An acoustic emission monitoring technique was applied at a high sampling rate to monitor the machining process, while further signal processing and feature extraction methods were implemented to generate the input to a neural network algorithm. This paper highlights the training and development of a back-propagation neural network prediction algorithm through careful selection of parameters, and the results show a better classification accuracy when compared to a previously developed response surface model with very similar machining parameters. Hence, artificial neural network algorithms provide better surface roughness prediction accuracy in the ultra-high precision grinding of BK7 glass.
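The signal processing and feature extraction feeding the network are not detailed in the abstract; a sketch of classic acoustic-emission features often used for this purpose (the feature choice and threshold are our assumptions):

```python
import numpy as np

def ae_features(signal, threshold=0.1):
    """Classic acoustic-emission features from one raw signal window:
    RMS energy, peak amplitude, and rising threshold-crossing count."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    peak = float(np.max(np.abs(signal)))
    above = np.abs(signal) >= threshold
    crossings = int(np.sum(above[1:] & ~above[:-1]))   # rising crossings only
    return rms, peak, crossings
```

Per-window feature vectors of this kind, paired with measured roughness values, would form the training set for the back-propagation network.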

Keywords: acoustic emission technique, artificial neural network, surface roughness, ultra-high precision grinding

Procedia PDF Downloads 305
677 Bone Mineral Density and Trabecular Bone Score in Ukrainian Men with Obesity

Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Dzerovych, Roksolana Povoroznyuk

Abstract:

Osteoporosis and obesity are widespread diseases in people over 50 years of age, associated with changes in structure and body composition. Higher body mass index (BMI) values are associated with greater bone mineral density (BMD). However, the trabecular bone score (TBS) indirectly explores bone quality, independently of BMD. The aim of our study was to evaluate the relationship between BMD and TBS parameters in Ukrainian men suffering from obesity. We examined 396 men aged 40-89 years. Depending on their BMI, all subjects were divided into two groups: Group I, patients with obesity and a BMI ≥ 30 kg/m² (n=129), and Group II, patients without obesity and a BMI < 30 kg/m² (n=267). The BMD of the total body, lumbar spine L1-L4, femoral neck, and forearm was measured by DXA (Prodigy, GEHC Lunar, Madison, WI, USA). The TBS of L1-L4 was assessed by means of TBS iNsight® software installed on the DXA machine (product of Med-Imaps, Pessac, France). In general, obese men had a significantly higher BMD of the lumbar spine L1-L4, femoral neck, total body, and ultradistal forearm (p < 0.001) in comparison with men without obesity. The TBS of L1-L4 was significantly lower in obese men compared to non-obese ones (p < 0.001). BMD of the lumbar spine L1-L4, femoral neck, and total body differed significantly in men aged 40-49, 50-59, 60-69, and 80-89 years (p < 0.05). At the same time, in men aged 70-79 years, BMD of the lumbar spine L1-L4 (p=0.46), femoral neck (p=0.18), total body (p=0.21), and ultradistal forearm (p=0.13), as well as TBS (p=0.07), did not differ significantly. A significant positive correlation between fat mass and BMD at different sites was observed. However, the correlation between fat mass and the TBS of L1-L4 was also significant, though negative.

Keywords: bone mineral density, trabecular bone score, obesity, men

Procedia PDF Downloads 463
676 Influence Study of the Molar Ratio between Solvent and Initiator on the Reaction Rate of Polyether Polyols Synthesis

Authors: María José Carrero, Ana M. Borreguero, Juan F. Rodríguez, María M. Velencoso, Ángel Serrano, María Jesús Ramos

Abstract:

Flame retardants are incorporated into different materials in order to reduce the risk of fire, either by providing increased resistance to ignition or by acting to slow down combustion and thereby delay the spread of flames. In this work, polyether polyols with flame retardant properties were synthesized because of their wide application in polyurethane formulation. The combustion of polyurethanes depends primarily on the thermal properties of the polymer, the presence of impurities and formulation residue in the polymer, and the supply of oxygen. There are many types of flame retardants, most of them phosphorus compounds of different nature and functionality. The addition of these compounds is the most common method for incorporating flame retardant properties. The employment of glycerol phosphate sodium salt as initiator for the polyol synthesis allows obtaining polyols with phosphate groups in their structure. However, some critical points of the use of glycerol phosphate salt are the lower reactivity of the salt and the necessity of a solvent (dimethyl sulfoxide, DMSO). Thus, the main aim of the present work was to determine the amount of solvent needed to obtain good solubility of the initiator salt. Although the anionic polymerization mechanism of polyether formation is well known, it seems convenient to clarify the role that DMSO plays at the starting point of the polymerization process. The catalyst deprotonates the hydroxyl groups of the initiator, and as a result, two water molecules and the glycerol phosphate alkoxide are formed. This alkoxide, together with DMSO, has to form a homogeneous mixture in which the initiator (solid) and the propylene oxide (PO) are soluble enough to mutually interact. The addition rate of PO increased with the solvent/initiator ratios studied, and the initiation step also became shorter.
Furthermore, the molecular weight of the polyol decreased when higher solvent/initiator ratios were used, which revealed that more salt was activated, initiating more chains of lower length but allowing more phosphate molecules to react and increasing the percentage of phosphorus in the final polyol. However, the final phosphorus content was lower than the theoretical one because only a fraction of the salt was activated. On the other hand, glycerol phosphate disodium salt was still partially insoluble at the studied DMSO proportions; thus, the recovery and reuse of this part of the salt for the synthesis of new flame retardant polyols were evaluated. With the recovered salt, the rate of addition of PO remained the same as with the commercial salt, but a shorter induction period was observed; this is because the recovered salt presents a higher amount of deprotonated hydroxyl groups. Besides, according to molecular weight, polydispersity index, FT-IR spectrum, and thermal stability, there were no differences between the two synthesized polyols. Thus, it is possible to use the recovered glycerol phosphate disodium salt in the same way as the commercial one.

Keywords: DMSO, fire retardants, glycerol phosphate disodium salt, recovered initiator, solvent

Procedia PDF Downloads 279
675 Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection

Authors: S. Shankar Bharathi

Abstract:

Inspection of surface defects on metallic components has always been challenging due to their specular property. Defects such as scratches, rust, and pitting are very common on metallic surfaces during the manufacturing process. These defects, if unchecked, can hamper performance and reduce the lifetime of such components. Many conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation, and textural segmentation. They later employ other suitable algorithms based on morphology, region growing, shape analysis, or neural networks for classification. In this paper, the work has been focused only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved inaccurate in extracting the defects alone. However, this paper does not focus on comparing different segmentation techniques; rather, it describes a novel approach to segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of intensity levels, that is, whether a certain gray level is concentrated or evenly distributed, and works by extracting such concentrated pixels. Defective images showed a higher concentration of certain gray levels, whereas in non-defective images, gray levels were evenly distributed. This formed the basis for detecting the defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
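A minimal sketch of the Hausdorff dilation distance on binary masks: the fewest dilation steps after which each set covers the other, equivalent to the Hausdorff distance under the chessboard metric (the 8-connected 3x3 structuring element is an assumption; both masks must be non-empty):

```python
import numpy as np

def dilate(mask):
    """One 3x3 (8-connected) binary dilation step, pure NumPy."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    out[1:, 1:] |= mask[:-1, :-1]; out[:-1, :-1] |= mask[1:, 1:]
    out[1:, :-1] |= mask[:-1, 1:]; out[:-1, 1:] |= mask[1:, :-1]
    return out

def hausdorff_dilation(a, b):
    """Hausdorff dilation distance between two non-empty boolean masks:
    dilate both until each covers the other, counting the steps."""
    da, db = a.copy(), b.copy()
    steps = 0
    # covered when b is a subset of dilated a AND a is a subset of dilated b
    while not ((da | ~b).all() and (db | ~a).all()):
        da, db = dilate(da), dilate(db)
        steps += 1
    return steps
```

A small distance between a candidate scratch region and a reference segmentation indicates close agreement, which is how the measure strengthens the segmentation decision.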

Keywords: metallic surface, scratches, segmentation, hausdorff dilation distance, machine vision

Procedia PDF Downloads 429
674 Big Data in Construction Project Management: The Colombian Northeast Case

Authors: Sergio Zabala-Vargas, Miguel Jiménez-Barrera, Luz Vargas-Sánchez

Abstract:

In recent years, information related to project management in organizations has been increasing exponentially. Performance data, management statistics, and indicator results have made collection, analysis, traceability, and dissemination essential tasks for project managers. In this sense, there are current trends that facilitate efficient decision-making through emerging technologies such as machine learning, data analytics, data mining, and Big Data; the latter is the focus of this project. This research is part of the thematic line Construction Methods and Project Management. Many authors note the relevance that emerging technologies such as Big Data have gained in recent years in project management in the construction sector, with a main focus on the optimization of time, scope, and budget and, in general, on mitigating risks. This research was developed in the northeastern region of Colombia, South America. The first phase was aimed at diagnosing the use of emerging technologies (Big Data) in the construction sector. In Colombia, the construction sector represents more than 50% of the productive system, and more than 2 million people participate in this economic segment. A quantitative approach was used: a survey was applied to a sample of 91 companies in the construction sector. Preliminary results indicate that the use of Big Data and other emerging technologies is very low, and also that there is interest in modernizing project management. There is evidence of a correlation between interest in using new data management technologies and the incorporation of Building Information Modeling (BIM). The next phase of the research will allow the generation of guidelines and strategies for the incorporation of technological tools in the construction sector in Colombia.

Keywords: big data, building information modeling, technology, project management

Procedia PDF Downloads 129
673 Valence and Arousal-Based Sentiment Analysis: A Comparative Study

Authors: Usama Shahid, Muhammad Zunnurain Hussain

Abstract:

This research paper presents a comprehensive analysis of a sentiment analysis approach that employs valence and arousal as its foundational pillars, in comparison to traditional techniques. Sentiment analysis is an indispensable task in natural language processing that involves the extraction of opinions and emotions from textual data. The valence and arousal dimensions, representing the positivity/negativity and intensity of emotions, respectively, enable the creation of four quadrants, each representing a specific emotional state. The study seeks to determine the impact of using these quadrants to identify distinct emotional states on the accuracy and efficiency of sentiment analysis, relative to traditional techniques. The results reveal that the valence and arousal-based approach outperforms other approaches, particularly in identifying nuanced emotions that may be missed by conventional methods. The study's findings are relevant for applications such as social media monitoring and market research, where the accurate classification of emotions and opinions is paramount. Overall, this research highlights the potential of using valence and arousal as a framework for sentiment analysis and offers insights into the benefits of incorporating specific types of emotions into the analysis. These findings have significant implications for researchers and practitioners in the field of natural language processing, as they provide a basis for the development of more accurate and effective sentiment analysis tools.
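The four-quadrant mapping described in the abstract can be sketched as a simple classifier over a (valence, arousal) pair. The axis ranges and quadrant labels below are illustrative assumptions (the abstract does not specify its scales or emotion names); they follow the common circumplex-style convention of valence and arousal each scored in [-1, 1].

```python
def quadrant_label(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair, each assumed in [-1, 1],
    to one of four quadrant emotional states.

    Labels are illustrative placeholders, not taken from the paper.
    """
    if valence >= 0 and arousal >= 0:
        return "excited/happy"    # positive valence, high arousal
    if valence < 0 and arousal >= 0:
        return "angry/stressed"   # negative valence, high arousal
    if valence < 0:
        return "sad/depressed"    # negative valence, low arousal
    return "calm/content"         # positive valence, low arousal


# Classify a few hypothetical sentiment scores.
samples = [(0.8, 0.7), (-0.6, 0.9), (-0.5, -0.4), (0.6, -0.3)]
labels = [quadrant_label(v, a) for v, a in samples]
```

In practice, the valence and arousal scores themselves would come from an upstream model or lexicon; this sketch only shows how the two dimensions partition the space into the four emotional states the study compares against single-axis polarity labels.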

Keywords: sentiment analysis, valence and arousal, emotional states, natural language processing, machine learning, text analysis, sentiment classification, opinion mining

Procedia PDF Downloads 102
672 Brief Review of the Self-Tightening, Left-Handed Thread

Authors: Robert S. Giachetti, Emanuele Grossi

Abstract:

Loosening of bolted joints in rotating machines can adversely affect their performance, cause mechanical damage, and lead to injuries. In this paper, two potential loosening phenomena in rotating applications are discussed. The first, ‘precession,’ is governed by thread/nut contact forces, while the second is based on inertial effects of the fastened assembly. These mechanisms are reviewed in the context of the historical usage of left-handed fasteners in rotating machines, a topic that appears to be absent from the literature and common machine design texts. Historically, to prevent loosening of wheel nuts, vehicle manufacturers used right-handed and left-handed threads on different sides of the vehicle, but most modern vehicles have abandoned this custom and use only right-handed, tapered lug nuts on all sides. Other classical machines, such as the bicycle, continue to use differently handed threads on each side, while machines such as bench grinders, circular saws, and brush cutters still use left-handed threads to fasten rotating components. Despite the continued use of left-handed fasteners, the rationale for and analysis of left-handed threads to mitigate self-loosening of fasteners in rotating applications is rarely, if ever, discussed in the literature or design textbooks. Without scientific literature to support these design selections, these implementations may be the result of experimental findings or aged institutional knowledge. Based on a review of rotating applications, historical documents, and mechanical design references, a formal study of the paradoxical nature of left-handed threads in various applications is merited.

Keywords: rotating machinery, self-loosening fasteners, wheel fastening, vibration loosening

Procedia PDF Downloads 136