Search results for: cointegration techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6588


5538 Management of Urban Drainage: A Study of the Application of Technologies and Legislation in Goiania, Brazil

Authors: Vinicius Marzall, Jussanã Milograna

Abstract:

Urban drainage remains a major challenge for most Brazilian cities. Like most of them, Goiania, a state capital located in the Midwest of the country, has little legislation on the subject and only one registered compensatory drainage technique. This paper presents solutions adopted in other Brazilian cities with consolidated legislation, proposing techniques for detention tanks on building lots. The study analyzed and compared the legislation of Curitiba, Porto Alegre, and Sao Paulo with the current legislation and policies of Goiania. Models were then created with adopted data for dimensioning detention tanks using the envelope curve method, considering synthetic series of intense precipitation and building lots between 250 m² and 600 m² with an imperviousness rate of 50%. The results showed great differences between the legislation of Goiania and the documentation of the other cities analyzed, for example in the number of drainage techniques adapted to the reality of each city, educational actions to raise the population's awareness of caring for watercourses, and political management through dedicated funds for drainage matters. Furthermore, the use of detention tanks proved practicable, given that the tank occupies less than 3% of the building lot regardless of terrain size, holding the outlet flow to pre-occupation rates in extreme rainfall events. A linear equation was also developed to size the detention tank from the lot area in Goiania, simplifying calculation and implementation for non-specialists.
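The envelope curve method described above can be sketched in a few lines: the required storage is the largest gap, over candidate storm durations, between the runoff volume entering the tank and the volume released at the allowed pre-occupation rate. The IDF coefficients and the allowed specific outflow below are assumed placeholder values for illustration, not the synthetic series derived for Goiania.

```python
def intensity_mm_per_h(duration_min, K=1000.0, b=12.0, n=0.75):
    """ASSUMED intensity-duration relation i = K / (t + b)^n in mm/h."""
    return K / (duration_min + b) ** n

def detention_volume_m3(lot_area_m2, runoff_coeff=0.5,
                        specific_outflow_l_s_per_ha=20.0):
    """Required storage: max over storm durations of (inflow - allowed outflow)."""
    q_out = specific_outflow_l_s_per_ha / 1000.0 * lot_area_m2 / 10000.0  # m3/s
    volume = 0.0
    for t_min in range(5, 241, 5):                          # candidate durations
        i_ms = intensity_mm_per_h(t_min) / 1000.0 / 3600.0  # mm/h -> m/s
        t_s = t_min * 60.0
        volume = max(volume,
                     runoff_coeff * i_ms * lot_area_m2 * t_s - q_out * t_s)
    return volume
```

Because both the inflow and outflow terms scale linearly with lot area, the required volume is exactly proportional to the lot area, which is consistent with the linear sizing equation the authors report.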

Keywords: clean technology, legislation, rainwater management, urban drainage

Procedia PDF Downloads 140
5537 Visual Inspection of Road Conditions Using Deep Convolutional Neural Networks

Authors: Christos Theoharatos, Dimitris Tsourounis, Spiros Oikonomou, Andreas Makedonas

Abstract:

This paper focuses on the problem of visually inspecting and recognizing the road conditions in front of moving vehicles, targeting automotive scenarios. The goal of road inspection is to identify whether the road is slippery or not, as well as to detect possible anomalies on the road surface, such as potholes or bumps/humps. Our work is based on an artificial intelligence methodology for real-time monitoring of road conditions in autonomous driving scenarios, using state-of-the-art deep convolutional neural network (CNN) techniques. Initially, the road and ego lane are segmented within the field of view of the camera that is integrated into the front part of the vehicle. A novel classification CNN is utilized to distinguish between plain and slippery road textures (e.g., wet, snow, etc.). Simultaneously, a robust detection CNN identifies severe surface anomalies within the ego lane, such as potholes and speed bumps/humps, at a distance of 5 to 25 meters. The overall methodology is demonstrated within an integrated application that can be incorporated into complete Advanced Driver-Assistance Systems (ADAS) providing a full range of functionalities. The proposed techniques deliver state-of-the-art detection and classification results and real-time performance on AI accelerator devices such as Intel's Myriad 2/X Vision Processing Unit (VPU).
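The core primitive behind both the classification and detection CNNs described above is the 2D convolution. A minimal sketch of that operation follows (this is not the authors' network, which runs optimized on VPU hardware; it only illustrates the layer-level computation):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation -- the basic operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # dot product of the kernel with the image patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

For example, a horizontal-gradient kernel responds strongly at the boundary between two texture regions, which is the kind of local evidence learned convolutional filters aggregate into texture and anomaly decisions.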

Keywords: deep learning, convolutional neural networks, road condition classification, embedded systems

Procedia PDF Downloads 117
5536 Pricing Strategy in Marketing: Balancing Value and Profitability

Authors: Mohsen Akhlaghi, Tahereh Ebrahimi

Abstract:

Pricing strategy is a vital component in achieving the balance between customer value and business profitability. The aim of this study is to provide insights into the factors, techniques, and approaches involved in pricing decisions. The study utilizes a descriptive approach to discuss various aspects of pricing strategy in marketing, drawing on concepts from market research, consumer psychology, competitive analysis, and adaptability. This approach presents a comprehensive view of pricing decisions. The result of this exploration is a framework that highlights key factors influencing pricing decisions. The study examines how factors such as market positioning, product differentiation, and brand image shape pricing strategies. Additionally, it emphasizes the role of consumer psychology in understanding price elasticity, perceived value, and price-quality associations that influence consumer behavior. Various pricing techniques, including charm pricing, prestige pricing, and bundle pricing, are mentioned as methods to enhance sales by influencing consumer perceptions. The study also underscores the importance of adaptability in responding to market dynamics through regular price monitoring, dynamic pricing, and promotional strategies. It recognizes the role of digital platforms in enabling personalized pricing and dynamic pricing models. In conclusion, the study emphasizes that effective pricing strategies strike a balance between customer value and business profitability, ultimately driving sales, enhancing brand perception, and fostering lasting customer relationships.
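The role of price elasticity in balancing customer value and profitability can be made concrete with a textbook example. The sketch below assumes a linear demand curve Q = a - b*p (an illustrative assumption, not a model from the study) and derives the profit-maximizing price from it:

```python
def optimal_price(a, b, unit_cost):
    """Profit-maximising price under an ASSUMED linear demand Q = a - b*p:
    maximise (p - c) * (a - b*p)  ->  p* = (a + b*c) / (2*b)."""
    return (a + b * unit_cost) / (2.0 * b)

def price_elasticity(a, b, p):
    """Point elasticity of demand, (dQ/dp) * p / Q, for the same demand curve."""
    return -b * p / (a - b * p)
```

At the optimum the elasticity is tied to the cost margin, which is why elasticity estimates from market research feed directly into pricing decisions.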

Keywords: business, customer benefits, marketing, pricing

Procedia PDF Downloads 60
5535 Valence and Arousal-Based Sentiment Analysis: A Comparative Study

Authors: Usama Shahid, Muhammad Zunnurain Hussain

Abstract:

This research paper presents a comprehensive analysis of a sentiment analysis approach that employs valence and arousal as its foundational pillars, in comparison to traditional techniques. Sentiment analysis is an indispensable task in natural language processing that involves the extraction of opinions and emotions from textual data. The valence and arousal dimensions, representing the positivity/negativity and intensity of emotions, respectively, enable the creation of four quadrants, each representing a specific emotional state. The study seeks to determine the impact of utilizing these quadrants to identify distinct emotional states on the accuracy and efficiency of sentiment analysis, in comparison to traditional techniques. The results reveal that the valence- and arousal-based approach outperforms other approaches, particularly in identifying nuanced emotions that may be missed by conventional methods. The study's findings are crucial for applications such as social media monitoring and market research, where the accurate classification of emotions and opinions is paramount. Overall, this research highlights the potential of using valence and arousal as a framework for sentiment analysis and offers valuable insights into the benefits of incorporating specific types of emotions into the analysis. These findings have significant implications for researchers and practitioners in natural language processing, as they provide a basis for the development of more accurate and effective sentiment analysis tools.
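The four-quadrant structure described above can be sketched directly: a (valence, arousal) score pair maps to one of four emotional states. The quadrant labels below are illustrative placeholders, and the scores are assumed to be centred at zero:

```python
def emotion_quadrant(valence, arousal):
    """Map a (valence, arousal) pair to one of the four quadrants.
    Labels are illustrative; scores are assumed centred at 0."""
    if valence >= 0:
        return "excited/happy" if arousal >= 0 else "calm/content"
    return "angry/stressed" if arousal >= 0 else "sad/bored"
```

A sentiment pipeline built on this representation classifies, say, high-valence/low-arousal text differently from high-valence/high-arousal text, which is exactly the nuance a single positive/negative label collapses.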

Keywords: sentiment analysis, valence and arousal, emotional states, natural language processing, machine learning, text analysis, sentiment classification, opinion mining

Procedia PDF Downloads 75
5534 Comprehensive Feature Extraction for Optimized Condition Assessment of Fuel Pumps

Authors: Ugochukwu Ejike Akpudo, Jang-Wook Hur

Abstract:

The increasing demand for improved productivity, maintainability, and reliability has prompted rapidly growing research on the emerging condition-based maintenance concept: Prognostics and Health Management (PHM). A variety of fuel pumps serve critical functions in several hydraulic systems; hence, their failure can have serious effects on productivity, safety, etc. The need for condition monitoring and assessment of these pumps cannot be overemphasized, and this has led to a surge in research on standard feature extraction techniques for optimized condition assessment of fuel pumps. By extracting time-based, frequency-based, and the more robust time-frequency-based features from the pumps' vibration signals, a more comprehensive feature assessment (and selection) can be achieved for a more accurate and reliable condition assessment. With the aid of emerging nonlinear embedding algorithms such as locally linear embedding (LLE), we propose a method for comprehensive condition assessment of electromagnetic fuel pumps (EMFPs). Results show that LLE, as a comprehensive feature extraction technique, yields better feature fusion/dimensionality reduction results for condition assessment of EMFPs than the use of single features. Also, unlike other feature fusion techniques, its capabilities as a fault classification technique were explored, and the results show an acceptable accuracy level using standard performance metrics for evaluation.
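The time- and frequency-based extraction step can be illustrated with a small, generic feature set (an illustrative subset only, not the paper's full feature pool or its LLE fusion stage):

```python
import numpy as np

def vibration_features(signal, fs):
    """A small illustrative subset of time- and frequency-domain features of
    the kind fused for pump condition assessment."""
    rms = np.sqrt(np.mean(signal ** 2))                      # time domain: energy
    centred = signal - signal.mean()
    kurtosis = np.mean(centred ** 4) / np.var(signal) ** 2   # impulsiveness
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)   # frequency domain
    return {"rms": rms, "kurtosis": kurtosis, "spectral_centroid": centroid}
```

Stacking such features per measurement window produces the high-dimensional vectors that a dimensionality-reduction step like LLE then fuses before classification.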

Keywords: electromagnetic fuel pumps, comprehensive feature extraction, condition assessment, locally linear embedding, feature fusion

Procedia PDF Downloads 105
5533 Analysis of Ionospheric Variations over Japan during 23rd Solar Cycle Using Wavelet Techniques

Authors: C. S. Seema, P. R. Prince

Abstract:

The characterization of spatio-temporal inhomogeneities occurring in the ionospheric F₂ layer is remarkable, since these variations are direct consequences of the electrodynamic coupling between the magnetosphere and solar events. The temporal and spatial variations of the F₂ layer, which occur with periods of several days or even years, mainly owe to geomagnetic and meteorological activities. The hourly F₂ layer critical frequency (foF2) over the 23rd solar cycle (1996-2008) at three ionosonde stations (Wakkanai, Kokubunji, and Okinawa) in the northern hemisphere, which fall within the same longitudinal span, is analyzed using continuous wavelet techniques. The Morlet wavelet is used to transform the continuous time series of foF2 into a two-dimensional time-frequency space, quantifying the time evolution of the oscillatory modes. The presence of significant periodicities and the time location of each periodicity are detected from the two-dimensional representation of the wavelet power in the scale-period plane of the time series. The mean strength of each periodicity over the entire period of analysis is studied using the global wavelet spectrum. The quasi-biennial, annual, semiannual, 27-day, diurnal, and 12-hour variations of foF2 are clearly evident in the wavelet power spectra at all three stations. Critical frequency oscillations with multi-day periods (2-3 days and 9 days at the low-latitude station, 6-7 days at all stations, and 15 days at the mid-high latitude station) are also superimposed on larger time-scale variations.
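The Morlet-wavelet analysis can be sketched as a direct convolution of the hourly series with scaled Morlet wavelets; the mean power per scale is the global wavelet spectrum mentioned above. This is a simplified illustration (no cone-of-influence or significance testing, unlike a full analysis):

```python
import numpy as np

def morlet_cwt_power(signal, dt, scales, w0=6.0):
    """Simplified Morlet continuous wavelet transform power.  For w0 = 6 the
    equivalent Fourier period is roughly 1.03 * scale."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    power = np.empty((len(scales), n))
    for k, s in enumerate(scales):
        x = t / s
        wavelet = np.exp(1j * w0 * x - x ** 2 / 2.0) / np.sqrt(s)
        coeffs = np.convolve(signal, np.conj(wavelet[::-1]), mode="same")
        power[k] = np.abs(coeffs) ** 2
    return power
```

Averaging the power map along the time axis gives the global wavelet spectrum, whose peaks mark the dominant periodicities (diurnal, semiannual, etc.) in the foF2 record.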

Keywords: continuous wavelet analysis, critical frequency, ionosphere, solar cycle

Procedia PDF Downloads 198
5532 Modular 3D Environmental Development for Augmented Reality

Authors: Kevin William Taylor

Abstract:

This work used industry-standard practices and technologies as a foundation to explore current and future advances in modularity for 3D environmental production. Covering environmental generation and AI-assisted generation, it investigated how these areas will shape the industry's goal of achieving full immersion within augmented reality environments. The study explores modular environmental construction techniques used in large-scale 3D productions, including the reasoning behind this approach to production, the principles of successful development, potential pitfalls, and methodologies for implementing the practice in commercial and proprietary interactive engines. A focus is placed on the role of 3D artists in the future of environmental development, who must adapt to new approaches as the field evolves in response to parallel technological advancements. Industry findings and projections suggest how these factors will affect the widespread use of augmented reality in daily life, continuing to steer technology towards expansive interactive environments and changing the tools and techniques used to develop environments for games, film, and VFX. The study concludes that this technology will be the cornerstone for AI-driven AR capable of fully retheming our world and changing how we see and engage with one another, making virtual self-identity as prevalent as real-world identity. While this progression scares or even threatens some, we are arguably seeing the beginnings of a technological revolution that will surpass the impact the smartphone had on modern society.

Keywords: virtual reality, augmented reality, training, 3D environments

Procedia PDF Downloads 106
5531 Influence of Surface Preparation Effects on the Electrochemical Behavior of 2098-T351 Al–Cu–Li Alloy

Authors: Rejane Maria P. da Silva, Mariana X. Milagre, João Victor de S. Araujo, Leandro A. de Oliveira, Renato A. Antunes, Isolda Costa

Abstract:

Al-Cu-Li alloys are advanced materials for aerospace applications because of their interesting mechanical properties and low density compared with conventional Al alloys. However, Al-Cu-Li alloys are susceptible to localized corrosion. The near-surface deformed layer (NSDL) induced by the rolling process during production of the alloy, and its removal by polishing, can influence the corrosion susceptibility of these alloys. In this work, the influence of surface preparation effects on the electrochemical activity of AA2098-T351 (an Al-Cu-Li alloy) was investigated using a correlation between surface chemistry, microstructure, and electrochemical activity. Two conditions were investigated: polished and as-received surfaces of the alloy. The morphology of the two types of surfaces was investigated using confocal laser scanning microscopy (CLSM) and optical microscopy. The surface chemistry was analyzed by X-ray photoelectron spectroscopy (XPS) and energy dispersive X-ray spectroscopy (EDS). Global electrochemical techniques (potentiodynamic polarization and EIS) and a local electrochemical technique (localized electrochemical impedance spectroscopy, LEIS) were used to examine the electrochemical activity of the surfaces. The results showed that in the as-received surface, the NSDL, which is composed of Mg-rich bands, influenced the electrochemical behavior of the alloy, and that electrochemical activity was higher for the polished surface condition than for the as-received one.

Keywords: Al-Cu-Li alloys, surface preparation effects, electrochemical techniques, localized corrosion

Procedia PDF Downloads 138
5530 Participation in IAEA Proficiency Test to Analyse Cobalt, Strontium and Caesium in Seawater Using Direct Counting and Radiochemical Techniques

Authors: S. Visetpotjanakit, C. Khrautongkieo

Abstract:

Radiation monitoring in the environment and foodstuffs is one of the main responsibilities of the Office of Atoms for Peace (OAP) as the nuclear regulatory body of Thailand. The main goal of the OAP is to assure the safety of the Thai people and environment from any radiological incidents. Various radioanalytical methods have been developed to monitor radiation and radionuclides in environmental and foodstuff samples. To validate our analytical performance, several proficiency test exercises from the International Atomic Energy Agency (IAEA) have been performed. Here, the results of a proficiency test exercise referred to as the Proficiency Test for Tritium, Cobalt, Strontium and Caesium Isotopes in Seawater 2017 (IAEA-RML-2017-01) are presented. All radionuclides except ³H were analysed using various radioanalytical methods, i.e., direct gamma-ray counting for determining ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs, and developed radiochemical techniques for analysing ¹³⁴Cs and ¹³⁷Cs using an AMP pre-concentration technique and ⁹⁰Sr using a di-(2-ethylhexyl) phosphoric acid (HDEHP) liquid extraction technique. The analysis results were submitted to the IAEA. All results passed the IAEA criteria, i.e., accuracy, precision and trueness, and obtained 'Accepted' status. These results confirm the capability of the OAP environmental radiation laboratory to monitor radiation in the environment.
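The direct gamma-ray counting step relies on the standard activity relation A = N / (eps * P_gamma * t), where N is the net photopeak area, eps the detection efficiency, P_gamma the gamma emission probability, and t the live counting time. A minimal sketch (the numbers in the usage check are purely illustrative, not OAP calibration data):

```python
def activity_bq(net_counts, efficiency, gamma_intensity, live_time_s):
    """Activity in Bq from a net photopeak area: A = N / (eps * P_gamma * t).
    All inputs here are illustrative placeholders, not laboratory data."""
    return net_counts / (efficiency * gamma_intensity * live_time_s)
```

The same relation, with nuclide-specific efficiency and emission probability, is applied per photopeak when quantifying ⁶⁰Co, ¹³⁴Cs and ¹³⁷Cs from a single spectrum.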

Keywords: international atomic energy agency, proficiency test, radiation monitoring, seawater

Procedia PDF Downloads 162
5529 Effects of Auxetic Antibacterial Zwitterion Carboxylate and Sulfonate Copolymer Hydrogels for Diabetic Wound Healing Application

Authors: Udayakumar Vee, Franck Quero

Abstract:

Zwitterionic polymers are generally viewed as a new class of antimicrobial and non-fouling materials. Bearing an equimolar number of homogeneously distributed anionic and cationic groups along their polymer chains, they offer broad versatility for chemical modification and hence great freedom for accurate molecular design. This study explores the effectiveness of auxetic zwitterion carboxylate/sulfonate hydrogels in a diabetic-induced mouse model. A series of silver metal-doped auxetic zwitterion carboxylate/sulfonate/vinylaniline copolymer hydrogels is designed via 3D printing. The zwitterion monomers have been characterized by FT-IR and NMR techniques. The effect of changing the monomers and of different loading ratios of Ag over zwitterion on the final hydrogel materials' antimicrobial properties and biocompatibility will be investigated in detail. The synthesized auxetic hydrogels have been characterized using a wide range of techniques to help establish the relationship between the molecular-level and macroscopic properties of these materials, including mechanical, antibacterial, biocompatibility, and wound healing properties. The comparative studies and results of this work provide new insights and guide the choice of a better auxetic structured material for a broad spectrum of wound healing applications in the animal model. We expect this approach to provide a versatile and robust platform for biomaterial design that could lead to promising treatments for wound healing applications.

Keywords: auxetic, zwitterion, carboxylate, sulfonate, polymer, wound healing

Procedia PDF Downloads 123
5528 A Grey-Box Text Attack Framework Using Explainable AI

Authors: Esther Chiramal, Kelvin Soh Boon Kai

Abstract:

Explainable AI is a strategy implemented to understand complex black-box model predictions in human-interpretable language. It provides the evidence required to build trustworthy and reliable AI systems. However, it also opens the door to locating possible vulnerabilities in an AI model. Traditional adversarial text attacks use word substitution, data augmentation techniques, and gradient-based attacks on powerful pre-trained Bidirectional Encoder Representations from Transformers (BERT) variants to generate adversarial sentences. These attacks are generally white-box in nature and impractical, as they can be easily detected by humans, e.g., changing the word from "Poor" to "Rich". We propose a simple yet effective grey-box-cum-black-box approach that does not require knowledge of the model, using a set of surrogate Transformer/BERT models to perform the attack with explainable AI techniques. As Transformers are the current state-of-the-art models for almost all Natural Language Processing (NLP) tasks, an attack generated against BERT1 is transferable to BERT2. This transferability is made possible by the attention mechanism in the transformer, which allows the model to capture long-range dependencies in a sequence. Using the power of BERT generalisation via attention, we attempt to exploit how transformers learn by attacking several surrogate transformer variants, each based on a different architecture. We demonstrate that this approach is highly effective at generating semantically sound sentences, changing as little as one word in a way that is not detectable by humans while still fooling other BERT models.
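The explainability signal that guides such an attack can be sketched with a simple leave-one-out word-importance ranking: words whose removal most lowers a surrogate model's score are the prime candidates for substitution. The `score_fn` below is a stand-in for a surrogate BERT's class probability; this is an illustrative sketch of the idea, not the authors' exact pipeline:

```python
def word_importance(sentence, score_fn):
    """Rank words by the drop in a surrogate model's score when each word is
    deleted.  `score_fn` maps a sentence to a scalar score (e.g. the class
    probability of a surrogate transformer)."""
    words = sentence.split()
    base = score_fn(sentence)
    ranked = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        ranked.append((word, base - score_fn(reduced)))  # importance = score drop
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

An attacker then perturbs only the top-ranked word, which is how a single-word change can flip the prediction of other BERT models via transferability.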

Keywords: BERT, explainable AI, Grey-box text attack, transformer

Procedia PDF Downloads 121
5527 The Clustering of Multiple Sclerosis Subgroups through L2 Norm Multifractal Denoising Technique

Authors: Yeliz Karaca, Rana Karabudak

Abstract:

Multifractal denoising techniques identify significant attributes by removing noise from a dataset. Magnetic resonance (MR) imaging is the most sensitive method for identifying chronic disorders of the nervous system such as Multiple Sclerosis (MS). MRI and Expanded Disability Status Scale (EDSS) data from 120 individuals with one of the MS subgroups (Relapsing Remitting MS (RRMS), Secondary Progressive MS (SPMS), Primary Progressive MS (PPMS)), as well as from 19 healthy individuals in the control group, were used in this study. The study comprised the following stages: (i) the L2 Norm Multifractal Denoising technique, one of the multifractal techniques, was applied to the MS data (MRI and EDSS), yielding a new dataset; (ii) this new dataset was fed to the K-Means and Fuzzy C-Means (FCM) clustering algorithms, which are among the unsupervised methods, and their clustering performances were compared; (iii) excellent performance was obtained in identifying significant attributes in the MS subgroups and the healthy control group through multifractal (L2 norm) denoising combined with K-Means and FCM. According to the clustering results based on the MS subgroups, clustering was more successful on the denoised dataset (L2_Norm MS Data Set), in which significant attributes were obtained by applying the L2 norm denoising technique, than on the raw data for both K-Means and FCM.
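The clustering stage of the pipeline can be sketched with a plain K-Means implementation (the L2-norm multifractal denoising step itself is not reproduced here; the input is assumed to be the already-denoised feature matrix):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-Means: alternate nearest-centre assignment and centre update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # squared Euclidean distance from every point to every centre
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Running the same clustering on raw versus denoised features and comparing the resulting partitions against the known subgroups (RRMS/SPMS/PPMS/control) is how the performance gain from denoising is measured.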

Keywords: clinical decision support, clustering algorithms, multiple sclerosis, multifractal techniques

Procedia PDF Downloads 151
5526 Antibacterial Zwitterion Carboxylate and Sulfonate Copolymer Auxetic Hydrogels for Diabetic Wound Healing Application

Authors: Udayakumar Veerabagu, Franck Quero

Abstract:

Zwitterion carboxylate and sulfonate polymers are generally viewed as a new class of antimicrobial and non-fouling materials. Bearing an equimolar number of homogeneously distributed anionic and cationic groups along their polymer chains, they offer broad versatility for chemical modification and hence great freedom for accurate molecular design. This study explores the effectiveness of auxetic zwitterion carboxylate/sulfonate hydrogels in a diabetic-induced mouse model. A series of silver metal-doped auxetic zwitterion carboxylate/sulfonate/vinylaniline copolymer hydrogels is designed via 3D printing. The zwitterion monomers have been characterized by FT-IR and NMR techniques. The effect of changing the monomers and of different loading ratios of Ag over zwitterion on the final hydrogel materials' antimicrobial properties and biocompatibility will be investigated in detail. The synthesized auxetic hydrogels have been characterized using a wide range of techniques to help establish the relationship between the molecular-level and macroscopic properties of these materials, including mechanical, antibacterial, biocompatibility, and wound healing properties. The comparative studies and results of this work provide new insights and guide the choice of a better auxetic structured material for a broad spectrum of wound healing applications in the animal model. We expect this approach to provide a versatile and robust platform for biomaterial design that could lead to promising treatments for wound healing applications.

Keywords: auxetic, zwitterion, carboxylate, sulfonate, polymer, wound healing

Procedia PDF Downloads 137
5525 Porcelain Paste Processing by Robocasting 3D: Parameters Tuning

Authors: A. S. V. Carvalho, J. Luis, L. S. O. Pires, J. M. Oliveira

Abstract:

Additive manufacturing (AM) technologies experienced remarkable growth in recent years due to the development and diffusion of a wide range of three-dimensional (3D) printing techniques. Nowadays, some techniques are available to non-industrial users, like fused filament fabrication, while techniques such as 3D printing, polyjet, selective laser sintering, and stereolithography are mainly spread in industry. Robocasting (R3D) shows great potential due to its ability to shape materials with a wide range of viscosities. Industrial porcelain compositions showing different rheological behaviour can be prepared and used as candidate materials to be processed by R3D, yet the use of this AM technique in industry is very residual. In this work, a specific porcelain composition with suitable rheological properties was processed by R3D, and a systematic study of the printing parameter tuning is shown. The porcelain composition was formulated based on an industrial spray-dried porcelain powder. The powder particle size and morphology were analysed. The powders were mixed with water and an organic binder in a ball mill at 200 rpm for 24 hours. The batch viscosity was adjusted by the addition of an acid solution and mixed again. The paste density, viscosity, zeta potential, particle size distribution, and pH were determined. In the R3D system, different speed and pressure settings were studied to assess their impact on the fabrication of porcelain models. These models were dried at 80 °C for 24 hours and sintered in air at 1350 °C for 2 hours. The stability of the models and the quality of their walls and surfaces were studied, and their physical properties were assessed. The microstructure and layer adhesion were observed by SEM. The studied processing parameters have a high impact on the quality of the models and on the stacking of the filaments. Adequate tuning of the parameters has a huge influence on the final properties of the porcelain models. This work contributes to a better assimilation of AM technologies in the ceramic industry. Acknowledgments: The RoboCer3D project - a project of additive rapid manufacturing through 3D printing of ceramic material (POCI-01-0247-FEDER-003350) financed by Compete 2020, PT 2020, and the European Regional Development Fund (FEDER) through the International and Competitive Operational Program (POCI) under the PT2020 partnership agreement.

Keywords: additive manufacturing, porcelain, robocasting, R3D

Procedia PDF Downloads 151
5524 Process Monitoring Based on Parameterless Self-Organizing Map

Authors: Young Jae Choung, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a popular technique for process monitoring. A widely used tool in SPC is the control chart, which is used to detect an abnormal status of a process and maintain the controlled status of the process. Traditional control charts, such as Hotelling's T² control chart, are effective techniques to detect abnormal observations and monitor processes. However, many complicated manufacturing systems exhibit nonlinearity because of the different demands of the market. In this case, the unregulated use of a traditional linear modeling approach may not be effective. In reality, many industrial processes exhibit nonlinear and time-varying properties because of fluctuations in process raw materials, slow shifts of the set points, aging of the main process components, seasonal effects, and catalyst deactivation. The use of traditional SPC techniques with time-varying data will degrade the performance of the monitoring scheme. To address these issues, in the present study, we propose a parameterless self-organizing map (PLSOM)-based control chart. The PLSOM-based control chart not only can manage a situation where the distribution or parameters of the target observations change, but also addresses the nonlinearity of modern manufacturing systems. The control limits of the proposed PLSOM chart are established by estimating the empirical level of significance on the percentile using a bootstrap method. Experimental results with simulated data and actual process data from a thin-film transistor-liquid crystal display process demonstrated the effectiveness and usefulness of the proposed chart.
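The bootstrap construction of the control limit can be sketched generically: resample the in-control monitoring statistic (e.g. a map's quantization error), take the empirical (1 - alpha) percentile of each resample, and average. This is a sketch of the percentile-bootstrap idea under those assumptions, not the authors' exact procedure:

```python
import numpy as np

def bootstrap_upper_limit(in_control_stats, alpha=0.01, n_boot=1000, seed=0):
    """Upper control limit: mean of bootstrapped (1 - alpha) empirical
    percentiles of an in-control monitoring statistic."""
    rng = np.random.default_rng(seed)
    stats = np.asarray(in_control_stats, dtype=float)
    limits = [np.percentile(rng.choice(stats, size=stats.size, replace=True),
                            100.0 * (1.0 - alpha))
              for _ in range(n_boot)]
    return float(np.mean(limits))
```

New observations whose statistic exceeds this limit are flagged as out of control; because the limit comes from the empirical distribution, no parametric (e.g. normality) assumption is needed.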

Keywords: control chart, parameter-less self-organizing map, self-organizing map, time-varying property

Procedia PDF Downloads 256
5523 Pellegrini-Stieda Syndrome: A Physical Medicine and Rehabilitation Approach

Authors: Pedro Ferraz-Gameiro

Abstract:

Introduction: The Pellegrini-Stieda lesion is the result of post-traumatic calcification and/or ossification of the medial collateral ligament (MCL) of the knee. When this calcification is accompanied by gonalgia (knee pain) and limitation of knee flexion, it is called Pellegrini-Stieda syndrome. The pathogenesis is probably the calcification of a post-traumatic hematoma at least three weeks after the initial trauma, or secondary to repetitive microtrauma. On anteroposterior radiographs, a Pellegrini-Stieda lesion appears as a linear vertical ossification or calcification of the proximal portion of the MCL, usually near the medial femoral condyle. Patients with Pellegrini-Stieda syndrome present knee pain associated with loss of range of motion. The treatment is usually conservative, with analgesic and anti-inflammatory drugs, either systemic or intra-articular. Physical medicine and rehabilitation techniques associated with shock wave therapy can be a way of reducing pain and inflammation. Patients who maintain instability with significant limitation of knee mobility may require surgical excision. Methods: Research was conducted in PubMed Central using the term 'Pellegrini-Stieda syndrome'. Discussion/conclusion: Medical treatment is the rule, with initial rest, anti-inflammatory drugs, and physiotherapy. If left untreated, this ossification can potentially form a significant bone mass, which can compromise the range of motion of the knee. Physical medicine and rehabilitation techniques associated with shock wave therapy are a way of reducing pain and inflammation.

Keywords: knee, Pellegrini-Stieda syndrome, rehabilitation, shock waves therapy

Procedia PDF Downloads 118
5522 The Application and Relevance of Costing Techniques in Service-Oriented Business Organizations: A Review of the Activity-Based Costing (ABC) Technique

Authors: Udeh Nneka Evelyn

Abstract:

The shortcomings of traditional costing systems in terms of validity, accuracy, consistency, and relevance have increased the need for modern management accounting systems. Activity-Based Costing (ABC) can be used as a modern tool for planning, control, and decision-making by management. Past studies on ABC systems have focused on manufacturing firms, leaving studies on service firms somewhat scanty. This paper reviewed the application and relevance of the activity-based costing technique in service-oriented business organizations through a qualitative research method that relied heavily on a literature review of past and current articles focusing on ABC. Findings suggest that ABC is not only appropriate for use in a manufacturing environment; it is also well suited to service organizations such as financial institutions, the healthcare industry, and government organizations. In fact, some banking and financial institutions have been applying the concept for years under other names. One of them is unit costing, which is used to calculate the cost of banking services by determining the cost and consumption of each unit of output of the functions required to deliver the service. In basic terms, ABC may provide a very good payback for businesses. Benefits that relate directly to the financial services industry include identification of the most profitable customers, more accurate product and service pricing, increased product profitability, and well-organized process costs.
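The unit-costing calculation described above can be sketched directly: each activity pool's rate is its cost divided by its driver volume, and a service unit's cost sums rate times consumption. The pools, driver volumes, and consumption figures below are illustrative placeholders, not real bank data:

```python
def abc_unit_cost(pool_costs, driver_volumes, units_consumed):
    """Activity-based cost of one service unit:
    rate per pool = pool cost / driver volume;
    unit cost = sum over pools of (rate * driver units consumed)."""
    rates = {a: pool_costs[a] / driver_volumes[a] for a in pool_costs}
    return sum(rates[a] * used for a, used in units_consumed.items())
```

For example, with a teller pool of 120,000 spread over 60,000 transactions and a processing pool of 80,000 over 40,000 items, an account consuming 3 transactions and 1 processed item costs 3 x 2 + 1 x 2 = 8 per period under these assumed figures.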

Keywords: business, costing, organizations, planning, techniques

Procedia PDF Downloads 227
5521 Synthesis, Structural, Spectroscopic and Nonlinear Optical Properties of New Picolinate Complex of Manganese (II) Ion

Authors: Ömer Tamer, Davut Avcı, Yusuf Atalay

Abstract:

A novel picolinate complex of the manganese(II) ion, [Mn(pic)2] [pic: picolinate or 2-pyridinecarboxylate], was prepared and fully characterized by single-crystal X-ray structure determination. The manganese(II) complex was characterized by FT-IR, FT-Raman, and UV-Vis spectroscopic techniques. The C=O, C=N, and C=C stretching vibrations were found to be strong and simultaneously active in the IR and Raman spectra. In order to support these experimental techniques, density functional theory (DFT) calculations were performed with Gaussian 09W. Although the supramolecular interactions have some influence on the molecular geometry in the solid-state phase, the calculated data show that the predicted geometries can reproduce the structural parameters. The molecular modeling and the calculations of the IR, Raman, and UV-Vis spectra were performed at DFT levels. The nonlinear optical (NLO) properties of the synthesized complex were evaluated by determining the dipole moment (µ), polarizability (α), and hyperpolarizability (β). The obtained results demonstrated that the manganese(II) complex is a good candidate for an NLO material. The stability of the molecule arising from hyperconjugative interactions and charge delocalization was analyzed using natural bond orbital (NBO) analysis. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO), also known as the frontier molecular orbitals, were simulated, and the obtained energy gap confirmed that charge transfer occurs within the manganese(II) complex. The molecular electrostatic potential (MEP) of the synthesized manganese(II) complex displays the electrophilic and nucleophilic regions: the most negative region is located over the carboxyl O atoms, while the most positive region is located over the H atoms.

Keywords: DFT, picolinate, IR, Raman, nonlinear optic

Procedia PDF Downloads 480
5520 Scientific and Technical Basis for the Application of Textile Structures in Glass Using Pate De Verre Technique

Authors: Walaa Hamed Mohamed Hamza

Abstract:

Textile structures are the ways in which warp and weft threads are interlaced on the loom to form a woven fabric. Different methods of interlacing the warp and the weft produce different textile structures, which differ from each other in surface appearance; among them are the so-called simple textile structures. Textile structures are the basis of woven fabric, through which aesthetic values can be achieved in the textile industry by weaving warp threads with the weft at varying degrees, up to the total dominance of one of the two sets of threads over the other. Hence the idea of using art and design with different textile structures, under the modern pate de verre technique, to create designs suitable for glass products employed in interior architecture. The research problem: textile structures, in general, have a significant impact on the appearance of fabrics in both formal and aesthetic terms. How can the characteristics of different textile structures be exploited in glass designs with different artistic values? The research achieves its goal through the investment of simple textile structures in innovative artistic designs using the pate de verre technique, as well as through the use of designs derived from textile structures in architecture to add various aesthetic values. The importance of the research lies in the revival of heritage using ancient techniques, in the synergy between different fields of the applied arts such as glass and textiles, and in studying the diverse effects produced by each textile structure and the possibility of their use in various designs in interior architecture. The research demonstrates that, by investing in simple textile structures, innovative artistic designs produced with the pate de verre technique can be used in interior architecture.

Keywords: glass, interior architecture, pate de verre, textile structures

Procedia PDF Downloads 274
5519 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Considering the ever-increasing population and the industrialization of human life, toxins produced in food products can no longer be detected using traditional techniques. Isolation-based testing of food products is not cost-effective, and in most cases the precision of practical techniques such as bacterial cultivation suffers from operator errors or errors in the media used. Hence, with the advent of nanotechnology, the design of selective and smart sensors is one of the great industrial revolutions in the quality control of food products: within a few minutes, and with very high precision, they can identify the amount and toxicity of bacteria. Methods and Materials: In this technique, a sensor based on the attachment of a bacterial antibody to nanoparticles was used. As the absorption basis for recognition of the bacterial toxin, silica nanoparticles of about 10 nm (Notrino brand), in the form of a solid powder, were utilized. The suspension of antibody-linked nanosilica was then placed in contact with samples of distilled water contaminated with Staphylococcus aureus toxin at a dilution of 10⁻³, so that if any toxin existed in the sample, a connection between the toxin antigen and the antibody would be formed. Finally, the light absorption related to the binding of the antigen to the particle-attached antibody was measured by spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was also used as a control. The accuracy of the test was monitored using serial dilutions (10⁻⁶) of an overnight cell culture of Staphylococcus spp. (OD600: 0.02 = 10⁷ cells); this showed that the sensitivity of PCR is 10 bacteria per ml within a few hours.
Result: The results indicate that the sensor detects down to a 10⁻⁴ dilution. Additionally, the sensitivity of the sensors was examined after 60 days: the sensor gave confirmatory results up to day 56, and its response started to decrease after that period. Conclusions: The advantages of the practical nanobiosensor over conventional methods, such as culture and molecular techniques (e.g., the polymerase chain reaction), are its accuracy, sensitivity, and specificity. It also reduces the detection time from hours to about 30 minutes.

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 371
5518 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
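The text side of such a pipeline can be sketched as TF-IDF features extracted from report text feeding a Random Forest. The report snippets and labels below are invented placeholders, not the Indiana University data, and real systems would fuse these features with image-derived ones.

```python
# Sketch: bag-of-ngrams features from radiology-report text -> Random Forest.
# Reports and labels are toy examples for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

reports = [
    "no acute cardiopulmonary abnormality",
    "heart size normal lungs clear",
    "focal airspace opacity suspicious for pneumonia",
    "right lower lobe consolidation consistent with pneumonia",
    "clear lungs no effusion",
    "patchy opacity with possible infiltrate",
]
labels = [0, 0, 1, 1, 0, 1]  # 0 = normal, 1 = abnormal (illustrative)

# Turn free text into sparse TF-IDF vectors over unigrams and bigrams.
vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(reports)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)

# Score a new (hypothetical) report with the same fitted vectorizer.
new = vec.transform(["left lower lobe opacity concerning for pneumonia"])
print(clf.predict(new)[0])
```

In practice the interesting work is in the feature merge and in validating against held-out reports; the classifier itself is the simplest part.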

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 20
5517 Effects of Waist-to-Hip Ratio and Visceral Fat Measurements Improvement on Offshore Petrochemical Company Shift Employees' Work Efficiency

Authors: Essam Amerian

Abstract:

The aim of this study was to investigate the effects of improving waist-to-hip ratio (WHR) and visceral fat components on the health of shift workers in an offshore petrochemical company. A total of 100 male shift workers participated in the study, with an average age of 40.5 years and an average BMI of 28.2 kg/m². The study employed a randomized controlled trial design, with participants assigned to either an intervention group or a control group. The intervention group received a 12-week program that included dietary counseling, physical activity recommendations, and stress management techniques. The control group received no intervention. The outcomes measured were changes in WHR, visceral fat components, blood pressure, and lipid profile. The results showed that the intervention group had a statistically significant improvement in WHR (p<0.001) and visceral fat components (p<0.001) compared to the control group. Furthermore, there were statistically significant improvements in systolic blood pressure (p=0.015) and total cholesterol (p=0.034) in the intervention group compared to the control group. These findings suggest that implementing a 12-week program that includes dietary counseling, physical activity recommendations, and stress management techniques can effectively improve WHR, visceral fat components, and cardiovascular health among shift workers in an offshore petrochemical company.
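The primary outcome computation is straightforward: waist-to-hip ratio before and after the programme, with change scores compared between groups. The sketch below uses an independent-samples t-test on invented placeholder measurements, not the study's data or its exact statistical procedure.

```python
# Sketch: WHR change scores per group, compared with an independent t-test.
# All (waist, hip) measurements in cm are hypothetical.
from scipy import stats

def whr(waist_cm, hip_cm):
    """Waist-to-hip ratio: waist circumference divided by hip circumference."""
    return waist_cm / hip_cm

# (baseline, week-12) measurement pairs for each participant.
intervention_change = [whr(w2, h2) - whr(w1, h1) for (w1, h1), (w2, h2) in [
    ((102, 104), (96, 103)), ((98, 100), (93, 99)), ((105, 106), (99, 104)),
]]
control_change = [whr(w2, h2) - whr(w1, h1) for (w1, h1), (w2, h2) in [
    ((101, 103), (100, 103)), ((97, 99), (97, 98)), ((104, 105), (103, 104)),
]]

t, p = stats.ttest_ind(intervention_change, control_change)
print(round(t, 3), round(p, 3))
```

With real data one would also check assumptions (normality of change scores, equal variances) before trusting the p-value.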

Keywords: body composition, waist-hip-ratio, visceral fat, shift worker, work efficiency

Procedia PDF Downloads 60
5516 Different Processing Methods to Obtain a Carbon Composite Element for Cycling

Authors: Maria Fonseca, Ana Branco, Joao Graca, Rui Mendes, Pedro Mimoso

Abstract:

The present work is focused on the production of a carbon composite element for cycling through different techniques, namely blow-molding and high-pressure resin transfer molding (HP-RTM). The main objective of this work is to compare both processes for producing carbon composite elements for the cycling industry. It is well known that carbon composite components for cycling are produced mainly through blow-molding; however, this technique depends strongly on manual labour, resulting in a time-consuming production process. Comparatively, HP-RTM offers a more automated process, which should lead to higher production rates. Nevertheless, the elements produced through both techniques must be compared in order to assess whether the final products comply with the required standards of the industry. The main difference between the two techniques lies in the material used. Blow-molding uses carbon prepreg (carbon fibres pre-impregnated with a resin system), and the material is laid up by hand, piece by piece, on a mould or on a hard male tool. After that, the material is cured at a high temperature. In the HP-RTM technique, on the other hand, dry carbon fibres are placed in a mould, and resin is then injected at high pressure. After some research regarding the best material systems (prepregs and braids) and suppliers, an element (similar to a handlebar) was designed and constructed. The next step was to perform FEM simulations in order to determine the best layup of the composite material. The simulations were done for the prepreg material, and the obtained layup was transposed to the braids. The selected material for the blow-molding technique was a prepreg with T700 carbon fibre (24K) and an epoxy resin system. For HP-RTM, carbon fibre elastic UD tubes and ±45° braids were used, with both 3K and 6K filaments per tow, and the resin system was an epoxy as well.
After the simulations for the prepreg material, the optimized layup was [45°, -45°, 45°, -45°, 0°, 0°]. For HP-RTM, the transposed layup was [±45° (6K); 0° (6K); partial ±45° (6K); partial ±45° (6K); ±45° (3K); ±45° (3K)]. The mechanical tests showed that both elements can withstand the maximum load (in this case, 1000 N); however, the one produced through blow-molding can support higher loads (≈1300 N against ≈1100 N for HP-RTM). Regarding the fibre volume fraction (FVF), the HP-RTM element has a slightly higher value (>61%, compared to 59% for the blow-molding technique). Optical microscopy showed that both elements have a low void content. In conclusion, the elements produced using HP-RTM are comparable to the ones produced through blow-molding, both in mechanical testing and in visual aspect. Nevertheless, there is still room for improvement in the HP-RTM elements, since the layup of the braids and UD tubes could be optimized.
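The fibre volume fraction compared above can be estimated back-of-the-envelope from ply data: the "solid fibre" thickness per ply (areal weight over fibre density) divided by the cured ply thickness. The areal weight, density, and ply thickness below are typical values assumed for a T700-class prepreg, not the study's measured data.

```python
# Back-of-the-envelope fibre volume fraction (FVF) from ply data.
# All three input values are typical/assumed, not measured in the study.

fibre_areal_weight = 150.0  # g/m^2 of fibre per ply (assumed)
fibre_density = 1.80        # g/cm^3, typical for T700-class carbon fibre
ply_thickness = 0.140       # mm cured ply thickness (assumed)

# (g/m^2) / (g/cm^3) = cm^3/m^2, and 1 cm^3 spread over 1 m^2 is 0.001 mm,
# so divide by 1000 to get the solid-fibre thickness in mm.
solid_fibre_thickness_mm = fibre_areal_weight / fibre_density / 1000.0

fvf = solid_fibre_thickness_mm / ply_thickness
print(f"FVF = {fvf:.1%}")
```

With these assumed inputs the estimate lands near the 59-61% range reported, which is the usual sanity check against microscopy-based void and volume measurements.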

Keywords: HP-RTM, carbon composites, cycling, FEM

Procedia PDF Downloads 116
5515 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed through photo-interpretation at the stand level, which often results in a lack of precision for species identification. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 tall tree species (>17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For the individual tree crown segmentation, three canopy-height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five different model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77).
With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
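The global modelling step (many candidate variables reduced to a handful, then an SVM) can be sketched with scikit-learn. The data below are synthetic stand-ins for the 313 WorldView-3/LiDAR-derived variables and 11 species classes, so the accuracy printed is not comparable to the study's.

```python
# Sketch of the global approach: variable selection (313 -> 16) + SVM,
# on synthetic data standing in for crown-level spectral/LiDAR features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 300 tree crowns, 313 candidate variables, 11 "species" classes.
X, y = make_classification(n_samples=300, n_features=313, n_informative=30,
                           n_classes=11, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=16),  # univariate selection: 313 -> 16 variables
    SVC(kernel="rbf"),
)
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Placing the selector inside the pipeline matters: it is refit on each training fold, so the cross-validated accuracy is not inflated by selecting variables on the full dataset.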

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 117
5514 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics

Authors: L. Freeborn

Abstract:

Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.

Keywords: neuroimaging studies, research design, second language acquisition, task validity

Procedia PDF Downloads 117
5513 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite predicted specific vulnerabilities such as OS-command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and the identification of specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
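The study builds path-contexts over Java/C++ ASTs; as a language-agnostic illustration, the sketch below walks a Python function's AST with the standard-library ast module and pairs identifier/constant leaves with their root-to-leaf node-type paths. This is a toy approximation of Code2Vec-style path-contexts (which join two leaf paths through their common ancestor), not the study's extractor.

```python
# Toy path-context extraction over a Python AST (illustrative only).
import ast
import itertools

source = """
def greet(name):
    msg = "hello " + name
    return msg
"""

tree = ast.parse(source)

def leaf_paths(node, path=()):
    """Yield (leaf_token, path_of_node_types) for identifier/constant leaves."""
    path = path + (type(node).__name__,)
    if isinstance(node, ast.Name):
        yield (node.id, path)
    elif isinstance(node, ast.Constant):
        yield (repr(node.value), path)
    for child in ast.iter_child_nodes(node):
        yield from leaf_paths(child, path)

leaves = list(leaf_paths(tree))

# Pair leaves into (source token, path, target token) triples; here the
# "path" is just the first leaf's root path, a simplification of Code2Vec.
contexts = [(a, "|".join(pa), b)
            for (a, pa), (b, _pb) in itertools.combinations(leaves, 2)][:3]
for c in contexts:
    print(c)
```

Triples like these are what get hashed into a vocabulary and embedded; the classifier then sees a function as a bag of path-contexts rather than a token sequence.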

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 92
5512 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases

Authors: Mohammad A. Bani-Khaled

Abstract:

In this work, we use the discrete proper orthogonal decomposition (POD) transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this analysis will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, in both large-scale structures (aeronautical structures) and nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to simulate the dynamics numerically. In using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space, and these numerical databases contain information on the properties of the coupled dynamics. In order to extract the system's dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time proper orthogonal decomposition transform is a powerful tool for processing such databases and will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal to form a basis for a systematic study of coupled dynamics in structures of complex geometry.
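The core POD computation is an SVD of the snapshot matrix: columns are system states at successive time instants, left singular vectors are the POD modes, and squared singular values rank each mode's energy. The "database" below is a synthetic two-mode field, not FE output.

```python
# Minimal POD via SVD on a synthetic snapshot database.
import numpy as np

# Snapshot matrix: each column is the system state (e.g. nodal
# displacements) at one time instant.
x = np.linspace(0, 1, 100)           # 100 "spatial" points
t = np.linspace(0, 2 * np.pi, 80)    # 80 snapshots in time
snapshots = (np.outer(np.sin(np.pi * x), np.cos(t)) +
             0.1 * np.outer(np.sin(3 * np.pi * x), np.cos(3 * t)))

# POD modes = left singular vectors; singular values rank modal energy.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)

energy = s**2 / np.sum(s**2)
print("energy captured by first two modes:", round(energy[:2].sum(), 4))
```

Because the synthetic field is exactly rank two, the first two modes capture essentially all the energy; with real FE databases, the decay of this energy spectrum is what reveals how many coupled modes matter.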

Keywords: coupled dynamics, geometric complexity, proper orthogonal decomposition (POD), thin walled beams

Procedia PDF Downloads 406
5511 Effect of Post Circuit Resistance Exercise Glucose Feeding on Energy and Hormonal Indexes in Plasma and Lymphocyte in Free-Style Wrestlers

Authors: Miesam Golzadeh Gangraj, Younes Parvasi, Mohammad Ghasemi, Ahmad Abdi, Saeid Fazelifar

Abstract:

The purpose of the study was to determine the effect of glucose feeding on energy and hormonal indexes in plasma and lymphocytes immediately after wrestling-based techniques circuit exercise (WBTCE) in young male freestyle wrestlers. Sixteen wrestlers (weight = 75.45 ± 12.92 kg, age = 22.29 ± 0.90 years, BMI = 26.23 ± 2.64 kg/m²) were randomly divided into two groups: control (water) and glucose (2 g per kg of body weight). Blood samples were obtained before exercise, immediately after exercise, and at 90 minutes of the post-exercise recovery period. Glucose (2 g/kg of body weight, 1W/5V) and water (equal volume) solutions were given immediately after the second blood sampling. Data were analyzed using a repeated-measures ANOVA and a suitable post hoc test (LSD). A significant decrease was observed in lymphocyte glycogen immediately after exercise (P < 0.001). In the experimental group, lymphocyte glycogen concentration increased (P < 0.028) relative to the control group at 90 min post-exercise. Plasma glucose concentrations increased in both groups immediately after exercise (P < 0.05). Plasma insulin concentrations in both groups decreased immediately after exercise, but at 90 min after exercise its level was significantly increased only in the glucose group (P < 0.001). Our results suggest that the WBTCE protocol affected cellular energy sources and hormonal responses. Furthermore, glucose consumption can increase lymphocyte glycogen and improve energy availability within the cell.

Keywords: glucose feeding, lymphocyte, wrestling-based techniques circuit exercise

Procedia PDF Downloads 255
5510 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development of unconventional ship construction and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques of ship structures using the FE method for complex boundary conditions which are difficult to analyze by existing ship classification society rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal loads, linear static, dynamic, and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has developed better techniques for the calculation of natural frequencies and the different mode shapes of a ship structure, to avoid resonance both globally and locally. Over the past few years there has been a lot of development towards ideal design in the ship industry, solving complex engineering problems by employing the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. Historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
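The modal-analysis step mentioned above (natural frequencies and mode shapes from assembled stiffness and mass matrices) can be shown on the smallest possible FE model: a fixed-free axial bar with three elements. The material and geometry values are arbitrary steel-like numbers; a real ship model would have millions of degrees of freedom but the same generalized eigenproblem.

```python
# Toy FE modal analysis: 3-element axial bar, fixed at one end.
# Solve K phi = w^2 M phi for natural frequencies.
import numpy as np
from scipy.linalg import eigh

E, A, rho, L = 210e9, 0.01, 7850.0, 3.0   # steel-like bar, assumed values
n_el = 3
le = L / n_el
k_e = (E * A / le) * np.array([[1, -1], [-1, 1]])          # element stiffness
m_e = (rho * A * le / 6) * np.array([[2, 1], [1, 2]])      # consistent mass

n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
M = np.zeros((n_nodes, n_nodes))
for e in range(n_el):        # assemble overlapping 2x2 element matrices
    K[e:e+2, e:e+2] += k_e
    M[e:e+2, e:e+2] += m_e

# Fixed boundary condition at node 0: delete its row and column.
Kr, Mr = K[1:, 1:], M[1:, 1:]
w2, modes = eigh(Kr, Mr)                  # generalized eigenproblem
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print(np.round(freqs_hz, 1))
```

For this bar the analytical first frequency is c/(4L) with c = sqrt(E/rho), about 431 Hz, so the coarse mesh lands close to it; mesh refinement converges the higher modes the same way full ship models are refined around resonance-critical regions.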

Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis

Procedia PDF Downloads 122
5509 Modelling Conceptual Quantities Using Support Vector Machines

Authors: Ka C. Lam, Oluwafunmibi S. Idowu

Abstract:

Uncertainty in cost is a major factor affecting performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early predesign cost estimates. Hence, the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning is the aim of the current research. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used for constructing conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were discovered to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
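The modelling approach described above (support vector regression on design variables, with bootstrap resampling) can be sketched as follows. The predictors, the target relationship, and all numbers are synthetic placeholders, not the study's measured quantities.

```python
# Sketch: SVR predicting a conceptual quantity from design variables,
# with bootstrap resampling to gauge prediction variability.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120
footprint = rng.uniform(200, 2000, n)     # m^2, assumed predictor
floor_load = rng.uniform(2.0, 10.0, n)    # kN/m^2, assumed predictor
X = np.column_stack([footprint, floor_load])
# Hypothetical concrete quantity (m^3) with measurement noise.
y = 0.05 * footprint + 8.0 * floor_load + rng.normal(0, 5, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100))

# Bootstrap: refit on resampled projects, predict for one new design.
new_design = np.array([[1000.0, 6.0]])
preds = []
for _ in range(50):
    idx = rng.integers(0, n, n)           # resample rows with replacement
    model.fit(X[idx], y[idx])
    preds.append(model.predict(new_design)[0])

print(f"estimate: {np.mean(preds):.1f} m^3 (bootstrap sd {np.std(preds):.1f})")
```

The bootstrap spread is what turns a point estimate into something a planner can act on at the predesign stage, where input uncertainty dominates.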

Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression

Procedia PDF Downloads 199