Search results for: method of choice of the meters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20145

15075 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery

Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović

Abstract:

Capsaicin is a naturally occurring alkaloid extracted from the fruit of different Capsicum species. It has been employed topically to treat many conditions, such as rheumatoid arthritis, osteoarthritis, cancer pain, and nerve pain in diabetes. The high degree of pre-systemic metabolism of intragastric capsaicin and the short half-life of intravenously administered capsaicin make topical application of capsaicin advantageous. In this study, we evaluated differences in the dissolution characteristics of a commercially available 11 mg capsaicin patch at different dissolution rotation speeds. The patch area is 308 cm2 (22 cm x 14 cm); it contains 36 µg of capsaicin per square centimeter of adhesive. USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted using USP Apparatus 5 (n=6) on an ERWEKA DT800 dissolution tester (paddle type) with the addition of a disc. A 9 cm2 piece cut from the 308 cm2 patch was placed against a disc (delivery side up), retained with a stainless-steel screen, and exposed to 500 mL of phosphate buffer solution, pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and different rotation speeds (50 ± 5, 100 ± 5, and 150 ± 5 rpm). Aliquots of 5 mL were withdrawn at various time intervals (1, 4, 8, and 12 hours) and replaced with 5 mL of dissolution medium. Withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method was developed, optimized, and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v) at a flow rate of 0.9 mL/min, with an injection volume of 10 μL and a detection wavelength of 222 nm. The RP-LC method is simple, sensitive, and accurate and can be applied for fast (total chromatographic run time of 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, the relative difference in the dissolution rate of capsaicin after 12 hours increased with dissolution rotation speed (100 rpm vs. 50 rpm: 84.9 ± 11.3%; 150 rpm vs. 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, and 7 and a paddle-over-extraction-cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) can be considered a discriminatory test, able to point out the differences in the dissolution rate of capsaicin at different rotation speeds.
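
When aliquots are withdrawn and replaced with fresh medium, the cumulative amount released must be corrected for the drug removed at earlier time points. A minimal sketch of that bookkeeping in Python (the 500 mL vessel and 5 mL aliquot volumes come from the abstract; the concentration values are purely illustrative):

```python
import numpy as np

V_vessel = 500.0   # mL, dissolution medium volume (from the abstract)
V_sample = 5.0     # mL, aliquot withdrawn and replaced at each time point

def corrected_concentrations(measured):
    """Standard sampling correction: Cn_corr = Cn + (Vs/Vt) * sum(C1..Cn-1)."""
    measured = np.asarray(measured, dtype=float)
    prior = np.concatenate(([0.0], np.cumsum(measured)[:-1]))
    return measured + (V_sample / V_vessel) * prior

# Illustrative measured concentrations (ug/mL) at 1, 4, 8 and 12 h
print(corrected_concentrations([0.12, 0.31, 0.48, 0.57]))
```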

Keywords: capsaicin, in vitro, patch, RP-LC, transdermal

Procedia PDF Downloads 213
15074 Investigations into Effect of Neural Network Predictive Control of UPFC for Improving Transient Stability Performance of Multimachine Power System

Authors: Sheela Tiwari, R. Naresh, R. Jha

Abstract:

The paper presents an investigation into the effect of neural network predictive control of a UPFC on the transient stability performance of a multi-machine power system. The proposed controller consists of a neural network model of the test system, which is used to predict the future control inputs using the damped Gauss-Newton method with ‘backtracking’ as the line search method for step selection. The benchmark two-area, four-machine system that mimics the behavior of large power systems is taken as the test system for the study and is subjected to three-phase short-circuit faults at different locations over a wide range of operating conditions. The simulation results clearly establish the robustness of the proposed controller to the fault location, an increase in the critical clearing time for the circuit breakers, and improved damping of the power oscillations as compared to the conventional PI controller.
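
For readers unfamiliar with the optimizer named above, here is a generic damped Gauss-Newton loop with an Armijo backtracking line search, shown on a toy curve-fitting problem (an illustrative sketch, not the authors' controller code):

```python
import numpy as np

def damped_gauss_newton(residual, jacobian, x0, tol=1e-8, max_iter=50,
                        beta=0.5, c=1e-4):
    """Minimize 0.5*||r(x)||^2 by Gauss-Newton steps damped by backtracking."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r                                    # gradient of 0.5*||r||^2
        step = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton direction
        t, f0 = 1.0, 0.5 * r @ r
        # backtrack until the Armijo sufficient-decrease condition holds
        while 0.5 * np.sum(residual(x + t * step)**2) > f0 + c * t * (g @ step):
            t *= beta
            if t < 1e-12:
                break
        x = x + t * step
        if np.linalg.norm(g) < tol:
            break
    return x

# Example: fit y = exp(a*t) to data generated with a = 1.7
t_data = np.linspace(0, 1, 20)
y_data = np.exp(1.7 * t_data)
res = lambda a: np.exp(a[0] * t_data) - y_data
jac = lambda a: (t_data * np.exp(a[0] * t_data)).reshape(-1, 1)
print(damped_gauss_newton(res, jac, [0.5]))   # converges to ~[1.7]
```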

Keywords: identification, neural networks, predictive control, transient stability, UPFC

Procedia PDF Downloads 361
15073 Statistical Optimization of Vanillin Production by Pycnoporus cinnabarinus 1181

Authors: Swarali Hingse, Shraddha Digole, Uday Annapure

Abstract:

The present study investigates the biotransformation of ferulic acid to vanillin by Pycnoporus cinnabarinus and its optimization using the one-factor-at-a-time method as well as a statistical approach. The effect of various physicochemical parameters and medium components was studied using the one-factor-at-a-time method. Screening of the significant factors was carried out using an L25 Taguchi orthogonal array, and the selected significant factors were then further optimized using response surface methodology (RSM). The significant media components obtained using the Taguchi L25 orthogonal array were glucose, KH2PO4, and yeast extract. Further, a Box-Behnken design was used to investigate the interactive effects of the three most significant media components. The final medium obtained after optimization using RSM, containing glucose (34.89 g/L), diammonium tartrate (1 g/L), yeast extract (1.47 g/L), MgSO4•7H2O (0.5 g/L), KH2PO4 (0.15 g/L), and CaCl2•2H2O (20 mg/L), amplified vanillin production from 30.88 mg/L to 187.63 mg/L.
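
For readers unfamiliar with the design, a three-factor Box-Behnken layout in coded units and a full quadratic model fit can be sketched as follows (the responses below are random placeholders, not the study's vanillin yields):

```python
import numpy as np
from itertools import combinations

def box_behnken_3(centers=3):
    """Three-factor Box-Behnken design: +/-1 on each factor pair, third at 0."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0, 0, 0]
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0, 0, 0]] * centers          # replicated center points
    return np.array(runs, float)           # 12 edge runs + centers

def model_matrix(X):
    """Columns for the full quadratic model: 1, x_i, x_i*x_j, x_i^2."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    cols += [X[:, i]**2 for i in range(3)]
    return np.column_stack(cols)

X = box_behnken_3()
y = np.random.default_rng(1).normal(100, 5, len(X))   # placeholder responses
beta, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)
print(beta)   # fitted second-order coefficients
```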

Keywords: ferulic acid, Pycnoporus cinnabarinus, response surface methodology, vanillin

Procedia PDF Downloads 367
15072 Numerical Investigation on Tsunami Suppression by Submerged Breakwater

Authors: Tasuku Hongo, Hiroya Mamori, Naoya Fukushima, Makoto Yamamoto

Abstract:

A tsunami induced by an earthquake causes severe disasters in coastal areas. As is well known, the huge 2011 earthquake in Japan induced a huge tsunami, which caused serious damage in the Tohoku and Kanto areas. Although breakwaters had been constructed on the coast to suppress tsunamis, they collapsed, resulting in severe disasters. In order to reduce tsunami disasters, we propose submerged breakwaters and investigate their effect on tsunami behavior by means of numerical simulations. In order to reproduce the tsunami and capture its interface, we employed a moving particle method, which is one of the Lagrangian methods. Unlike ordinary breakwaters, the present breakwater is located under the sea. An effective installation condition is investigated by a parametric study. The results show that the submerged breakwater can decrease the wave force of the tsunami. Moreover, the combination of two submerged breakwaters can reduce the tsunami safely and effectively. Therefore, the present results give the effective installation condition of the submerged breakwaters and its mechanism.
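
A minimal sketch of the kernel and particle number density used by standard MPS-type moving particle methods to discretize operators and detect the free surface (the effective radius r_e of roughly 2.1 times the particle spacing is a typical assumption, not necessarily the paper's value):

```python
import numpy as np

def mps_weight(r, r_e):
    """Standard MPS kernel: w(r) = r_e/r - 1 for 0 < r < r_e, else 0."""
    r = np.asarray(r, float)
    return np.where((r > 0) & (r < r_e),
                    r_e / np.maximum(r, 1e-12) - 1.0, 0.0)

def number_density(positions, i, r_e):
    """n_i = sum over j of w(|x_j - x_i|); w(0) = 0 excludes the particle itself.
    A low n_i flags a free-surface particle in MPS simulations."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    return mps_weight(d, r_e).sum()

pts = np.random.default_rng(0).random((200, 2)) * 0.1  # particles in a 0.1 m box
print(number_density(pts, i=0, r_e=2.1 * 0.005))       # spacing ~5 mm assumed
```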

Keywords: coastal area, tsunami force reduction, MPS method, submerged breakwater

Procedia PDF Downloads 152
15071 Effect of the Initial Billet Shape Parameters on the Final Product in a Backward Extrusion Process for Pressure Vessels

Authors: Archana Thangavelu, Han-Ik Park, Young-Chul Park, Joon-Hong Park

Abstract:

In this numerical study, we propose a method for the evaluation of the backward extrusion process of a pressure vessel made of steel. Demand for lighter and stiffer products has been increasing in recent years, especially in automotive engineering. Through detailed finite element analysis, effective stress, strain, and velocity profiles have been obtained within the optimal range. The process design of a forward and backward extruded axisymmetric part has been studied. Forging is mainly carried out because forged products are highly reliable and possess superior mechanical properties compared to normal products. Computational simulations of 3D hot forging with various billet dimensions are performed, and weight optimization is carried out using the Taguchi Orthogonal Array (OA) technique. The technique used in this study can be applied to newly developed materials to investigate their forgeability for much more complicated shapes in the closed hot-die forging process.
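
The Taguchi analysis step can be illustrated with a standard L9 array and a smaller-the-better signal-to-noise ratio (the array assignment and response values are illustrative, not the study's billet data):

```python
import numpy as np

# First three columns of the standard L9(3^4) orthogonal array
L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
               [2, 1, 2], [2, 2, 3], [2, 3, 1],
               [3, 1, 3], [3, 2, 1], [3, 3, 2]])
weight = np.array([5.2, 5.0, 4.8, 5.1, 4.7, 4.9, 4.6, 5.0, 4.5])  # kg, made up

sn = -10.0 * np.log10(weight**2)        # smaller-the-better S/N ratio
for factor in range(3):
    means = [sn[L9[:, factor] == lvl].mean() for lvl in (1, 2, 3)]
    print(f"factor {factor}: best level = {int(np.argmax(means)) + 1}")
```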

Keywords: backward extrusion, hot forging, optimization, finite element analysis, Taguchi method

Procedia PDF Downloads 296
15070 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.
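
As a generic illustration of the meta-classifier idea described above (with placeholder data standing in for extracted brain-signal features, not EEG recordings), base learners can be combined by stacking in scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic features standing in for brain-signal features
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Heterogeneous base classifiers combined by a logistic-regression meta-classifier
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression())
print(cross_val_score(stack, X, y, cv=5).mean())   # ensemble accuracy estimate
```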

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 62
15069 Low Density Parity Check Codes

Authors: Kassoul Ilyes

Abstract:

The field of error-correcting codes has been revolutionized by the introduction of iteratively decoded codes. Among these, LDPC codes are now a preferred solution thanks to their remarkable performance and low complexity. The binary version of LDPC codes showed even better performance, although its decoding introduced greater complexity. This thesis studies the performance of binary LDPC codes using simplified weighted decisions. Information is transported between a transmitter and a receiver by digital transmission systems, either by propagating over a radio channel or by using a transmission medium such as a transmission line. The purpose of the transmission system is then to carry the information from the transmitter to the receiver as reliably as possible. These codes did not initially generate much interest within the coding-theory community; this neglect lasted until the introduction of Turbo codes and the iterative principle, after which it was proposed to adopt Pearl's Belief Propagation (BP) algorithm for decoding them. Subsequently, Luby introduced irregular LDPC codes, characterized by an irregular parity-check matrix. Finally, we study simplifications of binary LDPC codes and propose a method that simplifies the exact calculation of the a posteriori probability (APP), leading to a simpler implementation of the system.
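
As a toy illustration of iterative decoding against a parity-check matrix, here is a hard-decision bit-flipping decoder on a small code (far simpler than the weighted/BP decoding studied in the thesis; the matrix and received word are made up):

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])   # small parity-check matrix

def bit_flip_decode(r, H, max_iter=10):
    """Flip, each iteration, the bit involved in the most failed checks."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c                  # all parity checks satisfied
        fails = H.T @ syndrome        # per-bit count of unsatisfied checks
        c[np.argmax(fails)] ^= 1
    return c

received = np.array([1, 1, 1, 0, 1, 1])   # codeword [1,1,0,0,1,1] with bit 2 flipped
print(bit_flip_decode(received, H))       # recovers [1, 1, 0, 0, 1, 1]
```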

Keywords: LDPC, parity check matrix, 5G, BER, SNR

Procedia PDF Downloads 141
15068 Techno-Economic Study of a Rapeseed-Based Biorefinery Using High Voltage Electrical Discharges and Ultrasounds as Pretreatment Technologies

Authors: Marwa Brahim, Nicolas Brosse, Nadia Boussetta, Nabil Grimi, Eugene Vorobiev

Abstract:

Rapeseed is an established crop in France, mainly dedicated to oil production. However, the economic potential of the residues from this industry (rapeseed hulls, rapeseed cake, rapeseed straw, etc.) has not been fully exploited; currently, only low-grade applications are found on the market. As a consequence, it was deemed of interest to develop a technological platform aiming to convert rapeseed residues into value-added products. Specifically, the focus is on the conversion of rapeseed straw into valuable molecules (e.g., lignin, glucose). Existing pretreatment technologies have many drawbacks, mainly the production of sugar-degradation products that limit the effectiveness of the saccharification and fermentation steps in the overall scheme of the lignocellulosic biorefinery. In addition, the viability of fractionation strategies is a challenge in an increasingly regulated environmental context. Hence the need to find cleaner alternatives of comparable efficiency by implementing physical phenomena that can destabilize the structural integrity of biomass without necessarily using chemical solvents. To meet increasingly stringent environmental standards, the present work studies new pretreatment strategies involving lower consumption of chemicals and reduced treatment severity. These strategies consist of coupling physical treatments, either high voltage electrical discharges or ultrasounds, with conventional chemical pretreatments (soda and organosolv). Ultrasound treatment is based on the cavitation phenomenon, while high voltage electrical discharges cause an electrical breakdown accompanied by many secondary phenomena. The choice of process was based on a technological feasibility study taking into account the economic profitability of the whole chain after product valorization. Priority was given to the valorization of sugars into bioethanol and the sale of lignin.

Keywords: high voltage electrical discharges, organosolv, pretreatment strategies, rapeseed straw, soda, ultrasounds

Procedia PDF Downloads 349
15067 The Effect of Implant Design on the Height of Inter-Implant Bone Crest: A 10-Year Retrospective Study of the Astra Tech Implant and the Brånemark Implant

Authors: Daeung Jung

Abstract:

Background: For patients with missing teeth, multiple-implant restoration is widely used and often inevitable. To increase its survival rate, it is important to understand the influence of different implant designs on inter-implant crestal bone resorption. Several implant systems are designed to minimize the loss of crestal bone, and the Astra Tech and Brånemark implants are two of them. Aim/Hypothesis: The aim of this 10-year study was to compare the height of the inter-implant bone crest in two implant systems, the Astra Tech and the Brånemark implant system. Material and Methods: In this retrospective study, 40 consecutively treated patients were included: 23 patients with 30 sites for the Astra Tech system and 17 patients with 20 sites for the Brånemark system. The implant restorations comprised splinted crowns in partially edentulous patients. Radiographs were taken immediately after the first-stage surgery, at impression making, at prosthesis delivery, and annually after loading. The lateral distance from implant to bone crest and the inter-implant distance were measured, and the crestal bone height was measured from the implant shoulder to the first bone contact. Calibrations were performed in ImageJ using the known thread-pitch distance for vertical measurements and the known diameter of the abutment or fixture for horizontal measurements. Results: After 10 years, patients treated with the Astra Tech implant system demonstrated less inter-implant crestal bone resorption when the implants had a distance of 3 mm or less between them. For implants more than 3 mm apart, however, there appeared to be no statistically significant difference in crestal bone loss between the two systems. Conclusion and clinical implications: When partially edentulous patients are planned to receive more than two implants, the inter-implant distance is one of the most important factors to consider. If sufficient inter-implant distance cannot be ensured, implants with a smaller micro-gap at the fixture-abutment junction, a less traumatic second-stage surgical approach, and an adequate surface topography would be appropriate options to minimize inter-implant crestal bone resorption.
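
The radiographic calibration described above amounts to a simple scale factor derived from a feature of known size; a minimal sketch (all values, including the thread pitch, are hypothetical):

```python
# Pixel-to-millimetre calibration from a known thread-pitch distance
thread_pitch_mm = 0.6          # known fixture thread pitch (assumed value)
thread_pitch_px = 14.2         # pitch as measured on the radiograph in ImageJ
scale = thread_pitch_mm / thread_pitch_px   # mm per pixel (vertical)

bone_level_px = 52.0           # implant shoulder to first bone contact, pixels
print(f"crestal bone level ~ {bone_level_px * scale:.2f} mm")
```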

Keywords: implant design, crestal bone loss, inter-implant distance, 10-year retrospective study

Procedia PDF Downloads 145
15066 Toward a Characteristic Optimal Power Flow Model for Temporal Constraints

Authors: Zongjie Wang, Zhizhong Guo

Abstract:

While the regular optimal power flow model focuses on a single time scan, the optimization of power systems is typically intended for a time duration with respect to a desired objective function. In this paper, a temporal optimal power flow model for a time period is proposed. To reduce the computation burden needed for calculating temporal optimal power flow, a characteristic optimal power flow model is proposed, which employs different characteristic load patterns to represent the objective function and security constraints. A numerical method based on the interior point method is also proposed for solving the characteristic optimal power flow model. Both the temporal optimal power flow model and characteristic optimal power flow model can improve the systems’ desired objective function for the entire time period. Numerical studies are conducted on the IEEE 14 and 118-bus test systems to demonstrate the effectiveness of the proposed characteristic optimal power flow model.
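
The flavour of the optimization can be illustrated with a toy dispatch problem: minimize generation cost for a single characteristic load level subject to a power balance. This sketch uses SciPy's general constrained solver and made-up coefficients; a real OPF adds network flow and voltage constraints and, as in the paper, a dedicated interior point method:

```python
import numpy as np
from scipy.optimize import minimize

cost_a = np.array([0.010, 0.012, 0.008])    # quadratic cost coefficients (made up)
cost_b = np.array([10.0, 9.0, 11.0])        # linear cost coefficients (made up)
p_min, p_max = np.array([10, 10, 20]), np.array([100, 120, 150])
demand = 210.0                               # characteristic load level, MW

objective = lambda p: np.sum(cost_a * p**2 + cost_b * p)
balance = {"type": "eq", "fun": lambda p: p.sum() - demand}

res = minimize(objective, x0=np.full(3, demand / 3),
               bounds=list(zip(p_min, p_max)), constraints=balance)
print(res.x, res.fun)   # optimal generation schedule and total cost
```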

Keywords: optimal power flow, time period, security, economy

Procedia PDF Downloads 435
15065 Generating High-Frequency Risk Factor Collections with Transformer

Authors: Wenyan Xu, Rundong Wang, Chen Li, Yonghong Hu, Zhonghua Lu

Abstract:

In the field of quantitative trading, it is common to find patterns in short-term volatile trends of the market. These patterns are known as High-Frequency (HF) risk factors, serving as effective indicators of future stock price volatility. In the past, however, these risk factors were usually generated by traditional financial models, and their validity rested heavily on manually added domain-specific knowledge rather than on extensive market data. Inspired by symbolic regression (SR), the task of inferring mathematical laws from existing data, we treat the extraction of formulaic risk factors from high-frequency trading (HFT) market data as an SR task. In this paper, we challenge the procedure of manually constructing risk factors and propose an end-to-end methodology, the Intraday Risk Factor Transformer (IRFT), to directly predict full formulaic factors, constants included. Specifically, we utilize a hybrid symbolic-numeric vocabulary where symbolic tokens denote operators/stock features and numeric tokens denote constants. We then train a Transformer model on the HFT dataset to directly generate complete formulaic HF risk factors without relying on a skeleton, i.e., a parametric function using a pre-defined list of operators, typically the math operations (+, ×, /) and functions (√x, log x, cos x), which determines the general shape of the stock volatility law up to a choice of constants, e.g., f(x) = tan(ax + b), where x is the stock price. We further refine the predicted constants (a, b) using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, with the predictions as informed guesses, to mitigate non-linearity issues. Compared to the 10 approaches in SRBench, a living benchmark for SR, IRFT gains a 30% excess investment return on the HS300 and S&P 500 datasets, with inference times orders of magnitude faster than theirs in HF risk factor mining tasks.
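
The constant-refinement step can be sketched using the abstract's own example skeleton f(x) = tan(ax + b): take the model-predicted constants as the initial guess and polish them with BFGS (synthetic data below; this is an illustration of the idea, not the IRFT code):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.tan(0.9 * x + 0.2) + rng.normal(0, 0.01, x.size)  # synthetic target

def mse(params):
    a, b = params
    return np.mean((np.tan(a * x + b) - y) ** 2)

# Predicted constant tokens act as the informed initial guess
res = minimize(mse, x0=[1.0, 0.1], method="BFGS")
print(res.x)   # refined (a, b), close to the true (0.9, 0.2)
```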

Keywords: transformer, factor-mining language model, high-frequency risk factor collections

Procedia PDF Downloads 12
15064 Numerical Investigation on the Interior Wind Noise of a Passenger Car

Authors: Liu Ying-jie, Lu Wen-bo, Peng Cheng-jian

Abstract:

With the development of automotive technology and the electric vehicle, wind noise has become the main source of interior noise. The main transfer paths through which the exterior excitation is transmitted into the cabin are the greenhouse panels and side windows. Accurately simulating the wind noise transmitted into the vehicle in the early development stage can be very challenging. The basic methodology of this study is based on the Lighthill analogy: the exterior flow field around a passenger car was first computed using unsteady Computational Fluid Dynamics (CFD), and then a Finite Element Method (FEM) was used to compute the interior acoustic response. The major findings of this study include: 1) the Sound Pressure Level (SPL) response at the driver's ear locations is mainly induced by turbulent pressure fluctuation; 2) peaks were found over the full frequency range. It is found that the methodology used in this study can predict the interior wind noise induced by exterior aerodynamic excitation in industry.

Keywords: wind noise, computational fluid dynamics, finite element method, passenger car

Procedia PDF Downloads 153
15063 Numerical Analysis of Fire Performance of Timber Structures

Authors: Van Diem Thi, Mourad Khelifa, Mohammed El Ganaoui, Yann Rogaume

Abstract:

An efficient numerical method has been developed to incorporate the effects of heat transfer in timber panels on partition walls exposed to real building fires. The procedure has been added to the software package Abaqus/Standard as a user-defined subroutine (UMATHT) and has been verified using both time- and spatially dependent heat fluxes in two- and three-dimensional problems. The aim is to contribute to the development of the simulation tools needed to assist structural engineers and fire-testing laboratories in technical assessment exercises. The presented method can also be used during the development stages of building components to optimize performance in real fire conditions. The accuracy of the thermal properties used and of the finite element models was validated by comparing the predicted results with three different fire tests available in the literature. It was found that the model calibrated to results from standard fire conditions provided reasonable predictions of temperatures within assemblies exposed to real building fires.
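
To make the heat-transfer ingredient concrete, here is a minimal explicit finite-difference sketch of 1D transient conduction with temperature-dependent conductivity (property values, boundary treatment, and the fire-side temperature ramp are all illustrative assumptions, not the UMATHT material model):

```python
import numpy as np

L, nx = 0.02, 41                       # 20 mm panel, 41 grid points
dx = L / (nx - 1)
rho_c = 450.0 * 1530.0                 # density * specific heat, J/(m^3 K), assumed
k = lambda T: 0.12 + 0.0002 * T        # W/(m K), conductivity rises with temperature
dt = 0.05                              # s, satisfies the explicit stability limit
T = np.full(nx, 20.0)                  # deg C, initial state; back face held at 20

for step in range(12000):              # 10 minutes of fire exposure
    T[0] = 20.0 + 500.0 * min(step * dt / 300.0, 1.0)  # ramped fire-side temperature
    q = k(0.5 * (T[1:] + T[:-1])) * np.diff(T) / dx    # fluxes at cell interfaces
    T[1:-1] += dt / (rho_c * dx) * np.diff(q)          # interior energy balance
print(T[::10])                         # temperature profile after 10 min
```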

Keywords: timber panels, heat transfer, thermal properties, standard fire tests

Procedia PDF Downloads 323
15062 Pneumoperitoneum Creation Assisted with Optical Coherence Tomography and Automatic Identification

Authors: Eric Yi-Hsiu Huang, Meng-Chun Kao, Wen-Chuan Kuo

Abstract:

For every laparoscopic surgery, safe pneumoperitoneum creation (gaining access to the peritoneal cavity) is the first and essential step. However, closed pneumoperitoneum is usually obtained by blind insertion of a Veress needle into the peritoneal cavity, which carries potential risks such as bowel and vascular injury. Until now, there has been no definite measure to visually confirm the position of the needle tip inside the peritoneal cavity. Therefore, this study established an image-guided Veress needle method by combining a fiber probe with optical coherence tomography (OCT). An algorithm was also proposed for determining the exact location of the needle tip through the acquisition of OCT images. Our method not only generates a series of “live” two-dimensional (2D) images during the needle puncture toward the peritoneal cavity but can also eliminate operator variation in image judgment, thus improving peritoneal access safety. This study was approved by the Ethics Committee of Taipei Veterans General Hospital (Taipei VGH IACUC 2020-144). A total of 2400 in vivo OCT images, independent of each other, were acquired from experiments of forty peritoneal punctures on two piglets. Characteristic OCT image patterns could be observed during the puncturing process. The ROC curve demonstrates the discrimination capability of the classifier built on these quantitative image features: the accuracy of the classifier for determining inside vs. outside of the peritoneal cavity was 98% (AUC = 0.98). In summary, the present study demonstrates the ability of the combination of our proposed automatic identification method and OCT imaging to automatically and objectively identify the location of the needle tip. OCT images translate the blind closed technique of peritoneal access into a visualized procedure, thus improving peritoneal access safety.
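
The reported discrimination can be illustrated with the standard ROC/AUC computation on a classifier score (toy labels and scores below, not the study's OCT features):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])       # 1 = needle tip inside cavity
scores = np.array([0.1, 0.3, 0.2, 0.4, 0.8, 0.7, 0.9, 0.6])  # classifier scores

fpr, tpr, thresholds = roc_curve(labels, scores)  # points along the ROC curve
print("AUC =", roc_auc_score(labels, scores))     # 1.0 for this toy data
```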

Keywords: pneumoperitoneum, optical coherence tomography, automatic identification, veress needle

Procedia PDF Downloads 114
15061 Analysis of Thermal Effect on Functionally Graded Micro-Beam via Mixed Finite Element Method

Authors: Cagri Mollamahmutoglu, Ali Mercan, Aykut Levent

Abstract:

Studies concerning microstructures are becoming more important as the utilization of various micro-electro-mechanical systems (MEMS) increases. Thus, in recent years, thermal buckling and vibration analyses of microstructures have been the subject of many investigations utilizing different numerical methods. In this study, thermal effects on the mechanical response of a functionally graded (FG) Timoshenko micro-beam are presented in the framework of a mixed finite element formulation. Size effects are taken into consideration via the modified couple stress theory. The mixed formulation is based on a functional which is derived systematically via the Gâteaux differential. After the resolution of all field equations of the beam, a potential operator is carefully constructed, and this operator is then used to construct the functional. The usual procedures of finite element approximation are utilized to derive the mixed finite element equations once the functional is obtained. The resulting finite element formulation allows the use of C₀-type simple linear shape functions and avoids the shear-locking phenomenon, which is a common shortcoming of displacement-based formulations of moderately thick beams. The developed numerical scheme is used to obtain the effects of thermal loads on the static bending, free vibration, and buckling of FG Timoshenko micro-beams for different power-law parameters, aspect ratios, and boundary conditions. The versatility of the mixed formulation is demonstrated against other numerical methods such as the generalized differential quadrature method (GDQM). Another attractive property of the formulation is that it allows direct calculation of the contribution of micro effects to the overall mechanical response.

Keywords: micro-beam, functionally graded materials, thermal effect, mixed finite element method

Procedia PDF Downloads 120
15060 All Solution-Processed Organic Light Emitting Diode with Low Melting Point Alloy Encapsulation

Authors: Geon Bae, Cheol Hee Moon

Abstract:

Organic Light Emitting Diodes (OLEDs) are being developed rapidly as next-generation displays due to their self-luminous and flexible characteristics. OLEDs are highly susceptible to moisture and oxygen due to their structural properties, thus requiring a high level of encapsulation technology. Recently, encapsulation technologies such as Thin Film Encapsulation (TFE) have been developed for OLEDs, but they are not perfect at preventing moisture permeation at the sides. In this study, we propose an OLED encapsulation method using a Low Melting Point Alloy (LMPA). The LMPA line was designed as a square box on the outer edge of the device and was formed by the screen-printing method. To determine whether the LMPA has an effect on the OLED, we fabricated solution-processed OLEDs with a square-shaped LMPA line and evaluated the I-V-L characteristics of the OLEDs. The resistance characteristic of the LMPA line was also observed by repeatedly bending the line. It is expected that LMPA encapsulation will have a great advantage in shortening process time and reducing cost.

Keywords: OLED, encapsulation, LMPA, solution process

Procedia PDF Downloads 234
15059 Life Cycle Analysis of the Antibacterial Gel Product Using ISO 14040 and ReCiPe 2016 Method

Authors: Pablo Andres Flores Siguenza, Noe Rodrigo Guaman Guachichullca

Abstract:

Sustainable practices have received increasing attention from academics and companies in recent decades due to, among many factors, the market advantages they generate, global commitments, and policies aimed at reducing greenhouse gas emissions, addressing resource scarcity, and rethinking waste management. The search for ways to promote sustainability leads industries to abandon classical methods and resort to innovative strategies, which in turn are based on quantitative analysis methods and tools such as life cycle analysis (LCA). LCA is the basis for sustainable production and consumption, since it analyzes objectively, methodically, systematically, and scientifically the environmental impact caused by a process or product during its entire life cycle. The objective of this study is to develop an LCA of the antibacterial gel product throughout its entire supply chain (SC) under the ISO 14044 methodology, with the help of the GaBi software and the ReCiPe 2016 method. The case-study product was selected based on its relevance in the context of the COVID-19 pandemic and the exponential increase in its production. For the development of the LCA, data from a Mexican company are used, and three scenarios are defined to obtain the midpoint and endpoint environmental impacts, both by phase and globally. Among the results, the most prominent environmental impact categories are climate change, fossil fuel depletion, and terrestrial ecotoxicity, and the stage that generates the most pollution in the entire SC is the extraction of raw materials. The study serves as a basis for the development of different sustainability strategies, demonstrates the usefulness of an LCA, and agrees with different authors on the role and importance of this methodology in sustainable development.

Keywords: sustainability, sustainable development, life cycle analysis, environmental impact, antibacterial gel

Procedia PDF Downloads 34
15058 Shock and Particle Velocity Determination from Microwave Interrogation

Authors: Benoit Rougier, Alexandre Lefrancois, Herve Aubert

Abstract:

Microwave interrogation in the range 10-100 GHz is identified as an advanced technique to investigate shock and particle velocity measurements simultaneously. However, it requires an understanding of electromagnetic wave propagation in multi-layered moving media. Existing models limit their approach to waveguides or evaluate the velocities with a fitting method, thereby restricting the domain of validity and the precision of the results. Moreover, few permittivity data for high explosives at these frequencies under dynamic compression have been reported. In this paper, shock and particle velocities are computed concurrently for steady and unsteady shocks in various inert and reactive materials, via a propagation model based on Doppler shifts and signal amplitude. The refractive index of the material under compression is also calculated. From experimental data processing, it is demonstrated that the Hugoniot curve can be evaluated. The comparison with published results proves the accuracy of the proposed method. This microwave interrogation technique seems promising for studies of shock and detonation waves.
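
The core kinematic relation is the Doppler shift of a wave reflected from an interface moving inside a dielectric, v = f_D c / (2 n f0). A sketch with made-up values (the carrier frequency lies within the 10-100 GHz band mentioned above; the refractive index and shift are assumptions):

```python
# Interface velocity from a measured Doppler shift (illustrative values only)
c0 = 2.998e8        # m/s, speed of light in vacuum
f0 = 94e9           # Hz, interrogation frequency (assumed, within 10-100 GHz)
n = 1.6             # refractive index of the compressed material (assumed)
f_doppler = 5.0e6   # Hz, measured Doppler shift (illustrative)

v = f_doppler * c0 / (2.0 * n * f0)           # m/s
print(f"interface velocity ~ {v:.0f} m/s")    # ~5 km/s, shock-like speed
```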

Keywords: electromagnetic propagation, experimental setup, Hugoniot measurement, shock propagation

Procedia PDF Downloads 202
15057 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral-face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames from emotion classification, would save computational power. In this work, we propose a lightweight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
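
Since the keywords point to Local Binary Pattern histograms, here is a minimal sketch of the LBP texture descriptor for a patch around a key point (a random patch stands in for a face region; the method's actual statistical model is richer):

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Random 32x32 patch standing in for the neighborhood of a KE point
patch = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(np.uint8)

P, R = 8, 1                                  # 8 neighbours at radius 1
lbp = local_binary_pattern(patch, P, R, method="uniform")  # codes 0..P+1
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
print(hist)   # normalized LBP histogram describing the patch texture
```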

Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model

Procedia PDF Downloads 328
15056 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations

Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra

Abstract:

The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak) or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could result in a misrepresentation of the absolute magnitude of force generated by the muscle and impact the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS) and the bioelectric methods used were normalization to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability were compared between disparate cohorts of OLD (76.6 yrs N=11) and YOUNG (26.6 yrs N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb. EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase averaged into 16 bins (phases) representing the gait cycle with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson’s correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r =0.86 to r=1.00 with p<0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated measures ANOVA showed a main effect for age in MG for EMGPeak but no other main effects were observed. Interactions between age*phase of EMG amplitude between YOUNG and OLD with each method resulted in different statistical interpretation between methods. EMGTS normalization characterized the fewest differences (four phases across all 5 muscles) while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that coefficient of variation, the representation of inter-individual variability, was greatest for EMGTS and lowest for EMGMean while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization would retain inter-individual variability which may be desirable, however, it also suggests that even when large differences are expected, a larger sample size may be required to observe the differences. Our findings clearly indicate that interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
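
A compact sketch of the two bioelectric normalizations and the 16-bin phase averaging described above (synthetic envelope; EMGTS would additionally require the force-transducer recording at the target torque, which is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic rectified EMG envelope over one averaged gait cycle (1600 samples)
envelope = np.abs(np.sin(np.linspace(0, np.pi, 1600))) + rng.normal(0, 0.02, 1600)

emg_peak = envelope / envelope.max()     # EMGPeak: normalize to dynamic peak
emg_mean = envelope / envelope.mean()    # EMGMean: normalize to dynamic mean

bins = np.array_split(emg_peak, 16)      # 16 phases of the gait cycle
phase_profile = np.array([b.mean() for b in bins])
cv = phase_profile.std() / phase_profile.mean()   # coefficient of variation
print(phase_profile.round(3), round(cv, 3))
```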

Keywords: electromyography, EMG normalization, functional EMG, older adults

Procedia PDF Downloads 76
15055 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data

Authors: Fanqiang Kong, Chending Bian

Abstract:

Sparse unmixing is a promising semisupervised approach that assumes the observed pixels of a hyperspectral image can be expressed as a linear combination of only a few pure spectral signatures (endmembers) from an available spectral library. However, sparse unmixing still faces a great challenge in finding the optimal subset of endmembers for the observed data from a large standard spectral library without considering spatial information. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, a non-local simultaneous sparse representation method for endmember selection is used to find the optimal subset of endmembers for each set of similar image patches in the hyperspectral image. Then, the non-local means method, as a regularizer for abundance estimation, is used to exploit the non-local self-similarity of the abundance image. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy.
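
The sparse-regression core of such unmixing can be sketched with an l1-regularized non-negative fit of one pixel against a synthetic library (this shows only the per-pixel step; NLSSU's non-local grouping and non-local-means regularizer are omitted):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(100, 40)))      # library: 100 bands x 40 signatures
abund = np.zeros(40)
abund[[3, 17]] = [0.6, 0.4]                 # two active endmembers, made up
pixel = A @ abund + rng.normal(0, 0.01, 100)

lasso = Lasso(alpha=0.01, positive=True)    # sparsity + non-negativity
lasso.fit(A, pixel)
print(np.nonzero(lasso.coef_ > 1e-3)[0])    # recovered endmember indices
```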

Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means

Procedia PDF Downloads 233
15054 A Pedagogical Case Study on Consumer Decision Making Models: A Selection of Smart Phone Apps

Authors: Yong Bum Shin

Abstract:

This case focuses on the weighted additive difference, conjunctive, disjunctive, and elimination-by-aspects methodologies in consumer decision-making models, and on the simple additive weighting (SAW) approach in the multi-criteria decision-making (MCDM) area. Most decision-making models illustrate that the rank reversal phenomenon is unpreventable. This paper shows that rank reversal occurs in popular managerial methods such as weighted additive difference (WAD), the conjunctive method, the disjunctive method, and elimination by aspects (EBA), as well as in MCDM methods such as simple additive weighting (SAW), and finally presents the unified commensurate multiple (UCM) model, which successfully addresses these rank reversal problems found in the most popular MCDM methods in the decision-making area.
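
The SAW mechanics are easy to state: normalize each criterion, then rank alternatives by the weighted sum. A sketch with made-up smartphone-app scores and weights (rank reversal can then be probed by adding an alternative and re-normalizing):

```python
import numpy as np

scores = np.array([[7, 9, 6],     # app A: price, usability, features
                   [8, 6, 8],     # app B
                   [6, 8, 9]])    # app C  (all values illustrative)
weights = np.array([0.5, 0.3, 0.2])

norm = scores / scores.max(axis=0)   # per-criterion max normalization
saw = norm @ weights                 # simple additive weighting scores
print(saw, np.argsort(-saw))         # scores and resulting ranking
```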

Keywords: multiple criteria decision making, rank inconsistency, unified commensurate multiple, analytic hierarchy process

Procedia PDF Downloads 68
15053 Specific Emitter Identification Based on Refined Composite Multiscale Dispersion Entropy

Authors: Shaoying Guo, Yanyun Xu, Meng Zhang, Weiqing Huang

Abstract:

Wireless communication networks are developing rapidly, and thus wireless security becomes more and more important. Specific emitter identification (SEI) is a vital part of wireless communication security as a technique to identify unique transmitters. In this paper, an SEI method based on multiscale dispersion entropy (MDE) and refined composite multiscale dispersion entropy (RCMDE) is proposed. The MDE and RCMDE algorithms are used to extract features for the identification of five wireless devices, and a cross-validation support vector machine (CV-SVM) is used as the classifier. The experimental results show that the total identification accuracy is 99.3%, even at a low signal-to-noise ratio (SNR) of 5 dB, which proves that MDE and RCMDE can describe the communication signal series well. In addition, compared with other methods, the proposed method is effective and provides better accuracy and stability for SEI.
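
A minimal sketch of the dispersion-entropy building block and the coarse-graining used for its multiscale variant (typical default parameters, not necessarily the paper's settings; the refined-composite averaging of RCMDE is omitted):

```python
import numpy as np
from collections import Counter
from scipy.stats import norm

def dispersion_entropy(x, m=2, c=3, d=1):
    """Normalized dispersion entropy of a 1D series (minimal sketch)."""
    x = np.asarray(x, float)
    y = norm.cdf(x, loc=x.mean(), scale=x.std())          # map to (0, 1)
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)  # classes 1..c
    span = (m - 1) * d
    patterns = [tuple(z[i:i + span + 1:d]) for i in range(len(z) - span)]
    p = np.array(list(Counter(patterns).values()), float) / len(patterns)
    return -np.sum(p * np.log(p)) / np.log(c ** m)        # in [0, 1]

def coarse_grain(x, tau):
    """Mean over non-overlapping windows, as used in multiscale analysis."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

sig = np.random.default_rng(0).normal(size=2000)
print([round(dispersion_entropy(coarse_grain(sig, t)), 3) for t in (1, 2, 4)])
```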

Keywords: cross-validation support vector machine, refined composite multiscale dispersion entropy, specific emitter identification, transient signal, wireless communication device

Procedia PDF Downloads 119
15052 Application of the Concept of Comonotonicity in Option Pricing

Authors: A. Chateauneuf, M. Mostoufi, D. Vyncke

Abstract:

Monte Carlo (MC) simulation is a technique that provides approximate solutions to a broad range of mathematical problems. A drawback of the method is its high computational cost, especially in high-dimensional settings, such as estimating the Tail Value-at-Risk of large portfolios or pricing basket options and Asian options. For these types of problems, one can construct an upper bound in the convex order by replacing the copula with the comonotonic copula. This comonotonic upper bound can be computed very quickly, but it gives only a rough approximation. In this paper we introduce the Comonotonic Monte Carlo (CoMC) simulation, which uses the comonotonic approximation as a control variate. CoMC is of broad applicability, and numerical results show a remarkable speed improvement. We illustrate the method for estimating the Tail Value-at-Risk and for pricing basket options and Asian options when the log-returns follow a Black-Scholes model or a variance gamma model.
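
The control-variate idea underlying CoMC can be sketched generically: subtract a correlated quantity with known expectation. Here a lognormal variable with a closed-form mean stands in for the quickly computable comonotonic approximation (an illustration of the variance-reduction mechanics, not the paper's pricing code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.standard_normal(n)

f = np.maximum(np.exp(z) - 1.0, 0.0)   # target payoff, E[f] unknown
g = np.exp(z)                          # control variate with known mean
g_mean = np.exp(0.5)                   # E[exp(Z)] = e^(1/2) for standard normal Z

beta = np.cov(f, g)[0, 1] / np.var(g)  # (near-)optimal control coefficient
estimate = f.mean() - beta * (g.mean() - g_mean)
print(estimate, f.mean())              # control-variate vs. plain MC estimate
```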

Keywords: control variate Monte Carlo, comonotonicity, option pricing, scientific computing

Procedia PDF Downloads 499
15051 Application of Additive Manufacturing for Production of Optimum Topologies

Authors: Mahdi Mottahedi, Peter Zahn, Armin Lechler, Alexander Verl

Abstract:

Optimal topologies of components lead to maximum stiffness with minimum material use. For the generation of these topologies, algorithms are normally employed that tackle manufacturing limitations at the cost of the optimality of the result. The global optimum with penalty factor one, however, cannot be fabricated with conventional methods. In this article, an additive manufacturing method is introduced in order to enable the production of global topology optimization results. As a benchmark, topology optimizations with higher and lower penalty factors are performed. Different algorithms are employed in order to interpret the results of topology optimization with lower penalty factors as many microstructure layers. These layers are then joined to form the final geometry. The algorithms' results are then compared experimentally and numerically to find the best interpretation. The findings demonstrate that, by implementation of the selected algorithm, the stiffness of the components produced with this method is higher than what could have been achieved by conventional techniques.

Keywords: topology optimization, additive manufacturing, 3D-printer, laminated object manufacturing

Procedia PDF Downloads 326
15050 New Estimation in Autoregressive Models with Exponential White Noise by Using Reversible Jump MCMC Algorithm

Authors: Suparman Suparman

Abstract:

White noise in an autoregressive (AR) model is often assumed to be normally distributed. In applications, however, the white noise does not always follow a normal distribution. This paper aims to estimate the parameters of an AR model that has exponential white noise. A Bayesian method is adopted: a prior distribution for the parameters of the AR model is selected and then combined with the likelihood function of the data to obtain a posterior distribution. Based on this posterior distribution, a Bayesian estimator for the parameters of the AR model is derived. Because the order of the AR model is considered a parameter, this Bayesian estimator cannot be calculated explicitly. To resolve this problem, the reversible jump Markov Chain Monte Carlo (MCMC) method is adopted. As a result, the parameters of the AR model can be estimated simultaneously.

Keywords: autoregressive (AR) model, exponential white noise, Bayesian, reversible jump Markov Chain Monte Carlo (MCMC)

Procedia PDF Downloads 343
15049 Extension of the Simplified Theory of Plastic Zones for Analyzing Elastic Shakedown in a Multi-Dimensional Load Domain

Authors: Bastian Vollrath, Hartwig Hubel

Abstract:

In case of over-elastic and cyclic loading, strain may accumulate due to a ratcheting mechanism until the state of shakedown is possibly achieved. Load history dependent numerical investigations by a step-by-step analysis are rather costly in terms of engineering time and numerical effort. In the case of multi-parameter loading, where various independent loadings affect the final state of shakedown, the computational effort becomes an additional challenge. Therefore, direct methods like the Simplified Theory of Plastic Zones (STPZ) are developed to solve the problem with a few linear elastic analyses. Post-shakedown quantities such as strain ranges and cyclic accumulated strains are calculated approximately by disregarding the load history. The STPZ is based on estimates of a transformed internal variable, which can be used to perform modified elastic analyses, where the elastic material parameters are modified, and initial strains are applied as modified loading, resulting in residual stresses and strains. The STPZ already turned out to work well with respect to cyclic loading between two states of loading. Usually, few linear elastic analyses are sufficient to obtain a good approximation to the post-shakedown quantities. In a multi-dimensional load domain, the approximation of the transformed internal variable transforms from a plane problem into a hyperspace problem, where time-consuming approximation methods need to be applied. Therefore, a solution restricted to structures with four stress components was developed to estimate the transformed internal variable by means of three-dimensional vector algebra. This paper presents the extension to cyclic multi-parameter loading so that an unlimited number of load cases can be taken into account. The theoretical basis and basic presumptions of the Simplified Theory of Plastic Zones are outlined for the case of elastic shakedown. The extension of the method to many load cases is explained, and a workflow of the procedure is illustrated. An example, adopting the FE-implementation of the method into ANSYS and considering multilinear hardening is given which highlights the advantages of the method compared to incremental, step-by-step analysis.

Keywords: cyclic loading, direct method, elastic shakedown, multi-parameter loading, STPZ

Procedia PDF Downloads 150
15048 Chinese Sentence Level Lip Recognition

Authors: Peng Wang, Tigang Jiang

Abstract:

Computer-based lip-reading methods for different languages cannot be universal. At present, research on Chinese lip reading, whether on datasets or on recognition algorithms, is far from mature. In this paper, we study a Chinese lip-reading method based on machine learning and propose a Chinese sentence-level lip-reading network (CNLipNet) model, which consists of a spatio-temporal convolutional neural network (CNN), a recurrent neural network (RNN), and the Connectionist Temporal Classification (CTC) loss function. This model can map variable-length sequences of video frames to Chinese Pinyin sequences and is trained end-to-end. Moreover, we create CNLRS, a Chinese lip-reading dataset, which contains 5948 samples and can be shared through GitHub. The evaluation of CNLipNet on this dataset yielded a 41% word correct rate and a 70.6% character correct rate. This result is far superior to professional human lip readers, indicating that CNLipNet performs well in lip reading.
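
A shape-level sketch of the CTC loss that aligns frame-wise network outputs with a Pinyin token sequence (PyTorch; the dimensions are illustrative, not CNLipNet's actual configuration):

```python
import torch
import torch.nn as nn

T, N, C = 75, 4, 50          # frames, batch size, vocabulary size (blank = 0)
logits = torch.randn(T, N, C, requires_grad=True)  # stand-in for CNN+RNN output
log_probs = logits.log_softmax(2)                  # (T, N, C) log-probabilities

targets = torch.randint(1, C, (N, 20), dtype=torch.long)   # Pinyin token ids
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 20, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()              # gradients reach the front-end: end-to-end training
print(loss.item())
```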

Keywords: lipreading, machine learning, spatio-temporal, convolutional neural network, recurrent neural network

Procedia PDF Downloads 110
15047 Vision Aided INS for Soft Landing

Authors: R. Sri Karthi Krishna, A. Saravana Kumar, Kesava Brahmaji, V. S. Vinoj

Abstract:

The lunar surface may contain rough and non-uniform terrain with dips and peaks. Soft landing is a method of landing the lander on the lunar surface without any damage to the vehicle. This project focuses on finding a safe landing site for the vehicle by developing a method for the lateral velocity determination of the lunar lander. This is done by processing real-time images obtained by means of an on-board vision sensor. The hazard-avoidance phase of the soft landing starts when the vehicle is about 200 m above the lunar surface. Here, the lander has a very low velocity of about 10 cm/s vertical and 5 m/s horizontal. On the detection of a hazard, the lander is navigated by controlling the vertical and lateral velocity. In order to find an appropriate landing site and to navigate accordingly, image processing is performed continuously. Images are taken continuously until the landing site is determined and the lander safely lands on the lunar surface. By integrating this vision-based navigation with the INS, better accuracy for the soft landing of the lunar lander can be obtained.

Keywords: vision aided INS, image processing, lateral velocity estimation, materials engineering

Procedia PDF Downloads 447
15046 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications

Authors: Arijit Saha, Hassan Kassem, Leo Hoening

Abstract:

Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context, since it can be applied almost everywhere in the world. To reduce the costs of wind turbines and make them more competitive, simulations are very important, since experiments are often too costly, if possible at all. A wind turbine on a vast open area experiences the turbulence generated by the atmosphere, so it was of utmost interest for this research to generate the turbulence in the computational simulation domain through various inlet turbulence generation methods, such as the precursor cyclic and Kaimal Spectrum Exponential Coherence (KSEC) methods. To be able to validate computational fluid dynamics simulations of wind turbines with experimental data, it is crucial to set up the conditions in the simulation as close to reality as possible. The present work therefore aims at investigating the turbulent inflow strategy and boundary conditions of KSEC and providing a comparative analysis alongside the precursor cyclic method for Large Eddy Simulation within the context of wind energy applications. For the generation of the turbulent box through the KSEC method, constrained data were first collected from an auxiliary channel flow, and processing was then performed with the open-source tool PyConTurb, whereas for the precursor cyclic method, the data from the auxiliary channel alone were sufficient. The functionality of these methods was studied through various statistical properties, such as the variance and turbulent intensity, with respect to different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data with a DNS case setup for its applicability to real-field CFD simulations.
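
For reference, the Kaimal-type one-sided spectrum commonly used as the target for such synthetic turbulent inflow can be evaluated as follows (IEC-style form; the mean speed, variance, and length scale are illustrative assumptions, not the study's setup):

```python
import numpy as np

U = 10.0      # m/s, mean wind speed (assumed)
sigma = 1.2   # m/s, standard deviation of velocity fluctuations (assumed)
Lk = 180.0    # m, integral length scale (assumed)

f = np.logspace(-3, 1, 200)   # frequency, Hz
S = 4.0 * sigma**2 * (Lk / U) / (1.0 + 6.0 * f * Lk / U) ** (5.0 / 3.0)

# Sanity check: the spectrum integrates (approximately) to the variance
integral = (np.diff(f) * 0.5 * (S[1:] + S[:-1])).sum()
print(integral / sigma**2)    # close to 1 over this frequency range
```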

Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyConTurb

Procedia PDF Downloads 76