Search results for: double nonlinear predictive controller
2478 Comparison of the Efficacy of Ketamine-Propofol versus Thiopental Sodium-Fentanyl in Procedural Sedation in the Emergency Department: A Randomized Double-Blind Clinical Trial
Authors: Maryam Bahreini, Mostafa Talebi Garekani, Fatemeh Rasooli, Atefeh Abdollahi
Abstract:
Introduction: Procedural sedation and analgesia are desirable for handling painful procedures. The search for the agent with greater efficacy and fewer complications is still ongoing; thus, many sedative regimens have been studied. This study assessed the effectiveness and adverse effects of thiopental sodium-fentanyl against the established combination, ketamine-propofol, for procedural sedation in the emergency department. Methods: Consenting patients were enrolled in this randomized double-blind trial to receive either 1:1 ketamine-propofol (KP) or thiopental-fentanyl (TF) in a 1:1 (mg:mg) proportion on a weight-based dosing basis, to reach the sedation level of American Society of Anesthesiologists class III/IV. Respiratory and hemodynamic complications, nausea and vomiting, recovery agitation, patient recall and satisfaction, provider satisfaction, and recovery time were compared. The study was registered in the Iranian Randomized Controlled Trial Registry (Code: IRCT2015111325025N1). Results: 96 adult patients were included and randomized, 47 in the KP group and 49 in the TF group. Transient hypoxia occurred in 2.1% of the KP group and 8.1% of the TF group, requiring airway maneuvers in 4.2% versus 8.1% of the two groups, respectively; however, no statistically significant difference was observed between the two combinations, and there was no report of endotracheal tube placement or further admission. Patient and physician satisfaction were significantly higher in the KP group. There was no difference in respiratory, gastrointestinal, cardiovascular, or psychiatric adverse events, recovery time, or patient recall of the procedure between the groups. Efficacy and complications were not related to the type of procedure or to patients' smoking or addiction history. Conclusion: The ketamine-propofol and thiopental-fentanyl combinations were comparably effective, although KP resulted in higher patient and provider satisfaction. It is estimated that the thiopental-fentanyl combination can be as potent and efficacious as ketofol, with a relatively similar incidence of adverse events in procedural sedation.
Keywords: adverse effects, conscious sedation, fentanyl, propofol, ketamine, safety, thiopental
Procedia PDF Downloads 219
2477 Modeling and System Identification of a Variable Excited Linear Direct Drive
Authors: Heiko Weiß, Andreas Meister, Christoph Ament, Nils Dreifke
Abstract:
Linear actuators are deployed in a wide range of applications. This paper presents the modeling and system identification of a variable excited linear direct drive (LDD). The LDD is designed based on linear hybrid stepper technology, exhibiting the characteristic tooth structure of mover and stator. A three-phase topology provides the thrust force caused by alternating strengthening and weakening of the flux in the legs. To achieve the best possible synchronous operation, the phases are commutated sinusoidally. Although these LDDs provide high dynamics and drive forces, noise emission limits their operation in calm workspaces. To overcome this drawback, an additional excitation of the magnetic circuit is introduced to the LDD using enabling coils instead of permanent magnets. This new degree of freedom can be used to reduce force variations and the related noise by varying the excitation flux that is usually generated by permanent magnets. Hence, an identified simulation model is necessary to analyze the effects of this modification. In particular, the force variations must be modeled well in order to reduce them sufficiently. The model can be divided into three parts: the current dynamics, the mechanics, and the force functions. These subsystems are described with differential equations or nonlinear analytic functions, respectively. Ordinary nonlinear differential equations are derived and transformed into state space representation. Experiments have been carried out on a test rig to identify the system parameters of the complete model. Static and dynamic simulation-based optimizations are utilized for identification. The results are verified in the time and frequency domains. Finally, the identified model provides a basis for the later design of control strategies to reduce the existing force variations.
Keywords: force variations, linear direct drive, modeling and system identification, variable excitation flux
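A minimal sketch of the three-part model structure described above (current dynamics, force function, mechanics) written in state space and integrated numerically; the linear force coupling and all parameter values are assumed for illustration and are not the identified model from the paper:

```python
# Minimal state-space sketch of a drive model split into current dynamics,
# a force function, and mechanics (all parameters assumed, not identified).
import numpy as np
from scipy.integrate import solve_ivp

R, L_ind = 1.2, 4e-3          # phase resistance (ohm), inductance (H), assumed
m, d, k_f = 2.0, 5.0, 30.0    # mover mass (kg), damping, force constant, assumed

def rhs(t, s):
    i, x, v = s                       # phase current, position, velocity
    u = 10.0 * np.sin(50 * t)         # sinusoidally commutated phase voltage
    di = (u - R * i) / L_ind          # current dynamics
    f = k_f * i                       # simplified (linear) force function
    dv = (f - d * v) / m              # mechanics
    return [di, v, dv]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0, 0.0], max_step=1e-3)
print(sol.y[:, -1])                   # final current, position, velocity
```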
Procedia PDF Downloads 370
2476 Agreement between Basal Metabolic Rate Measured by Bioelectrical Impedance Analysis and Estimated by Prediction Equations in Obese Groups
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Basal metabolic rate (BMR) is a widely used and accepted measure of energy expenditure. Its principal determinant is body mass; however, this parameter is also correlated with a variety of other factors. The objective of this study is to measure BMR and compare it with the values obtained from predictive equations in adults classified according to their body mass index (BMI) values. 276 adults were included in the scope of this study. Their age, height, and weight values were recorded. Five groups were designed based on their BMI values. The first group (n = 85) was composed of individuals with BMI values varying between 18.5 and 24.9 kg/m2. Those with BMI values varying from 25.0 to 29.9 kg/m2 constituted Group 2 (n = 90). Individuals with 30.0-34.9 kg/m2, 35.0-39.9 kg/m2, and > 40.0 kg/m2 were included in Groups 3 (n = 53), 4 (n = 28), and 5 (n = 20), respectively. The most commonly used equations were selected for comparison with the measured BMR values. For this purpose, the values were calculated using four equations to predict BMR, namely those introduced by the Food and Agriculture Organization (FAO)/World Health Organization (WHO)/United Nations University (UNU), Harris and Benedict, Owen, and Mifflin. Descriptive statistics, ANOVA, post hoc Tukey, and Pearson's correlation tests were performed with a statistical program designed for Windows (SPSS, version 16.0). p values smaller than 0.05 were accepted as statistically significant. Means ± SD of Groups 1, 2, 3, 4, and 5 for measured BMR in kcal were 1440.3 ± 210.0, 1618.8 ± 268.6, 1741.1 ± 345.2, 1853.1 ± 351.2, and 2028.0 ± 412.1, respectively. Upon evaluation of the comparison of means among groups, differences were highly significant between Group 1 and each of the remaining four groups. The values increased from Group 2 to Group 5; however, the differences between Groups 2 and 3, Groups 3 and 4, and Groups 4 and 5 were not statistically significant. These non-significant differences were lost in the predictive equations proposed by Harris and Benedict, FAO/WHO/UNU, and Owen; for Mifflin, the non-significance was limited to Groups 4 and 5. Upon evaluation of the correlations between measured BMR and the estimated values computed from the prediction equations, the lowest correlations were observed among the individuals within the normal BMI range, and the highest correlations were detected in individuals with BMI values varying between 30.0 and 34.9 kg/m2. Correlations between measured BMR values and BMR values calculated by FAO/WHO/UNU as well as Owen were the same and the highest. In all groups, the highest correlations were observed between BMR values calculated from the Mifflin and the Harris and Benedict equations, which use age as an additional parameter. In conclusion, the unique resemblance of the FAO/WHO/UNU and Owen equations was pointed out; however, the mean values obtained from FAO/WHO/UNU were much closer to the measured BMR values. Besides, the highest correlations were found between BMR calculated from FAO/WHO/UNU and measured BMR. These findings suggest that FAO/WHO/UNU is the most reliable equation, which may be used when measured BMR values are not available.
Keywords: adult, basal metabolic rate, FAO/WHO/UNU, obesity, prediction equations
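For reference, a minimal sketch of two of the compared equations, using the commonly published Mifflin-St Jeor and Harris-Benedict coefficients (weight in kg, height in cm, age in years); the exact coefficient sets should be checked against the forms used in the study:

```python
# Two of the BMR prediction equations compared in the abstract, with the
# commonly published coefficients (to be verified against the paper).

def bmr_mifflin(weight_kg, height_cm, age_yr, male=True):
    s = 5 if male else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_yr + s

def bmr_harris_benedict(weight_kg, height_cm, age_yr, male=True):
    if male:
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

# Example: 35-year-old woman, 90 kg, 165 cm (BMI ~ 33, Group 3 in the study)
print(bmr_mifflin(90, 165, 35, male=False))          # ~1595 kcal/day
print(bmr_harris_benedict(90, 165, 35, male=False))  # ~1657 kcal/day
```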
Procedia PDF Downloads 134
2475 FLC with 3DSVM for 4-Leg 4-Wire Shunt Active Power Filter
Authors: Abdelhalim Kessal, Ali Chebabhi
Abstract:
In this paper, a controller based on fuzzy logic control (FLC) associated with Three-Dimensional Space Vector Modulation (3DSVM) is applied to a shunt active filter in the αβo axes domain. The main goals are to improve power quality under disturbed loads, minimize source current harmonics, and reduce the neutral current magnitude in the four-wire structure. FLC is used to obtain the reference current and to control the DC-bus voltage at the inverter output. The switching signals of the four-leg inverter are generated through Three-Dimensional Space Vector Modulation (3DSVM). Selected simulation results are shown to validate the proposed system.
Keywords: FLC, 3DSVM, SAPF, harmonic, inverter
Procedia PDF Downloads 499
2474 The Problem of Now in Special Relativity Theory
Authors: Mogens Frank Mikkelsen
Abstract:
Special Relativity Theory (SRT) includes only one characteristic of light, namely that its speed is equal for all observers; by excluding other relevant characteristics of light, the common interpretation of SRT should be regarded as merely an approximative theory. By rethinking the iconic double light cones, a revised version of SRT can be developed. The revised concept of light cones acknowledges an asymmetry of past and future light cones and introduces a concept of the extended past to explain predictions as something other than the future. Combining this with the concept of photon-paired events leads to the inference that Special Relativity Theory can support the existence of Now.
Keywords: relativity, light cone, Minkowski, time
Procedia PDF Downloads 88
2473 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case
Authors: Lukas Reznak, Maria Reznakova
Abstract:
Recession of an economy has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and to verify the findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete choice probit models with parameters estimated by the method of maximum likelihood. In their basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of the error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance the predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo retroactive revisions. As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic trends and financial-market investors' expectations which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country: one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.
Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany
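A minimal sketch of the dynamic probit idea on simulated data: the recession indicator is regressed on lagged leading indicators plus its own lag, so the coefficient on the lagged state captures the autoregressive dynamics. All series and coefficients below are simulated, not the paper's estimates:

```python
# Dynamic probit sketch: recession state depends on lagged leading
# indicators and on its own lag (simulated data, illustrative only).
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200
spread = rng.normal(1.0, 1.2, n)   # yield-curve slope (simulated)
stocks = rng.normal(0.5, 2.0, n)   # stock-index returns (simulated)

rec = np.zeros(n, dtype=int)
for t in range(1, n):              # persistence plus leading indicators
    z = -1.0 - 0.8 * spread[t - 1] - 0.2 * stocks[t - 1] + 1.5 * rec[t - 1]
    rec[t] = int(rng.random() < norm.cdf(z))

X = sm.add_constant(np.column_stack([spread[:-1], stocks[:-1], rec[:-1]]))
model = sm.Probit(rec[1:], X).fit(disp=0)
print(model.params)                # coefficient on rec[t-1] is the dynamics
```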
Procedia PDF Downloads 250
2472 Reliability Analysis of Heat Exchanger Cycle Using Non-Parametric Method
Authors: Apurv Kulkarni, Shreyas Badave, B. Rajiv
Abstract:
The non-parametric reliability technique is useful for assessing the reliability of systems for which failure rates are not available. This is useful when detection of the malfunctioning of any component is the key purpose during ongoing operation of the system. The main purpose of the heat exchanger cycle discussed in this paper is to provide hot water at a constant temperature for longer periods of time. In such a cycle, certain components play a crucial role, and this paper presents an effective way to predict the malfunctioning of these components by determining system reliability. The method discussed in the paper is feasible, and this is clarified with the help of various test cases.
Keywords: heat exchanger cycle, k-statistics, PID controller, system reliability
Procedia PDF Downloads 391
2471 Serious Digital Video Game for Solving Algebraic Equations
Authors: Liliana O. Martínez, Juan E González, Manuel Ramírez-Aranda, Ana Cervantes-Herrera
Abstract:
A serious-game-category mobile application called Math Dominoes is presented. The main objective of this application is to strengthen the teaching-learning process of solving algebraic equations; it is based on the board game "Double 6" dominoes. Math Dominoes allows the practice of solving first-, second-, and third-degree algebraic equations. The application is aimed at students who seek to strengthen their skills in solving algebraic equations in a dynamic, interactive, and fun way, to reduce the risk of failure in subsequent courses that require mastery of this algebraic tool.
Keywords: algebra, equations, dominoes, serious games
Procedia PDF Downloads 133
2470 Application of Granular Computing Paradigm in Knowledge Induction
Authors: Iftikhar U. Sikder
Abstract:
This paper illustrates an application of the granular computing approach, namely rough set theory, in data mining. The paper outlines the formalism of granular computing and elucidates the mathematical underpinning of rough set theory, which has been widely used by the data mining and machine learning communities. A real-world application is illustrated, and the classification performance is compared with other contending machine learning algorithms. The predictive performance of the rough set rule induction model shows comparative success with respect to the other contending algorithms.
Keywords: concept approximation, granular computing, reducts, rough set theory, rule induction
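A minimal toy sketch of the rough set approximations underlying rule induction: equivalence classes under the indiscernibility relation yield the lower and upper approximations of a target concept. The objects and attribute names below are hypothetical:

```python
# Rough-set lower and upper approximation on a toy decision table.
from collections import defaultdict

objects = {
    "o1": {"color": "red", "size": "big", "class": "yes"},
    "o2": {"color": "red", "size": "big", "class": "no"},
    "o3": {"color": "blue", "size": "small", "class": "no"},
    "o4": {"color": "blue", "size": "small", "class": "no"},
}
attrs = ("color", "size")
target = {o for o, d in objects.items() if d["class"] == "yes"}  # concept X

# Equivalence classes (granules) under indiscernibility on the attributes
blocks = defaultdict(set)
for name, desc in objects.items():
    blocks[tuple(desc[a] for a in attrs)].add(name)

lower = set().union(*([b for b in blocks.values() if b <= target] or [set()]))
upper = set().union(*([b for b in blocks.values() if b & target] or [set()]))
print("lower:", lower, "upper:", upper)  # boundary region = upper - lower
```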
Procedia PDF Downloads 533
2469 Assessment of Bisphenol A and 17 α-Ethinyl Estradiol Bioavailability in Soils Treated with Biosolids
Authors: I. Ahumada, L. Ascar, C. Pedraza, J. Montecino
Abstract:
It has been found that the addition of biosolids to soil is beneficial to soil health, enriching the soil with essential nutrient elements. Although these biosolids have properties that allow for the improvement of the physical features and productivity of agricultural and forest soils and the recovery of degraded soils, they also contain trace elements, organic trace compounds, and pathogens that can damage the environment. Application of these biosolids to land without complete treatment, as well as of the treated wastewater, can transfer these compounds into terrestrial and aquatic environments, giving rise to potential accumulation in plants. The general aim of this study was to evaluate the bioavailability of bisphenol A (BPA) and 17 α-ethinyl estradiol (EE2) in a soil-biosolid system using wheat (Triticum aestivum) plant assays and a predictive extraction method using a solution of hydroxypropyl-β-cyclodextrin (HPCD), in order to determine whether the latter is a reliable surrogate for the bioassay. Two soils were obtained from the central region of Chile (Lo Prado and Chicauma). Biosolids were obtained from a regional wastewater treatment plant. The soils were amended with biosolids at 90 Mg ha-1. Soils treated with biosolids spiked with 10 mg kg-1 of EE2 and with 15 mg kg-1 and 30 mg kg-1 of BPA were also included. The BPA and EE2 concentrations were determined in biosolid, soil, and plant samples through ultrasound-assisted extraction, solid phase extraction (SPE), and gas chromatography coupled to mass spectrometry (GC/MS). The bioavailable fraction found in each of the soils cultivated with wheat plants was compared with the results obtained through the cyclodextrin biosimulator method. The total concentrations found in the biosolid from the treatment plant were 0.150 ± 0.064 mg kg-1 and 12.8 ± 2.9 mg kg-1 for EE2 and BPA, respectively. BPA and EE2 bioavailability is affected by the organic matter content and the physical and chemical properties of the soil. The bioavailability response of both compounds in the two soils varied with the EE2 and BPA concentration. In the case of EE2, the wheat plants contained higher concentrations in the roots than in the shoots, and the concentration of EE2 increased with increasing biosolids rate. For BPA, on the other hand, a higher concentration was found in the shoots than in the roots of the plants. The predictive capability of the HPCD extraction was assessed using a simple linear correlation test for both compounds in wheat plants. The correlation coefficient between the EE2 values obtained from the HPCD extraction and those obtained from the wheat plants was r = 0.99, with p-value ≤ 0.05. In the case of BPA, on the other hand, no correlation was found. Therefore, the methodology was validated with respect to wheat plant bioassays only in the case of EE2. Acknowledgments: The authors thank FONDECYT 1150502.
Keywords: emerging compounds, bioavailability, biosolids, endocrine disruptors
Procedia PDF Downloads 149
2468 Introducing an Innovative Structural Fuse for Creation of Repairable Buildings with See-Saw Motion during Earthquake and Investigating It by Nonlinear Finite Element Modeling
Authors: M. Hosseini, N. Ghorbani Amirabad, M. Zhian
Abstract:
Seismic design codes accept structural and nonstructural damage after severe earthquakes (provided that the building is prevented from collapse), so that in many cases demolition and reconstruction of the building is inevitable, which is usually difficult, costly, and time consuming. Therefore, designing and constructing buildings in such a way that they can be easily repaired after earthquakes, even major ones, is quite desirable. For this purpose, giving the building structure, partially or as a whole, the possibility of rocking or see-saw motion has been used by some researchers in the recent decade. The central support has a main role in creating the possibility of see-saw motion in the building's structural system. In this paper, paying particular attention to the key role of the central fuse and support, an innovative energy dissipater that can act as the central fuse and support of a building with see-saw motion is introduced, and the process of reaching an optimal geometry for it by using finite element analysis is presented. Several geometric shapes were considered for the proposed central fuse and support. In each case, the hysteresis moment-rotation behavior of the considered fuse was obtained under the simultaneous effect of vertical and horizontal loads, by nonlinear finite element analyses. To find the optimal geometric shape, the maximum plastic strain value in the fuse body was considered as the main parameter. The rotational stiffness of the fuse under the acting moments is another important parameter for finding the optimum shape. The proposed fuse and support can be called the Yielding Curved Bars and Clipped Hemisphere Core (YCB&CHC, or more briefly YCB) energy dissipater. Based on extensive nonlinear finite element analyses, it was found that using a rectangular section for the curved bars gives more reliable results. The YCB energy dissipater with the optimal shape was then used in a structural model of a 12-story regular building as its central fuse and support, to give it the possibility of see-saw motion, and its seismic responses were compared to those of the building in the fixed-base condition, subjected to the three-component accelerations of several selected earthquakes, including Loma Prieta, Northridge, and Parkfield. In the building with see-saw motion, some simple yielding-plate energy dissipaters were also used under the circumferential columns. The results indicated that equipping the building with central and circumferential fuses results in a remarkable reduction of its seismic responses, including the base shear, inter-story drift, and roof acceleration. In fact, by using the proposed technique the plastic deformations are concentrated in the fuses in the lowest story of the building, so that the main body of the building structure remains basically elastic, and therefore the building can be easily repaired after an earthquake.
Keywords: rocking mechanism, see-saw motion, finite element analysis, hysteretic behavior
Procedia PDF Downloads 410
2467 Prediction of Seismic Damage Using Scalar Intensity Measures Based on Integration of Spectral Values
Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou
Abstract:
A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of seismic performance strongly depends on the choice of the seismic Intensity Measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional IMs of ground motion have been used to estimate the damage potential to structures. Yet, none of them has been proved able to predict the seismic damage adequately. Therefore, alternative scalar intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. Some of these IMs are based on the integration of spectral values over a range of periods, in an attempt to account for the information that the shape of the acceleration, velocity, or displacement spectrum provides. The adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings is investigated in the present paper. The investigated IMs, some of which are structure-specific and some non-structure-specific, are defined via integration of spectral values. To achieve this purpose, three R/C buildings, symmetric in plan, are studied. The buildings are subjected to 59 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along the structural axes. The response is determined by nonlinear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results revealed that the structure-specific IMs present higher correlation with the seismic damage of the three buildings. However, the adequacy of the IMs for estimation of the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
Keywords: damage measures, bidirectional excitation, spectral-based IMs, R/C buildings
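A minimal sketch of a scalar IM defined by integrating spectral values over a period range, in the spirit of Housner-type intensities; the spectral shape, integration bounds, and fundamental period below are assumed for illustration, not taken from the paper:

```python
# Scalar IM obtained by integrating a response spectrum over a period band.
import numpy as np

def spectrum_integral_im(periods, spectral_values, t_low, t_high):
    """Integrate a response spectrum between t_low and t_high (seconds)."""
    mask = (periods >= t_low) & (periods <= t_high)
    return np.trapz(spectral_values[mask], periods[mask])

T = np.linspace(0.05, 4.0, 200)          # discrete periods
Sv = 0.8 * T * np.exp(-T / 1.5)          # placeholder spectral shape
T1 = 0.9                                  # fundamental period (assumed)

# A structure-specific variant integrates around T1; bounds are assumed.
im = spectrum_integral_im(T, Sv, 0.2 * T1, 1.5 * T1)
print(f"IM = {im:.4f}")
```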
Procedia PDF Downloads 329
2466 Comparison of Physical and Chemical Effects on Senescent Cells
Authors: Svetlana Guryeva, Inna Kornienko, Andrey Usanov, Dmitry Usanov, Elena Petersen
Abstract:
Every day, cells in our organism are exposed to various factors: chemical agents, reactive oxygen species, ionizing radiation, and others. These factors can cause damage to DNA, the cellular membrane, intracellular compartments, and proteins. The fate of cells depends on the exposure intensity and duration. Prolonged and intense exposure causes irreversible damage accumulation, which triggers permanent cell cycle arrest (cellular senescence) or cell death programs. Low-dose exposure, by contrast, can lead to cell renovation and improvement of the cells' functional state. Therefore, it is a pivotal question to investigate the factors and doses that result in the described positive effects. In order to estimate the influence of different agents, the proliferation index and the levels of cell death markers (annexin V/propidium iodide), senescence-associated β-galactosidase, and lipofuscin were measured. The experiments were conducted on primary human fibroblasts of the 8th passage. According to the levels of the mentioned markers, these cells were defined as senescent cells. The effect of a low-frequency magnetic field was investigated, and different modes of magnetic field exposure were tested. The physical agents were compared with chemical agents: metformin (10 mM) and taurine (0.8 mM and 1.6 mM). Cells were incubated with the chemicals for 5 days. The highest decrease in the levels of senescence-associated β-galactosidase (21%) and lipofuscin (17%) was observed in the primary senescent fibroblasts 5 days after double treatment, at a 48 h interval, with the low-frequency magnetic field. There were no significant changes in the proliferation index after magnetic field application, and no cytotoxic effect of the magnetic field was observed. The chemical agent taurine (1.6 mM) decreased the level of senescence-associated β-galactosidase by 23% and that of lipofuscin by 22%. Metformin reduced the activity of senescence-associated β-galactosidase by 15% and the level of lipofuscin by 19% in this experiment. According to these results, the effect of double treatment at a 48 h interval with the low-frequency magnetic field and the effect of taurine (1.6 mM) were comparable to the effect of metformin, whose anti-aging properties are proven. In conclusion, this study can become the first step towards the creation of a standardized system for the investigation of different effects on senescent cells.
Keywords: biomarkers, magnetic field, metformin, primary fibroblasts, senescence, taurine
Procedia PDF Downloads 283
2465 Electromagnetic Energy Harvesting by Using a Rectenna with a Metamaterial Lens
Authors: Ursula D. C. Resende, Fabiano S. Bicalho, Sandro T. M. Gonçalves
Abstract:
The growing demand for cheap and clean energy sources has motivated the study and development of distinct technologies and devices able to provide different amounts of energy. In order to supply energy to small loads, energy from the electromagnetic spectrum can be harvested. This possibility is particularly interesting because this kind of energy is constantly available in the environment, and the number of radiofrequency sources is permanently increasing due to advances in telecommunications services. A rectenna, which is a combination of an antenna and a rectifier circuit, is a device that can efficiently perform electromagnetic energy harvesting. However, since the amount of electromagnetic energy available in the environment is very small, only limited values of power can be harvested by the rectenna. Therefore, several technical strategies have been investigated in order to increase this amount of power. In this work, a metamaterial electromagnetic lens is used to improve the electromagnetic energy harvesting. The rectenna investigated was designed and optimized to charge a Li-Ion battery using the electromagnetic energy from a commercial Wi-Fi internet router, model TL-WR841HP, operating at 2.45 GHz with maximum output power equal to 18 dBm. The rectenna consists of a highly directive antenna, a voltage-doubler rectifier circuit, and a metamaterial lens. The printed antenna, constituted of two rectangular radiator elements, was designed and optimized by using the Computer Simulation Technology (CST) software in order to obtain high directivity and values of the S11 parameter below -10 dB at 2.45 GHz. The antenna was printed over a double-sided copper fiberglass substrate, FR4, with characterized relative electric permittivity εr = 4.3 and loss tangent δ = 0.01. The rectifier circuit, which incorporates a circuit for impedance matching and uses the Schottky diode HSMS-2852, was designed and optimized by using the Advanced Design System (ADS) software and was built over the same FR4 substrate. The metamaterial cell is composed of two Square Split Ring Resonators (S-SRR) and a thin wire, in order to operate with negative values of εr and of relative magnetic permeability at 2.45 GHz. In order to evaluate the performance of the proposed rectenna, two experimental charging tests were performed, one without and the other with the metamaterial lens. The results obtained demonstrate that the electromagnetic lens was able to significantly increase the levels of electric current delivered to the battery, by approximately 44%.
Keywords: electromagnetic energy harvesting, electromagnetic lens, metamaterial, rectenna
Procedia PDF Downloads 145
2464 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
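A minimal sketch of two of the chemometric preprocessing steps named above, standard normal variate (SNV) and Savitzky-Golay (SG) filtering, applied to toy spectra; the band count and filter settings are assumed, not taken from the paper:

```python
# SNV and Savitzky-Golay preprocessing of NIR spectra (toy data).
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra: np.ndarray) -> np.ndarray:
    """Center and scale each spectrum (row) by its own mean and std."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

rng = np.random.default_rng(0)
X = rng.random((10, 224))            # 10 toy spectra, 224 bands (assumed)
X_snv = snv(X)
X_sg = savgol_filter(X_snv, window_length=11, polyorder=2,
                     deriv=1, axis=1)  # first-derivative SG smoothing
print(X_sg.shape)
```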
Procedia PDF Downloads 101
2463 Planning the Participation of Units Bound to Demand Response Programs with Regard to Ancillary Services in the P-Q Power Market
Authors: Farnoosh Davarian
Abstract:
The present research focuses on planning the participation of units bound to demand response (DR) programs, considering ancillary services, in the P-Q power market. Moreover, it provides a comprehensive exploration of the effects of demand reduction and redistribution across several predefined scenarios (three pre-designed demand response programs ranging, for example, from 5% to 20%) on system voltage and losses in a smart distribution system; in the studied network, distributed energy resources (DERs) such as synchronous distributed generators and wind turbines offer their active and reactive power to the proposed market. GAMS, specialized software for high-level mathematical modeling, is used for solving linear, nonlinear, and integer programming problems. A notable feature of GAMS is that modeling is separate from the solution method; thus, by changing the solver, the model can be solved using various methods (linear, nonlinear, integer, etc.). Finally, the combined active and reactive market problem in smart distribution systems, considering renewable distributed sources and demand response programs, is evaluated in GAMS. The active and reactive power trading by the distribution company is carried out in the wholesale market, where the demanded quantity is active power. By using the buy-back/payment program, responsive loads or aggregators can participate in the market. The objective function of the proposed market is to minimize the cost of active and reactive power from distributed generation sources and distribution companies, the penalty cost for CO2 emissions, and the cost of the buy-back/payment program. The effectiveness of the proposed method has been evaluated in a case study.
Keywords: consumer behavior, demand response, pollution cost, combined active and reactive market
Procedia PDF Downloads 12
2462 Engineering the Topological Insulator Structures for Terahertz Detectors
Authors: M. Marchewka
Abstract:
The article is devoted to the possible optical transitions in double quantum well systems based on HgTe/HgCd(Mn)Te heterostructures. Such structures can find applications as detectors and sources of radiation in the terahertz range. A double quantum well (DQW) system consists of two QWs separated by a barrier transparent to electrons. Such systems look promising from the point of view of the additional degrees of freedom they provide. In the case of the topological insulator, in an approximately 6.4 nm wide HgTe QW or in strained 3D HgTe films, topologically protected surface states appear at the interfaces/surfaces. Electrons in those edge states move along the interfaces/surfaces without backscattering due to time-reversal symmetry. The combination of the topological properties, which have already been verified experimentally, with the very well known properties of DQWs can be very interesting from the applications point of view, especially in the THz area. It is important that, at the present stage, the technology makes it possible to create high-quality structures of this type, and intensive experimental and theoretical studies of their properties are already underway. The approach presented in this paper is based on the eight-band KP model, including the additional terms related to structural inversion asymmetry, interface inversion asymmetry, the influence of the magnetic content, and the uniaxial strain, which together describe the full picture of a possible real structure. All of these terms, together with an external electric field, can be sources of symmetry breaking in the investigated materials. Using the eight-band KP model, we investigated the electronic band structure with and without a magnetic field from the point of view of application as a THz detector in a small magnetic field (below 2 T). We believe that such structures are the way to obtain tunable topological insulators and multilayer topological insulators. Using the one-dimensional electrons of the topologically protected interface states as fast, collision-free charge and signal carriers, the detection of the optical signal should be fast, which is very important for high-resolution detection of signals in the THz range. The proposed engineering of the investigated structures is now one of the important steps on the way to obtaining structures with the predicted properties.
Keywords: topological insulator, THz spectroscopy, KP model, II-VI compounds
Procedia PDF Downloads 123
2461 Influence of Smoking on Fine and Ultrafine Air Pollution PM in Their Pulmonary Genetic and Epigenetic Toxicity
Authors: Y. Landkocz, C. Lepers, P.J. Martin, B. Fougère, F. Roy Saint-Georges. A. Verdin, F. Cazier, F. Ledoux, D. Courcot, F. Sichel, P. Gosset, P. Shirali, S. Billet
Abstract:
In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and fine particles as carcinogenic to humans. Causal relationships exist between elevated ambient levels of airborne particles and increases in mortality and morbidity, including pulmonary diseases such as lung cancer. However, due to the double complexity of both the physicochemical properties of Particulate Matter (PM) and tumor mechanistic processes, the mechanisms of action remain incompletely elucidated. Furthermore, because of several properties common to air pollution PM and tobacco smoke, such as the route of exposure and the chemical composition, potential mechanisms of synergy could exist. Smoking could therefore be an aggravating factor of particle toxicity. In order to identify some mechanisms of action of particles according to their size, two samples of PM were collected in the urban-industrial area of Dunkerque: PM0.03-2.5 and PM0.33-2.5. The overall cytotoxicity of the fine particles was determined on human bronchial cells (BEAS-2B). The toxicological study then focused on the metabolic activation of the organic compounds coated onto PM and on some genetic and epigenetic changes induced in a co-culture model of BEAS-2B cells and alveolar macrophages isolated from bronchoalveolar lavages performed in smokers and non-smokers. The results showed (i) the contribution of the ultrafine fraction of atmospheric particles to the genotoxic (e.g., DNA double-strand breaks) and epigenetic mechanisms (e.g., promoter methylation) involved in tumor processes, and (ii) the influence of smoking on the cellular response. Three main conclusions can be drawn. First, our results showed the ability of the particles to induce deleterious effects potentially involved in the initiation and promotion stages of carcinogenesis. Second, smoking affects the nature of the induced genotoxic effects. Finally, the in vitro cell model developed, using bronchial epithelial cells and alveolar macrophages, takes into account quite realistically some of the cell interactions existing in the lung.
Keywords: air pollution, fine and ultrafine particles, genotoxic and epigenetic alterations, smoking
Procedia PDF Downloads 348
2460 Measuring Enterprise Growth: Pitfalls and Implications
Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić
Abstract:
Enterprise growth is generally considered a key driver of competitiveness, employment, economic development, and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants, and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal, and related to a variety of factors reflecting the individual, firm, organizational, industry, or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments to measure those factors are often arbitrary, and causality between the variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth, and they are used interchangeably. Differences among the various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the purpose of this paper is threefold: firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment, and asset growth; secondly, to explore the prospects of financial indicators, as exact, visible, standardized, and accessible variables, to serve as determinants of enterprise growth; and finally, to contribute to the understanding of the implications for research results and growth recommendations caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises' performance during 2008-2013, extracted from the national register of the financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures (see the sketch after the abstract). The findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and a growth measure is inconsistent; namely, the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables that serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact, and free of perceptual nuances in building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises
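A minimal sketch of the comparison logic on simulated data: the same financial indicators are used to fit a separate logistic regression for each binary growth measure, and the resulting coefficient sets can then be compared across measures. All labels and effects below are simulated, not the Croatian SME data:

```python
# One logistic regression per growth measure, same financial predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 4))    # e.g., liquidity, leverage, margin, turnover

# Hypothetical binary growth labels with measure-specific drivers
grow_revenue = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, n)) > 0
grow_employ = (0.7 * X[:, 2] - 0.4 * X[:, 1] + rng.normal(0, 1, n)) > 0
grow_assets = (X[:, 3] - 0.3 * X[:, 1] + rng.normal(0, 1, n)) > 0

for name, y in [("revenue", grow_revenue), ("employment", grow_employ),
                ("assets", grow_assets)]:
    clf = LogisticRegression().fit(X, y)
    print(name, np.round(clf.coef_[0], 2))  # predictor sets differ by measure
```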
Procedia PDF Downloads 253
2459 Fair Value Accounting and Evolution of the Ohlson Model
Authors: Mohamed Zaher Bouaziz
Abstract:
Our study examines the Ohlson model, which links a company's market value to its equity and net earnings, in the context of the evolution of the Canadian accounting model, characterized by more extensive use of fair value and a broader measure of performance after IFRS adoption. Our hypothesis is that if equity is reported at its fair value, this valuation is closely linked to market capitalization, so the weight of earnings weakens or even disappears in the Ohlson model. Drawing on Canada's adoption of the International Financial Reporting Standards (IFRS), our results support our hypothesis that equity appears to include most of the information relevant to investors, while earnings have become less important. However, the predictive power of earnings does not disappear.
Keywords: fair value accounting, Ohlson model, IFRS adoption, value-relevance of equity and earnings
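A minimal sketch, on simulated data, of the price-levels regression behind such value-relevance tests: market value (price) regressed on book value of equity and earnings per share, with the coefficients and R-squared indicating the relative weight of each component. The data and effect sizes are invented for illustration:

```python
# Price-levels form of the Ohlson-type value-relevance regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
bvps = rng.uniform(5, 50, n)            # book value per share (simulated)
eps = rng.normal(2.0, 1.0, n)           # earnings per share (simulated)
price = 1.5 + 0.9 * bvps + 2.0 * eps + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([bvps, eps]))
model = sm.OLS(price, X).fit()
print(model.params)      # weights on equity and earnings
print(model.rsquared)    # value-relevance (R^2) of the specification
```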
Procedia PDF Downloads 191
2458 Development and Test of an Open Source PX4 Controller for an Omnidirectional Unmanned Surface Vehicle
Authors: Norbert Szulc, Cezary Wieczorkowski, Igor Baranowski
Abstract:
In this paper, a control system was developed that bridges the gap in support for Unmanned Surface Vessels in the PX4 open-source autopilot. The system is designed for an omnidirectional watercraft with four motors. A modular autopilot architecture centred around publish-subscribe interprocess communication was used. The paper presents the implementation and integration process of a generic surface vehicle controller capable of driving any configuration of motors through the recently introduced control allocator in the PX4 autopilot. The proposed approach was successfully tested in a case study through implementation on the ASV Perkoz.
Keywords: control system, PX4, drones, rovers, surface vessels, omnidirectional
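A minimal sketch (assumed, not the actual PX4 code) of the control-allocation idea for a four-motor omnidirectional craft: a fixed effectiveness matrix maps body-frame commands (surge force, sway force, yaw moment) to motor thrusts via the pseudo-inverse. The geometry coefficients below are hypothetical:

```python
# Pseudo-inverse control allocation for a four-motor omnidirectional vessel.
import numpy as np

# Columns: contribution of each motor to [surge X, sway Y, yaw N];
# the mounting angles and lever arms are invented for illustration.
B = np.array([
    [0.707,  0.707,  0.707,  0.707],   # surge contribution
    [0.707, -0.707,  0.707, -0.707],   # sway contribution
    [0.30,  -0.30,  -0.30,   0.30],    # yaw moment (lever arms, m)
])

B_pinv = np.linalg.pinv(B)

def allocate(tau):
    """Map desired [X, Y, N] to motor thrusts, clipped to actuator limits."""
    u = B_pinv @ np.asarray(tau, dtype=float)
    return np.clip(u, -1.0, 1.0)        # normalized thrust commands

print(allocate([1.0, 0.0, 0.2]))        # move forward with a slight yaw
```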
Procedia PDF Downloads 88
2457 Effect of Out-Of-Plane Deformation on Relaxation Method of Stress Concentration in a Plate
Authors: Shingo Murakami, Shinichi Enoki
Abstract:
In structures, stress concentration is a factor in fatigue fracture. Basically, stress concentration is a phenomenon that should be avoided; however, it is difficult to avoid, so relaxation of the stress concentration is important. Stress concentration arises from notches and circular holes. One relaxation method covers a notch or a circular hole with a composite patch. This method is used to repair aircraft wings, but it has not been systematized. Composites are also more expensive than single materials. Accordingly, we propose a relaxation method in which a single-material patch covers a notch or a circular hole, and we aim to systematize this relaxation method. We performed FEA (Finite Element Analysis) on an object by using a three-dimensional FEA model. The object was a plate with a circular hole to which a patch adheres, with a uniaxial tensile load acting on the patched plate. In a three-dimensional FEA model, it is not easy to model the adhesion layer. Basically, the yield stress of the adhesive is smaller than that of the adherends; accordingly, the adhesion layer reaches plastic deformation earlier than the adherends, below the yield stress of the adherends. Therefore, we propose a three-dimensional FEA model in which a nonlinear elastic region is applied to the adhesion layer. The nonlinear elastic region was calculated by a bilinear approximation. We compared the analysis results with tensile test results to confirm whether the analysis model is useful; the analysis results agreed with the tensile test results, confirming the usefulness of the model. As a result of the analysis with the three-dimensional FEA model, it was confirmed that an out-of-plane deformation occurs in the patched plate with a circular hole. The out-of-plane deformation causes a stress increase in the patched plate. Therefore, we investigated whether the out-of-plane deformation affects the relaxation of the stress concentration in the plate with a circular hole in this relaxation method. As a result, it was confirmed that the out-of-plane deformation inhibits relaxation of the stress concentration in the plate with a circular hole.
Keywords: stress concentration, patch, out-of-plane deformation, finite element analysis
Procedia PDF Downloads 270
2456 Analysis of SCR-Based ESD Protection Circuit on Holding Voltage Characteristics
Authors: Yong Seo Koo, Jong Ho Nam, Yong Nam Choi, Dae Yeol Yoo, Jung Woo Han
Abstract:
This paper presents a silicon controlled rectifier (SCR) based ESD protection circuit for ICs. The proposed ESD protection circuit has a low trigger voltage and a high holding voltage compared with the conventional SCR ESD protection circuit. The electrical characteristics of the proposed ESD protection circuit are simulated and analyzed using a TCAD simulator. The proposed ESD protection circuit exhibited effective low-voltage ESD characteristics, with a low trigger voltage and a high holding voltage.
Keywords: electrostatic discharge (ESD), silicon controlled rectifier (SCR), holding voltage, protection circuit
Procedia PDF Downloads 381
2455 Time Lag Analysis for Readiness Potential by a Firing Pattern Controller Model of a Motor Nerve System Considering Innervation and Jitter
Authors: Yuko Ishiwaka, Tomohiro Yoshida, Tadateru Itoh
Abstract:
Humans unconsciously make a preparation, called the readiness potential (RP), before becoming aware of their own decisions. For example, when recognizing a button and pressing it, RP peaks are observed 200 ms before the initiation of the movement. It has been known that preparatory movements are acquired before actual movements, but it is still not well understood how humans obtain the RP during their growth. On the question of why the brain must respond earlier, we assume that humans have to adapt to a dangerous environment to survive and therefore acquire behavior that covers the various time lags distributed in the body. Without the RP, humans cannot act quickly to avoid dangerous situations. In taking action, the brain makes decisions, signals are transmitted through the spinal cord to the muscles, and the body moves according to the laws of physics. Our research focuses on the time lag of the neural signal transmitted from the brain to muscle via the spinal cord. This time lag is one of the essential factors for the readiness potential. We propose a firing pattern controller model of a motor nerve system that considers innervation and jitter, which produce time lag. In our simulation, we adopt innervation and jitter in our proposed muscle-skeleton model, because these two factors can create infinitesimal time lags. The Hodgkin-Huxley model with a Q10 temperature factor is adopted to calculate action potentials, because the refractory period produces a more significant time lag for continuous firing. Keeping the muscle power constant requires cooperative firing of motor neurons, because the refractory period stifles the continuous firing of a single neuron. One more factor producing time lag is slow or fast twitch. The expanded Hill-type model is adopted to calculate power and time lag. We simulate our muscle-skeleton model by controlling the firing pattern and discuss the relationship between the physical and neural time lags. For our discussion, we analyze the time lags in a simulation of knee bending. The law of inertia caused the most influential time lag. The next most crucial time lag was the time to generate the action potential induced by innervation and jitter. In our simulation, the time lag at the beginning of the knee movement is 202 ms to 203.5 ms. This suggests that the readiness potential should be prepared more than 200 ms before decision making.
Keywords: firing patterns, innervation, jitter, motor nerve system, readiness potential
Procedia PDF Downloads 832
2454 Predicting the Success of Bank Telemarketing Using Artificial Neural Network
Authors: Mokrane Selma
Abstract:
The shift towards decision making (DM) based on artificial intelligence (AI) techniques will change the way in which consumer markets and our societies function. Through AI, predictive analytics is being used by businesses to identify patterns and major trends with the objective of improving DM and influencing future business outcomes. This paper proposes an Artificial Neural Network (ANN) approach to predict the success of telemarketing calls for selling bank long-term deposits. To validate the proposed model, we use bank marketing data from 41188 phone calls. The ANN attains 98.93% accuracy, which outperforms other conventional classifiers and confirms that it is a credible and valuable approach for telemarketing campaign managers.
Keywords: bank telemarketing, prediction, decision making, artificial intelligence, artificial neural network
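A minimal sketch of such an ANN classifier on the public UCI bank-marketing data (41188 calls) with scikit-learn; the file path, network size, and train/test split are assumed, so the resulting accuracy will differ from the paper's figure:

```python
# ANN classifier for the bank long-term-deposit task (sketch).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

df = pd.read_csv("bank-additional-full.csv", sep=";")   # path assumed
X = pd.get_dummies(df.drop(columns=["y"]))              # one-hot encode
y = (df["y"] == "yes").astype(int)                      # deposit subscribed?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```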
Procedia PDF Downloads 160
2453 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards
Authors: Golnush Masghati-Amoli, Paul Chin
Abstract:
Over the past few years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that is focused on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed nonlinear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing nonlinear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern about the difficulty of explaining the models for regulatory purposes.
Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering
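A minimal sketch of the per-bin Weight of Evidence, the quantity the Hybrid Model estimates by matching an ML score distribution rather than computing it directly from good/bad counts; the data below are simulated:

```python
# Weight of Evidence per bin: WoE = ln(%good / %bad), on simulated data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
score = rng.normal(600, 50, 5000)                 # some risk feature
bad = (rng.random(5000) < 1 / (1 + np.exp((score - 580) / 25))).astype(int)

df = pd.DataFrame({"bin": pd.qcut(score, 5), "bad": bad})
grp = df.groupby("bin", observed=True)["bad"].agg(["sum", "count"])
dist_bad = grp["sum"] / grp["sum"].sum()
dist_good = (grp["count"] - grp["sum"]) / (grp["count"] - grp["sum"]).sum()
grp["woe"] = np.log(dist_good / dist_bad)         # positive WoE = safer bin
print(grp[["woe"]])
```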
Procedia PDF Downloads 137
2452 A Research on Tourism Market Forecast and Its Evaluation
Authors: Min Wei
Abstract:
Traditional prediction methods for the tourism market pay more attention to the accuracy of the forecasts, ignoring the feasibility and operability of the forecasting results, which has made it difficult to test the results scientifically. Applying a linear regression model, this paper attempts to construct a scientific evaluation system for predicted values, both to ensure the accuracy and stability of the predicted values and to ensure the feasibility and operability of the forecasting results. The findings show that such a scientific evaluation system can implement the scientific concept of development and coordinate the harmonious development of man and nature.
Keywords: linear regression model, tourism market, forecast, tourism economics
Procedia PDF Downloads 334
2451 Real Estate Trend Prediction with Artificial Intelligence Techniques
Authors: Sophia Liang Zhou
Abstract:
For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and have largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. The work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, a 6-month lag, and a 12-month lag. Linear regression (LR), random forest (RF), and artificial neural networks (ANN) were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, personal income, etc., in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, the FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate for missing values, backfilling and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of time lags can have a significant influence on model performance and require further investigation. The best performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the best stable performance overall, with accuracies > 95% for each city. This study reveals the influence of the input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data-imputing methods for establishing accurate predictive models.
Keywords: linear regression, random forest, artificial neural network, real estate price prediction
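A minimal sketch, on simulated monthly series, of the lagged design matrix described above: features observed 12 months earlier are used to predict the home price index. The series lengths mirror the study window, but all values are invented:

```python
# Building a 12-month-lagged design matrix for price-index regression.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 166  # months, March 2005 - December 2018
df = pd.DataFrame({
    "hpi": np.cumsum(rng.normal(0.3, 1.0, n)) + 150,    # home price index
    "gdp": np.cumsum(rng.normal(0.2, 0.5, n)) + 100,
    "income": np.cumsum(rng.normal(0.1, 0.4, n)) + 50,
})

lag = 12
X = df[["gdp", "income"]].shift(lag).dropna()   # features lagged 12 months
y = df["hpi"].iloc[lag:]

model = LinearRegression().fit(X, y)
print("R^2 with 12-month lag:", model.score(X, y))
```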
Procedia PDF Downloads 104
2450 Attention Problems among Adolescents: Examining Educational Environments
Authors: Zhidong Zhang, Zhi-Chao Zhang, Georgianna Duarte
Abstract:
This study investigated attention problems using the Achenbach System of Empirically Based Assessment (ASEBA). Two thousand eight hundred ninety-four adolescents were surveyed using a stratified sampling method. We examined the relationships between relevant background variables and attention problems. Multiple regression models were applied to analyze the data. Relevant variables such as sports activities, hobbies, age, grade, and the number of close friends were included in this study as predictive variables. The analysis results indicated that educational environments and extracurricular activities are important factors influencing students' attention problems.
Keywords: adolescents, ASEBA, attention problems, educational environments, stratified sampling
Procedia PDF Downloads 288
2449 SQL Generator Based on MVC Pattern
Authors: Chanchai Supaartagorn
Abstract:
Structured Query Language (SQL) is the de facto standard language for accessing and manipulating data in a relational database. Although SQL is a simple and powerful language, most novice users have trouble with its syntax. Thus, we present a SQL generator tool that is capable of translating user actions into SQL and displaying the SQL commands and the resulting data sets simultaneously. The tool was developed based on the Model-View-Controller (MVC) pattern. The MVC pattern is a widely used software design pattern that enforces the separation between the input, processing, and output of an application. Developers take full advantage of it to reduce the complexity of architectural design and to increase the flexibility and reuse of code. In addition, we use white-box testing for code verification in the Model module.
Keywords: MVC, relational database, SQL, white-box testing
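A minimal sketch of how such a generator can be organized in the MVC pattern (an assumed design, not the authors' tool): the Model builds the SQL string, the View displays it, and the Controller translates user actions into model calls:

```python
# MVC-style SQL generator sketch (class names and API are hypothetical).

class QueryModel:
    """Model: holds query state and renders a SELECT statement."""
    def __init__(self, table):
        self.table = table
        self.columns, self.conditions = ["*"], []

    def select(self, *cols):
        self.columns = list(cols)
        return self

    def where(self, condition):
        self.conditions.append(condition)
        return self

    def to_sql(self):
        sql = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.conditions:
            sql += " WHERE " + " AND ".join(self.conditions)
        return sql + ";"

class QueryView:
    """View: shows the generated SQL to the user."""
    def render(self, sql):
        print("Generated SQL:", sql)

class QueryController:
    """Controller: maps user actions onto the model, then updates the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle(self, action, *args):
        getattr(self.model, action)(*args)
        self.view.render(self.model.to_sql())

ctrl = QueryController(QueryModel("employees"), QueryView())
ctrl.handle("select", "name", "salary")
ctrl.handle("where", "salary > 50000")
# Generated SQL: SELECT name, salary FROM employees WHERE salary > 50000;
```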
Procedia PDF Downloads 422