Search results for: Fuzzy Logic estimation

188 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and the estimation of taxa relative abundance. Small subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the power of this marker depends on its use at full length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers and taxonomic classifiers. We then built a pipeline called RiboTaxa to generate a highly sensitive and specific metataxonomic approach. On metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study focused on coral health involving 20 metagenomic samples (4), affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted two species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, as well as three uncultured species belonging to the genus Enorma that could be species of interest participating in the longevity process. RiboTaxa is user-friendly and rapid, allows description of microbiota structure from any environment, and produces results that are easily interpreted. This software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 123
187 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics

Authors: Janne Engblom, Elias Oikarinen

Abstract:

A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special case of panel data models is the dynamic panel data model. A complication regarding a dynamic panel data model that includes the lagged dependent variable is endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond generalized method of moments (GMM) estimator, which is an extension of the Arellano-Bond model in which past values, and different transformations of past values, of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically by using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, estimates of short-run housing price dynamics differed considerably depending on the model and the instrumenting used. In particular, the choice of instrumental variables caused variation in the model estimates and in their statistical significance. This was particularly clear when comparing estimates of OLS with different dynamic panel data models. Estimates provided by dynamic panel data models were more in line with the theory of housing price dynamics.
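A minimal numerical sketch of the instrumenting idea behind this estimator family is given below: first-differencing removes the fixed effects, and deeper lags in levels serve as instruments for the differenced lagged dependent variable. The panel dimensions and data are simulated assumptions, and the single-instrument IV shown is an Anderson-Hsiao-style simplification rather than the full Arellano-Bover/Blundell-Bond system GMM.

```python
# Minimal Anderson-Hsiao-style IV sketch of the idea behind Arellano-Bond /
# Blundell-Bond estimation: instrument the first-differenced lagged dependent
# variable with a deeper lag in levels. Illustrative only, on simulated data.
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 14, 25, 0.6          # e.g. 14 "cities", 25 years, true persistence

# Simulate y_it = rho*y_i,t-1 + mu_i + e_it
y = np.zeros((N, T))
mu = rng.normal(0, 1, N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + mu + rng.normal(0, 1, N)

# First-difference the equation to remove fixed effects:
#   dy_it = rho*dy_i,t-1 + de_it, with the level y_i,t-2 as instrument for dy_i,t-1
dy   = y[:, 3:] - y[:, 2:-1]      # dy_it
dy_1 = y[:, 2:-1] - y[:, 1:-2]    # dy_i,t-1 (endogenous regressor)
z    = y[:, 1:-2]                 # y_i,t-2 (instrument)
dy, dy_1, z = dy.ravel(), dy_1.ravel(), z.ravel()

rho_iv  = z @ dy / (z @ dy_1)     # simple IV estimator
rho_ols = dy_1 @ dy / (dy_1 @ dy_1)  # naive OLS on differences is biased
print(f"true rho = {rho}, IV estimate = {rho_iv:.3f}, naive OLS = {rho_ols:.3f}")
```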

Keywords: dynamic model, fixed effects, panel data, price dynamics

Procedia PDF Downloads 1510
186 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
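As an illustration of the modelling step described above, the sketch below fits a Random Forest regressor to a small simulated project dataset and reports feature importances; the feature names (scope_changes, material_delay, etc.) and the data-generating process are hypothetical, not the study's dataset.

```python
# Illustrative sketch: Random Forest for cost-overrun prediction on simulated data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 500
X = pd.DataFrame({
    "scope_changes":  rng.integers(0, 10, n),
    "material_delay": rng.exponential(5, n),    # days
    "contract_value": rng.lognormal(13, 1, n),  # currency units
    "crew_size":      rng.integers(5, 60, n),
})
# Hypothetical overrun (%) loosely driven by scope changes and material delays
y = 2.5 * X["scope_changes"] + 0.8 * X["material_delay"] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
print(pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False))
```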

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 40
185 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models

Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble

Abstract:

Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, while external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes. The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
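The inner building block of dynamic flux balance analysis referred to above is a linear program solved repeatedly as the culture evolves: maximize a flux of interest subject to the pseudo-steady-state constraint S v = 0 and flux bounds. A toy instance, with a hypothetical three-reaction network rather than the genome-scale E. coli or A. succinogenes models, might look like this:

```python
# Toy flux balance analysis step of the kind embedded in dynamic FBA:
# maximize a production flux subject to the pseudo-steady-state constraint S v = 0.
import numpy as np
from scipy.optimize import linprog

# Metabolites: A, B. Reactions: R1 (uptake -> A), R2 (A -> B), R3 (B -> product out)
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]], dtype=float)

c = np.array([0.0, 0.0, -1.0])            # linprog minimizes, so negate to maximize v3
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10 mmol/gDW/h (assumed)

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, " max production rate:", -res.fun)
```

In a dynamic setting, the uptake bound would itself be updated from the extracellular substrate concentration (e.g. via a Michaelis-Menten rate law) at each time step or fermentation stage.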

Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate

Procedia PDF Downloads 217
184 Neuropharmacological and Neurochemical Evaluation of Methanolic Extract of Elaeocarpus sphaericus (Gaertn.) Stem Bark by Using Multiple Behaviour Models of Mice

Authors: Jaspreet Kaur, Parminder Nain, Vipin Saini, Sumitra Dahiya

Abstract:

Elaeocarpus sphaericus has been used in the Indian traditional medicine system for the treatment of stress, anxiety, depression, palpitation, epilepsy, migraine and lack of concentration. This study was conducted to evaluate the neurological potential, such as anxiolytic, muscle relaxant and sedative activity, of the methanolic extract of Elaeocarpus sphaericus stem bark (MEESSB) in mice. Preliminary phytochemical screening and acute oral toxicity testing of MEESSB were carried out using standard methods. Anxiety was assessed by employing the Elevated Plus-Maze (EPM), the Light and Dark Test (LDT), the Open Field Test (OFT) and the Social Interaction Test (SIT). Motor coordination and sedative effects were also observed using an actophotometer, the rota-rod apparatus and ketamine-induced sleeping time, respectively. Animals were treated with different doses of MEESSB (100, 200, 400 and 800 mg/kg orally) and diazepam (2 mg/kg i.p.) for 21 days. Levels of brain neurotransmitters, namely dopamine, serotonin and nor-epinephrine, were estimated by validated methods. Preliminary phytochemical analysis of the extract revealed the presence of tannins, phytosterols, steroids and alkaloids. In the acute toxicity studies, MEESSB was found to be non-toxic, with no mortality. In the anxiolytic studies, the different doses of MEESSB showed a significant (p<0.05) effect on the EPM and LDT. In the OFT and SIT, a significant (p<0.05) increase in ambulation, rearing and social interaction time was observed. In the motor coordination test, MEESSB did not have any significant effect on the latency to fall from the rota-rod bar compared to the control group. Moreover, no significant effects on ketamine-induced sleep latency or total sleeping time were observed. Neurotransmitter estimation revealed an increased concentration of dopamine, whereas the levels of serotonin and nor-epinephrine were decreased in the mouse brain, only at the 800 mg/kg dose of MEESSB. The study has validated the folkloric use of the plant as an anxiolytic in Indian traditional medicine while also suggesting potential usefulness in the treatment of stress and anxiety without causing sedation.

Keywords: anxiolytic, behavior experiments, brain neurotransmitters, elaeocarpus sphaericus

Procedia PDF Downloads 177
183 Driving Environmental Quality through Fuel Subsidy Reform in Nigeria

Authors: O. E. Akinyemi, P. O. Alege, O. O. Ajayi, L. A. Amaghionyediwe, A. A. Ogundipe

Abstract:

Nigeria, an oil-producing developing country in Africa, is one of the many countries that have been subsidizing the consumption of fossil fuel. Despite the numerous advantages of this policy, ranging from increased energy access, fostering economic and industrial development, and protecting poor households from oil price shocks to political considerations, fuel subsidies have been found to impose economic costs, to be wasteful and inefficient, to create price distortions, to discourage investment in the energy sector and to contribute to environmental pollution. These negative consequences, coupled with the fact that the policy has not been very successful at achieving some of its stated objectives, have led a number of organisations and countries, such as the Group of 7 (G7), the World Bank, the International Monetary Fund (IMF), the International Energy Agency (IEA) and the Organisation for Economic Co-operation and Development (OECD), among others, to call for a global effort towards reforming fossil fuel subsidies. This call became necessary in view of seeking ways to harmonise existing policies which may, by design, hamper current efforts at tackling environmental concerns such as climate change, in addition to driving a green growth strategy and low carbon development in achieving sustainable development. The energy sector is identified as playing a vital role. This study thus investigates the prospects of using fuel subsidy reform as a viable tool in driving an economy that de-emphasizes carbon growth in Nigeria. The method used is the Johansen and Engle-Granger two-step co-integration procedure, applied to investigate the existence or otherwise of a long-run equilibrium relationship for the period 1971 to 2011. The theoretical framework is rooted in the Environmental Kuznets Curve (EKC) hypothesis. In developing three scenarios (subsidy payment, no subsidy payment and effective subsidy), findings from the study supported evidence of a long-run sustainable equilibrium model. Also, estimation results showed that the first and second scenarios do not significantly influence the indicator of environmental quality. The implication of this is that, in reforming fuel subsidy to drive environmental quality in an economy like Nigeria, a strong and effective regulatory framework (the measure that was interacted with fuel subsidy to yield effective subsidy) is essential.
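For readers unfamiliar with the estimation strategy, the sketch below runs the two cointegration tests named above (Engle-Granger two-step and Johansen) on simulated series; the variable names and data are placeholders, not the study's Nigerian dataset.

```python
# Hedged sketch of the Engle-Granger and Johansen cointegration checks on simulated data.
import numpy as np
from statsmodels.tsa.stattools import coint
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
T = 41                                     # e.g. annual observations, 1971-2011
common_trend = np.cumsum(rng.normal(size=T))
x = common_trend + rng.normal(scale=0.5, size=T)        # e.g. effective-subsidy proxy
y = 0.8 * common_trend + rng.normal(scale=0.5, size=T)  # e.g. environmental quality indicator

# Engle-Granger two-step test: a small p-value suggests cointegration
t_stat, p_value, _ = coint(y, x)
print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.3f}")

# Johansen trace test on the joint system
res = coint_johansen(np.column_stack([y, x]), det_order=0, k_ar_diff=1)
print("trace statistics:", res.lr1)        # compare with res.cvt critical values
```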

Keywords: environmental quality, fuel subsidy, green growth, low carbon growth strategy

Procedia PDF Downloads 328
182 Intensive Neurophysiological Rehabilitation System: New Approach for Treatment of Children with Autism

Authors: V. I. Kozyavkin, L. F. Shestopalova, T. B. Voloshyn

Abstract:

Introduction: Rehabilitation of children with autism is a pressing issue in psychiatry and neurology, owing to the constantly increasing number of children with autism spectrum disorders (ASD). Existing rehabilitation approaches in the treatment of children with autism improve their medico-social and social-psychological adjustment. Experience in treating different kinds of autistic disorders at the International Clinic of Rehabilitation (ICR) reveals the necessity of a complex, intensive approach to this condition and wider implementation of the Kozyavkin method for the treatment of children with ASD. Methods: 19 children aged 3 to 14 years were examined. They were diagnosed with autism (F84.0) with comorbid neurological pathology (from pyramidal insufficiency to para- and tetraplegia). All patients underwent rehabilitation at ICR for two weeks, where the INRS approach was used. INRS included methods such as biomechanical correction of the spine, massage, physical therapy, joint mobilization and wax-paraffin applications. These were supplemented by art therapy, ergotherapy, rhythmical group exercises, computer game therapy, team Olympic games and other methods for improving the motivation and social integration of the child. Efficacy was estimated using parent questionnaires administered twice: at the onset of the INRS rehabilitation course and two weeks afterward. For the efficacy assessment of rehabilitation of autistic children at ICR, a standardized tool was used, namely the Autism Treatment Evaluation Checklist (ATEC). This scale was selected because any rehabilitation approach for a child with autism can be assessed with it. Results: Before the onset of INRS treatment, the mean ATEC score was 64.75±9.23, which indicates severe communication, speech, socialization and behavioral impairments in the examined children. After the end of the rehabilitation course, the mean score was 56.5±6.7, which indicates positive dynamics in comparison to the onset of rehabilitation. Overall, improvement of the psychoemotional state occurred in 90% of cases. The most significant changes occurred in the domains of speech (16.5 before and 14.5 after the treatment), socialization (15.1 before and 12.5 after) and behavior (20.1 before and 17.4 after). Conclusion: As a result of the INRS rehabilitation course, a reduction of autistic symptoms was noted. In particular, improvements in speech were observed (children began to pronounce new syllables and words), signs of destructiveness decreased somewhat, the quality of contact with surrounding people improved, and new self-care skills appeared. The prospect of this study is a further, deeper examination of INRS according to evidence-based medicine standards and assessment of its usefulness in the treatment of autism and ASD.

Keywords: intensive neurophysiological rehabilitation system (INRS), international clinic of rehabilitation, ASD, rehabilitation

Procedia PDF Downloads 169
181 Digital Image Correlation: Metrological Characterization in Mechanical Analysis

Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano

Abstract:

Digital Image Correlation (DIC) is a recently developed optical technique that is spreading across all engineering sectors because it allows non-destructive estimation of the entire surface deformation without any contact with the component under analysis. These characteristics make DIC very appealing in all cases where the global deformation state is to be known without using strain gauges, which are the most commonly used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, allowing high-definition maps of displacements and deformations to be obtained. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic as well as composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states can be acquired through the cameras. Specific software analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets in both static and dynamic loading conditions by comparison between DIC and strain gauge measurements. In the static test, interesting results were obtained thanks to an excellent agreement between the two measuring techniques. In addition, the deformation detected by DIC is compliant with the result of a FEM simulation. In the dynamic test, DIC was able to follow, with good accuracy, the periodic deformation of the specimen, giving results consistent with those given by the FEM simulation. In both situations, it was seen that the DIC measurement accuracy depends on several parameters such as the optical focusing, the parameters chosen to perform the mutual correlation between the images and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with the requirements of industry, especially the aerospace industry.
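The mutual-correlation step that the software performs can be illustrated with a minimal integer-pixel subset-matching sketch; real DIC packages add sub-pixel interpolation, subset shape functions and, for 3D DIC, stereo triangulation, none of which are shown here.

```python
# Minimal sketch of the correlation step at the heart of 2D DIC: locate a reference
# subset in a deformed image by maximizing zero-normalized cross-correlation (ZNCC).
import numpy as np

def zncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

rng = np.random.default_rng(0)
ref = rng.random((200, 200))                         # speckle-like reference image
deformed = np.roll(ref, shift=(3, -2), axis=(0, 1))  # rigid shift stands in for deformation

y0, x0, half = 100, 100, 15                          # subset centre and half-width
subset = ref[y0-half:y0+half, x0-half:x0+half]

best = (-1.0, 0, 0)
for dy in range(-5, 6):                              # search window of +/- 5 pixels
    for dx in range(-5, 6):
        cand = deformed[y0+dy-half:y0+dy+half, x0+dx-half:x0+dx+half]
        score = zncc(subset, cand)
        if score > best[0]:
            best = (score, dy, dx)

print("estimated displacement (dy, dx):", best[1], best[2], " ZNCC:", round(best[0], 3))
```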

Keywords: accuracy, deformation, image correlation, mechanical analysis

Procedia PDF Downloads 311
180 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety

Authors: Atheer Al-Nuaimi, Harry Evdorides

Abstract:

Every day, many people die or are disabled or injured on roads around the world, which necessitates more specific treatments for transportation safety issues. The International Road Assessment Programme (iRAP) model is one of the comprehensive road safety models, accounting for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These star ratings are based on a star rating score calculated by the iRAP methodology from road attributes, traffic volumes and operating speeds. The outcome of the iRAP methodology is a set of treatments that can be used to improve road safety and reduce the number of fatalities and serious injuries (FSI). These countermeasures can be applied separately as a single countermeasure or combined as multiple countermeasures at a location. There is general agreement that the effectiveness of a countermeasure is liable to consistent losses when it is used in combination with other countermeasures; that is, the crash reduction estimates of individual countermeasures cannot simply be added together. The iRAP model therefore makes use of multiple countermeasure adjustment factors to predict reductions in the effectiveness of road safety countermeasures when more than one countermeasure is chosen. Multiple countermeasure correction factors are calculated for every 100-meter segment and for every accident type. However, limitations of this methodology include a probable over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models to calculate the effect of using multiple countermeasures on the number of fatalities for a location or an entire road. Regression models were used to establish relationships between crash frequencies and the factors that affect their rates. Multiple linear regression, negative binomial regression and Poisson regression techniques were used to develop models that can address the effectiveness of using multiple countermeasures. Analyses conducted using the R Project for Statistical Computing showed that a model developed with the negative binomial regression technique could give more reliable results for the predicted number of fatalities after the implementation of road safety multiple countermeasures than the results from the iRAP model. The results also showed that the negative binomial regression approach gives more precise results than the multiple linear and Poisson regression techniques because of overdispersion and standard error issues.
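A hedged sketch of the two count-data models compared in the study is given below, fitted to simulated segment-level data rather than the iRAP dataset; it shows how the negative binomial model additionally estimates a dispersion parameter that the Poisson model ignores.

```python
# Poisson vs negative binomial regression for overdispersed fatality counts (simulated).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
speed = rng.normal(70, 10, n)                    # operating speed, km/h (assumed)
n_cm  = rng.integers(0, 4, n)                    # number of countermeasures applied
X = sm.add_constant(np.column_stack([speed, n_cm]))

# Overdispersed counts: each extra countermeasure lowers the expected count
mu = np.exp(-4 + 0.05 * speed - 0.30 * n_cm)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # mean mu, extra-Poisson variance

poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
negbin  = sm.NegativeBinomial(y, X).fit(disp=False)  # also estimates dispersion alpha

print(poisson.summary().tables[1])
print(negbin.summary().tables[1])
```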

Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety

Procedia PDF Downloads 241
179 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important for energy-efficient burning of biomethane at the outlet. A few methods are widely used to reduce the water content in biogas, e.g. chiller/heat exchanger based cooling, the use of different adsorption-based approaches such as PSA, or a combination of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it, downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from about 38°C to 20°C in the summer, or even to 0°C in the winter, condensation of water vapor occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another, upward shell layer is created after the condensate drainage point on the outer side to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research work deals with the numerical modelling of biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in both channels are conjugated when the thermal resistance between the layers is low. The MATLAB programming language is used for multiphysical model development, numerical calculations and result visualization. An experimental installation of a biogas plant's vertical wall with an additional two layers of polycarbonate sheets and controlled gas flow was set up to verify the modelling results. Gas flow at the inlet/outlet, temperatures between the layers and humidity were controlled and measured during a number of experiments. Good agreement with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating the parameters of the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant's wall for various cases allows the thickness of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling of a defined system configuration with known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
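A rough order-of-magnitude sketch of the two coupled effects in the model, counterflow heat exchange and condensation of water vapor on cooling, is given below; the NTU, heat-capacity ratio and temperatures are illustrative assumptions, not values from the MATLAB model or the experiments.

```python
# Back-of-the-envelope sketch: counterflow effectiveness-NTU relation plus the water
# condensed when saturated biogas is cooled. All property values are assumptions.
import numpy as np

def counterflow_effectiveness(ntu, cr):
    """Effectiveness of a counterflow heat exchanger (cr = Cmin/Cmax < 1)."""
    return (1 - np.exp(-ntu * (1 - cr))) / (1 - cr * np.exp(-ntu * (1 - cr)))

def sat_vapour_density(t_c):
    """Saturation water-vapour density [kg/m3] via the Magnus formula and ideal gas law."""
    e_s = 611.2 * np.exp(17.62 * t_c / (243.12 + t_c))   # Pa
    return e_s / (461.5 * (t_c + 273.15))                # R_v = 461.5 J/(kg K)

# Hot biogas enters at 38 C, cold ambient side at 5 C, assumed NTU and Cr
eps = counterflow_effectiveness(ntu=2.0, cr=0.9)
t_out = 38 - eps * (38 - 5)
print(f"effectiveness = {eps:.2f}, estimated gas outlet temperature = {t_out:.1f} C")

# Water that must condense per m3 of saturated gas cooled from 38 C to t_out
condensed = max(sat_vapour_density(38) - sat_vapour_density(t_out), 0)
print(f"condensate ~= {condensed*1000:.1f} g per m3 of gas")
```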

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 550
178 Signaling Theory: An Investigation on the Informativeness of Dividends and Earnings Announcements

Authors: Faustina Masocha, Vusani Moyo

Abstract:

For decades, dividend announcements have been presumed to contain important signals about the future prospects of companies. The same has been presumed about management earnings announcements. Despite both dividend and earnings announcements being considered informative, a number of researchers have questioned their credibility and found both to contain only short-term signals. Regarding dividend announcements, some authors have argued that although they might contain important information that can result in changes in share prices, and consequently in the accumulation of abnormal returns, their degree of informativeness is lower than that of other signaling tools such as earnings announcements. Yet this claim has been refuted by other researchers, who found the effect of earnings to be transitory and of little value to shareholders, as indicated by the small abnormal returns earned during the period surrounding earnings announcements. Considering the above, it is apparent that both dividends and earnings have been hypothesized to have a signaling impact. This prompts the question of which of these two signaling tools is more informative. To answer this question, two follow-up questions were asked. The first sought to determine which event has the greater effect on share prices, while the second focused on which event influences trading volume the most. To answer the first question and evaluate the effect that each of these events has on share prices, an event study methodology was employed on a sample of the top 10 JSE-listed companies, with data collected from 2012 to 2019, to determine whether shareholders gained abnormal returns (ARs) around announcement dates. The event that resulted in the most persistent and largest ARs was considered more informative. For the second follow-up question, an investigation was conducted to determine whether dividend or earnings announcements influenced trading patterns, resulting in abnormal trading volumes (ATV) around the announcement time. The event that resulted in the greater ATV was considered more informative. Using an estimation period of 20 days, an event window of 21 days and hypothesis testing, it was found that announcements of earnings increases resulted in the largest ARs and Cumulative Abnormal Returns (CARs) and had a lasting effect, in comparison to dividend announcements, whose effect lasted only until day +3. This supports empirical arguments that the signaling effect of dividends has been diminishing. It was also found that when reported earnings declined in comparison to the previous period, there was an increase in trading volume, resulting in ATV. Although dividend announcements did result in abnormal returns, these were smaller than those earned around earnings announcements, which refutes a number of theoretical and empirical arguments that found dividends to be more informative than earnings announcements.
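The event-study calculation described above can be sketched as follows, using the market model to generate expected returns over the estimation window and cumulating abnormal returns over the event window; the return series are simulated, not JSE data.

```python
# Market-model event study sketch: 20-day estimation window, 21-day event window.
import numpy as np

rng = np.random.default_rng(7)
n_est, n_event = 20, 21
market = rng.normal(0.0005, 0.01, n_est + n_event)
stock  = 0.0002 + 1.1 * market + rng.normal(0, 0.005, n_est + n_event)
stock[n_est + 10] += 0.03                  # hypothetical announcement-day jump (day 0)

est_m, est_s = market[:n_est], stock[:n_est]
beta  = np.cov(est_s, est_m)[0, 1] / np.var(est_m, ddof=1)   # market-model slope
alpha = est_s.mean() - beta * est_m.mean()

expected = alpha + beta * market[n_est:]   # normal (expected) returns in the event window
ar  = stock[n_est:] - expected             # abnormal returns
car = ar.cumsum()                          # cumulative abnormal returns
print("AR on announcement day:", round(ar[10], 4), " CAR over window:", round(car[-1], 4))
```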

Keywords: dividend signaling, event study methodology, information content of earnings, signaling theory

Procedia PDF Downloads 175
177 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health

Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik

Abstract:

Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body. They can instead promote the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological and other indicators of the body's state. These disorders are unstable, reversible and indicative of body reactions, and they make it possible to judge objectively the internal structure of the body's adaptive reactions at the level of individual organs and systems. For the body to show a stable response to the chronic effects of unfavorable environmental factors of low intensity (compared to production environment factors), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot provide a reliable statement of the main conclusions of any work. A technique is needed that reduces methodological errors and combines mathematical logic, using statistical methods, with a medical point of view; this ultimately affects the results obtained and avoids false correlations. Objective. Development of a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet. Data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, that is, the interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined from the cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity). Correlation coefficients (r) and their reliability (t), Fisher's criterion (F) and the share of influence (R²) of the main factor (argument) on the indicator (function) were calculated as a percentage. Results. The ecological situation of an industrially developed region has an impact on health indicators, but with some nuances. Fundamentally opposite results were obtained when the mathematical data processing considered the «lag time». Namely, a pronounced correlation was revealed after the two databases (ecology and morbidity) were shifted relative to each other. For example, the lag period was 4 years for dust concentration and general morbidity, and 3 years for childhood morbidity. These periods accounted for the maximum values of the correlation coefficients and the largest share of the influencing factor. Similar results were observed for the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression and variance analysis confirms the correctness of the above statement. This method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system that considers the «lag time» is qualitatively different from the traditional one (which does not). The results differ significantly and are more amenable to a logical explanation of the obtained dependences. The method makes it possible to present the quantitative and qualitative dependences within the «environment-public health» system in a different way.
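The lag-selection step can be sketched as a simple scan over candidate lags, keeping the shift that maximizes the correlation between the environmental series and the morbidity series; the series below are simulated stand-ins for the Kazhydromet and regional morbidity data.

```python
# Simplified lag scan: shift morbidity against exposure and keep the strongest correlation.
import numpy as np

rng = np.random.default_rng(5)
years = 25
dust = rng.normal(0.3, 0.05, years)                 # annual mean dust concentration
lag_true = 4
morbidity = np.r_[rng.normal(500, 20, lag_true),    # first years unrelated to exposure
                  800 * dust[:-lag_true] + rng.normal(0, 10, years - lag_true)]

def corr_by_lag(exposure, outcome, max_lag=6):
    results = {}
    for lag in range(max_lag + 1):
        x = exposure[: len(exposure) - lag] if lag else exposure
        y = outcome[lag:]
        results[lag] = np.corrcoef(x, y)[0, 1]
    return results

corrs = corr_by_lag(dust, morbidity)
lag_hat = max(corrs, key=lambda k: abs(corrs[k]))
print({k: round(v, 2) for k, v in corrs.items()})
print("selected lag (years):", lag_hat)
```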

Keywords: ecology, morbidity, population, lag time

Procedia PDF Downloads 82
176 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression is the problem to solve, while variety classification is the problem to solve in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
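Two of the chemometric preprocessing steps listed above, SNV and Savitzky-Golay filtering, can be sketched as follows; the array dimensions are hypothetical and the spectra are simulated, not the NIR-HSI datasets described in the abstract.

```python
# SNV and Savitzky-Golay preprocessing applied per pixel spectrum (simulated cube).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
n_pixels, n_bands = 1000, 224            # hypothetical NIR-HSI cube flattened to 2D
spectra = np.abs(rng.normal(0.5, 0.1, (n_pixels, n_bands))).cumsum(axis=1)

# Standard normal variate: centre and scale each spectrum individually
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Savitzky-Golay smoothing and first derivative along the wavelength axis
smoothed   = savgol_filter(snv, window_length=11, polyorder=2, axis=1)
derivative = savgol_filter(snv, window_length=11, polyorder=2, deriv=1, axis=1)

print(smoothed.shape, derivative.shape)
```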

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 100
175 Estimating Understory Species Diversity of West Timor Tropical Savanna, Indonesia: The Basis for Planning an Integrated Management of Agricultural and Environmental Weeds and Invasive Species

Authors: M. L. Gaol, I. W. Mudita

Abstract:

Indonesia is well known as a country covered by lush tropical rain forest, but in fact, in the northeastern part of the country, within the area geologically known as the Lesser Sunda, the dominant vegetation is tropical savanna. The Lesser Sunda is a chain of islands located closer to Australia than to islands in the other parts of the country. Among the islands in the chain closest to Australia, and thereby most strongly affected by the hot and dry Australian climate, is the island of Timor, the western part of which belongs to Indonesia while the eastern part is the sovereign state of East Timor. Despite being the dominant vegetation cover, the tropical savanna of West Timor, especially its understory, is rarely investigated. This research was therefore carried out to investigate the structure, composition and diversity of the understory of this tropical savanna as the basis for examining the possibility of introducing other species for various purposes. For this research, 14 terrestrial communities representing the major types of existing savannas in West Timor were selected with the aid of the most recently available satellite imagery. In each community, one 50 m x 50 m stand most likely representing the community was chosen as the site of observation for the type of savanna under investigation. At each of the 14 communities, 20 plots of 1 m x 1 m were placed at random to identify understory species, count the total number of individuals and estimate the cover of each species. Based on these counts and estimates, the importance value of each species was later calculated. The results of this research indicated that the understory of the savanna in West Timor consisted of 73 species, of which 18 are grasses and 55 are non-grasses. Although fewer in species number than non-grasses, grass species dominated the savanna, as indicated by their share of individuals (65.33% vs 34.67%), species cover (57.80% vs 42.20%) and importance value (123.15 vs 76.85). Across the 14 communities, the lowest grass density was 13.50/m² and the highest was 417.50/m². Of the 18 grass species found, all are commonly encountered as agricultural weeds, whereas of the 55 non-grass species, 10 are commonly encountered as agricultural weeds, environmental weeds or invasive species. In terms of better managing the savanna in the region, these findings provide the basis for planning a more integrated approach to managing such agricultural and environmental weeds as well as invasive species by considering the structure, composition and species diversity of the understory species existing at each site. These findings also provide the basis for better understanding the flora of the region as a whole and for developing a flora database of West Timor in the future.
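Since the reported grass and non-grass importance values sum to 200, they appear to combine relative density and relative cover; a minimal sketch of that calculation, with hypothetical numbers for a single community, is given below.

```python
# Importance value from plot counts and cover estimates (hypothetical community data).
import pandas as pd

plots = pd.DataFrame({
    "species":     ["grass_A", "grass_B", "herb_C"],
    "individuals": [250, 120, 80],       # summed over the 20 one-square-metre plots
    "cover_pct":   [38.0, 22.0, 15.0],   # summed cover estimates
})

plots["rel_density"] = 100 * plots["individuals"] / plots["individuals"].sum()
plots["rel_cover"]   = 100 * plots["cover_pct"]   / plots["cover_pct"].sum()
plots["importance_value"] = plots["rel_density"] + plots["rel_cover"]   # sums to 200 overall

print(plots[["species", "rel_density", "rel_cover", "importance_value"]].round(1))
```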

Keywords: tropical savanna, understory species, integrated management, weedy and invasive species

Procedia PDF Downloads 136
174 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array

Authors: Yanping Liao, Zenan Wu, Ruigang Zhao

Abstract:

Frequency diverse array (FDA) beamforming is a technology developed in recent years, and its antenna pattern has a unique angle-distance-dependent characteristic. However, the beam is always required to have strong concentration, high resolution and a low sidelobe level to form point-to-point interference in the area of interest. In order to eliminate the angle-distance coupling of the traditional FDA and to make the beam energy more concentrated, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset to improve the array structure and frequency offset of the traditional FDA. The simulation results show that the beam pattern of the array can form a dot-shaped beam with more concentrated energy, and its resolution and sidelobe level performance are improved. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem, in which the estimation error of the covariance matrix causes beam distortion, so that the output pattern cannot form a dot-shaped beam; it also exhibits main-lobe deviation and high sidelobe levels in the limited-snapshot case. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows: first, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then the eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the performance of beamforming. The theoretical analysis and simulation results show that the proposed algorithm can make the multi-carrier FDA form a dot-shaped beam with limited snapshots, reduce the sidelobe level, improve the robustness of beamforming, and achieve better performance.
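A hedged sketch of the covariance-correction idea is given below: eigendecompose the sample covariance estimated from a limited number of snapshots, apply an exponential correction to the small noise-subspace eigenvalues, and form minimum-variance weights. The correction rule and the correction-index value used here are generic placeholders rather than the authors' exact formulation.

```python
# Eigenvalue-corrected minimum-variance beamforming sketch for a limited-snapshot case.
import numpy as np

rng = np.random.default_rng(2)
M, K = 10, 20                              # array elements, limited snapshots
a = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(10)))    # desired steering vector

# Snapshots: desired signal + strong interference + noise
s = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
ai = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(-40)))
i = 3 * (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K)))
X = np.outer(a, s) + np.outer(ai, i) + noise

R = X @ X.conj().T / K                     # sample covariance (poorly estimated for small K)
vals, V = np.linalg.eigh(R)

gamma = 0.5                                # correction index (assumed value)
thresh = 0.1 * vals.max()
corrected = np.where(vals < thresh, thresh * (vals / thresh) ** gamma, vals)
R_corr = (V * corrected) @ V.conj().T      # rebuild covariance with corrected spectrum

w = np.linalg.solve(R_corr, a)
w = w / (a.conj() @ w)                     # minimum-variance normalization: w^H a = 1
print("output power toward the look direction:", float(np.abs(w.conj() @ R @ w)))
```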

Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust

Procedia PDF Downloads 131
173 The Role of Risk Attitudes and Networks on the Migration Decision: Empirical Evidence from the United States

Authors: Tamanna Rimi

Abstract:

A large body of literature has discussed the determinants of the migration decision. However, the potential role of individual risk attitudes in the migration decision has so far been overlooked. The migration literature has studied how the expected income differential influences migration flows for a risk-neutral individual. However, migration also takes place when there is no expected income differential, or even when the variability of income appears lower than in the current location. This migration puzzle motivates a recent trend in the literature that analyzes how attitudes towards risk influence the decision to migrate. However, the significance of risk attitudes for the migration decision has been addressed mostly from a theoretical perspective in the mainstream migration literature. Labor market outcomes and the overall economy are largely influenced by migration in many countries. Therefore, attitudes towards risk as a determinant of migration should receive more attention in empirical studies. To the author's best knowledge, this is the first study to examine the relationship between relative risk aversion and the migration decision in the US. This paper considers movement across the United States as the measure of migration. In addition, it explores the network effect of the size of one's own ethnic group in a source location on the migration decision, and how attitudes towards risk vary with this network effect. Two ethnic groups (Asian and Hispanic) are considered in this regard. For the empirical estimation, this paper uses two sources of data: 1) U.S. census data for social, economic, and health research, 2010 (IPUMS), and 2) the University of Michigan Health and Retirement Study, 2010 (HRS). In order to measure relative risk aversion, this study uses the ‘Two Sample Two-Stage Instrumental Variable (TS2SIV)’ technique, which is similar to the ‘Two Sample Instrumental Variable (TSIV)’ technique of Angrist (1990) and Angrist and Krueger (1992). Using a probit model, the empirical investigation yields the following results: (i) risk attitude has a significantly large impact on the migration decision, with more risk-averse people being less likely to migrate; (ii) the impact of risk attitude on migration varies with other demographic characteristics such as age and sex; (iii) people living in places with a higher concentration of households of the same ethnicity are expected to migrate less from their current place; (iv) the effect of risk attitudes on migration varies with the network effect. The overall findings of this paper, relating risk attitude, the migration decision and the network effect, are a significant contribution addressing the gap between migration theory and empirical study in the migration literature.
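The final estimation step, a probit model of the migration decision, can be sketched as follows on simulated data; the regressors stand in for the imputed risk-aversion measure and the co-ethnic network variable, and the TS2SIV first stage is not reproduced.

```python
# Probit sketch of the migration decision on simulated data (not the IPUMS/HRS sample).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 5000
risk_aversion = rng.normal(0, 1, n)          # e.g. imputed relative risk aversion
ethnic_share  = rng.uniform(0, 0.4, n)       # share of own ethnic group at origin
age           = rng.integers(20, 65, n)

# Assumed: more risk-averse people and denser co-ethnic networks migrate less
latent = 0.2 - 0.5 * risk_aversion - 1.5 * ethnic_share - 0.01 * age + rng.normal(0, 1, n)
migrate = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([risk_aversion, ethnic_share, age]))
fit = sm.Probit(migrate, X).fit(disp=False)
print(fit.summary())
print(fit.get_margeff().summary())           # average marginal effects
```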

Keywords: migration, network effect, risk attitude, U.S. market

Procedia PDF Downloads 164
172 Family Firm Internationalization: Identification of Alternative Success Pathways

Authors: Sascha Kraus, Wolfgang Hora, Philipp Stieg, Thomas Niemand, Ferdinand Thies, Matthias Filser

Abstract:

In most countries, small and medium-sized enterprises (SMEs) are the backbone of the economy due to their impact on job creation, innovation and wealth creation. Moreover, ongoing globalization makes it inevitable, even for SMEs that traditionally focused on their domestic markets, to internationalize their business activities in order to realize further growth and survive in international markets. Thus, internationalization has become one of the most common growth strategies for SMEs and has received increasing scholarly attention over the last two decades. On the downside, internationalization can also be regarded as the most complex strategy that a firm can undertake. For family firms in particular, which are often characterized by limited financial capital, a risk-averse nature and limited growth aspirations, it could be argued that they are more likely to face greater challenges when taking the pathway to internationalization. In particular, the triangulation of family, ownership and management (so-called ‘familiness’) manifests itself in a unique behavior and decision-making process, which is often characterized by the importance given to noneconomic goals and distinguishes a family firm from other businesses. Taking this into account, the concept of socio-emotional wealth (SEW) has been developed to describe the behavior of family firms. In order to investigate how different internal and external firm characteristics shape the internationalization success of family firms, we drew on a sample of 297 small and medium-sized family firms from Germany, Austria, Switzerland and Liechtenstein. We included SEW as an essential family firm characteristic and added entrepreneurial orientation (EO) and absorptive capacity (AC) as two major intra-organizational characteristics, as well as collaboration intensity (CI) and relational knowledge (RK) as two major external network characteristics. Based on previous research, we assume that these characteristics are important for explaining the internationalization success of family firm SMEs. Regarding the data analysis, we applied Fuzzy Set Qualitative Comparative Analysis (fsQCA), an approach that allows configurations of firm characteristics to be identified and is specifically used to study complex causal relationships where traditional regression techniques reach their limits. Results indicate that several combinations of these family firm characteristics can lead to international success, with no single key characteristic permanently required. Instead, there are many roads family firms can walk down to achieve internationalization success. Consequently, our data indicate that family-owned SMEs are heterogeneous and that internationalization is a complex and dynamic process. Results further show that network-related characteristics occur in all sets and thus represent an essential element in the internationalization process of family-owned SMEs. The contribution of our study is twofold, as we investigate different forms of international expansion for family firms and how to improve them. First, we broaden the understanding of the intersection between family firm and SME internationalization with respect to major intra-organizational and network-related variables. Second, from a practical perspective, we offer family firm owners a basis for building the internal capabilities needed to achieve international success.
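Two building blocks of fsQCA can be sketched as follows: direct calibration of raw scores into fuzzy-set membership using three anchors, and the consistency and coverage of a candidate configuration. The anchors, variables and data are hypothetical, and the truth-table minimization performed by fsQCA software is not shown.

```python
# fsQCA building blocks: direct (log-odds) calibration, then consistency and coverage.
import numpy as np

def calibrate(x, full_non, crossover, full_in):
    """Map raw scores to [0, 1] membership using the log-odds direct calibration method."""
    x = np.asarray(x, dtype=float)
    scale_hi = np.log(0.95 / 0.05) / (full_in - crossover)
    scale_lo = np.log(0.95 / 0.05) / (crossover - full_non)
    log_odds = np.where(x >= crossover,
                        (x - crossover) * scale_hi,
                        (x - crossover) * scale_lo)
    return 1.0 / (1.0 + np.exp(-log_odds))

rng = np.random.default_rng(8)
eo  = calibrate(rng.normal(5, 1.5, 297), full_non=2, crossover=5, full_in=8)  # entrepreneurial orientation
net = calibrate(rng.normal(4, 1.5, 297), full_non=1, crossover=4, full_in=7)  # network characteristic
success = calibrate(rng.normal(5, 2.0, 297), full_non=1, crossover=5, full_in=9)

# Configuration "high EO AND strong network": fuzzy AND is the minimum
config = np.minimum(eo, net)
consistency = np.sum(np.minimum(config, success)) / np.sum(config)
coverage    = np.sum(np.minimum(config, success)) / np.sum(success)
print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```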

Keywords: entrepreneurial orientation, family firm, fsQCA, internationalization, socio-emotional wealth

Procedia PDF Downloads 242
171 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

The industrial application of classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional neural networks and recursive neural networks has shown a significant improvement in prediction accuracy in a variety of applications. The ability to identify isotope type and activity from spectral information depends on feature extraction methods followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectral energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. Feature extraction by neural networks is nonlinear and involves a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
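The spectrogram preprocessing stage can be sketched as a windowed Fourier transform of a detector waveform; the pulse shape, sampling rate and noise level below are assumptions standing in for the Geant4-simulated CdTe signals.

```python
# Spectrogram preprocessing sketch: windowed STFT of a simulated detector trace.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(4)
fs = 1_000_000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)

# Toy pulse train convolved with an exponential tail, plus readout noise
trace = np.zeros_like(t)
trace[::2500] = 1.0
trace = np.convolve(trace, np.exp(-np.arange(200) / 40.0), mode="same")
trace += rng.normal(0, 0.05, t.size)

f, times, Sxx = spectrogram(trace, fs=fs, window="hann", nperseg=256, noverlap=128)
features = np.log1p(Sxx)                         # log-scaled time-frequency features
print("spectrogram shape (freq bins, time frames):", features.shape)
```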

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 151
170 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation

Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau

Abstract:

In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space-time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package ‘lori’ (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.

Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa

Procedia PDF Downloads 158
169 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS

Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan

Abstract:

Analysis of stresses plays an important role in the optimization of structures, and prior stress estimation helps in better design of products. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio. Especially in the aircraft industry, the usage of composites is greater due to their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. Composite materials have the drawback of delamination and debonding because the bond materials are weaker than the parent materials, so proper analysis should be done on composite joints before using them in practical conditions. In the present work, a composite plate with an elastic pin is considered for analysis using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered element SOLID46 for the composite plate and the solid element SOLID45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter of the geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on joint strength: the deflection and the load share of the pin increase, while other parameters such as overall stress, pin stress and contact pressure decrease due to the smaller load on the plate material. The material effect shows that a higher Young's modulus material has less deflection, but the other parameters increase. Interference analysis shows increases in overall stress, pin stress and contact stress along with the pin bearing load. This increase should be understood properly when seeking to increase the load-carrying capacity of the joint. Generally, every structure is preloaded to increase the compressive stress in the joint and thereby its load-carrying capacity, but for composites the stress increase should be properly analysed because of delamination and debonding effects caused by failure of the bond materials. When the results for an isotropic combination are compared with the composite joint, the isotropic joint shows more uniform results with lower values for all parameters. This is mainly due to the applied layer angle combinations. All the results are presented with the necessary pictorial plots.

Keywords: bearing force, frictional force, finite element analysis, ANSYS

Procedia PDF Downloads 334
168 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and the prediction is usually obtained with much more accuracy for a single component than for assemblies. For structural dynamics studies in the low and middle frequency range in particular, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs run nonlinear analysis in the time domain and treat the whole structure as nonlinear, even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and obtains the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and discussing the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated; secondly, the nonlinear response is obtained through the nonlinear SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems: the first is a two-DOF spring-mass-damper system, and the second is a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is also very attractive from a computational point of view because the FEM needs to be run only once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
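The sketch below is not the authors' NL SDMM implementation, but it illustrates the underlying idea on a two-DOF spring-mass-damper with a localized cubic spring: the linear FRF matrix is assembled once, and at each frequency the cubic term is replaced by an amplitude-dependent equivalent stiffness (a describing-function approximation) and iterated to convergence. All numerical values are illustrative.

```python
import numpy as np

# Two-DOF system with a localized cubic spring on DOF 1 (illustrative values).
M = np.diag([1.0, 1.5])                        # kg
K = np.array([[4.0e4, -2.0e4],
              [-2.0e4, 5.0e4]])                # N/m
C = 0.002 * K                                  # light proportional damping
k3 = 5.0e9                                     # N/m^3, cubic stiffness at DOF 1
F = np.array([10.0, 0.0])                      # harmonic force amplitude, N

def linear_frf(w, dK=None):
    """Receptance matrix of the (modified) linear system at circular frequency w."""
    Kmod = K if dK is None else K + dK
    return np.linalg.inv(Kmod - w**2 * M + 1j * w * C)

freqs = np.linspace(1.0, 60.0, 600) * 2 * np.pi
x1 = np.zeros_like(freqs)
for i, w in enumerate(freqs):
    amp = abs((linear_frf(w) @ F)[0])          # start from the linear response
    for _ in range(50):                        # fixed-point update of k_eq(amplitude)
        k_eq = 0.75 * k3 * amp**2              # describing function of the x^3 term
        dK = np.zeros((2, 2)); dK[0, 0] = k_eq
        new_amp = abs((linear_frf(w, dK) @ F)[0])
        if abs(new_amp - amp) < 1e-9:
            break
        amp = 0.5 * amp + 0.5 * new_amp        # relaxation for numerical stability
    x1[i] = amp
# x1 now holds the approximate nonlinear frequency response at DOF 1.
```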

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 266
167 Dynamic Analysis of Commodity Price Fluctuation and Fiscal Management in Sub-Saharan Africa

Authors: Abidemi C. Adegboye, Nosakhare Ikponmwosa, Rogers A. Akinsokeji

Abstract:

For many resource-rich developing countries, fiscal policy has become a key tool for short-run macroeconomic management, since it is considered to play a critical role in injecting part of resource rents into the economy. However, given its instability, reliance on revenue from commodity exports renders fiscal management, budgetary planning and the efficient use of public resources difficult. In this study, the linkage between commodity prices and fiscal operations is investigated for a sample of commodity-exporting countries in sub-Saharan Africa (SSA). The main question is whether commodity price fluctuations affect the effectiveness of fiscal policy as a macroeconomic stabilization tool in these countries. Fiscal management effectiveness is considered as the ability of fiscal policy to react countercyclically to output gaps in the economy. Fiscal policy is measured as the ratio of the fiscal deficit to GDP and the ratio of government spending to GDP, the output gap is measured as a Hodrick-Prescott filter of output growth for each country, and commodity prices are associated with each country based on its main export commodity. Given the dynamic nature of fiscal policy effects on the economy over time, a dynamic framework is devised for the empirical analysis, and the panel cointegration and error correction methodology is used to explain the relationships. In particular, the study employs the panel ECM technique to trace the short-term effects of commodity prices on fiscal management and the fully modified OLS (FMOLS) technique to determine the long-run relationships. These procedures provide sufficient estimation of the dynamic effects of commodity prices on fiscal policy. The data cover the period 1992 to 2016 for 11 SSA countries. The study finds that the elasticity of the fiscal policy measures with respect to the output gap is significant and positive, suggesting that fiscal policy is actually procyclical among the countries in the sample. This implies that fiscal management in these countries follows the trend of economic performance. Moreover, fiscal policy has not performed well in delivering macroeconomic stabilization for these countries. The difficulty in applying fiscal stabilization measures is attributable to the unstable revenue inflows caused by the highly volatile nature of commodity prices in the international market. For commodity-exporting countries in SSA to improve fiscal management, therefore, fiscal planning should be largely decoupled from commodity revenues, domestic revenue bases must be improved, and a longer-term perspective should be adopted in fiscal policy management.
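As a compact illustration of the cyclicality test (not the authors' ECM/FMOLS estimation), the sketch below constructs an output gap with a Hodrick-Prescott filter and regresses a fiscal ratio on it with country fixed effects; all data and names are simulated placeholders, and a positive, significant gap coefficient would indicate procyclical policy.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(1)
countries = [f"C{i}" for i in range(11)]
years = np.arange(1992, 2017)

frames = []
for c in countries:
    growth = 3.0 + 0.1 * np.cumsum(rng.normal(0.0, 1.2, len(years)))     # simulated output growth
    gap, _trend = hpfilter(pd.Series(growth, index=years), lamb=6.25)    # annual-data smoothing parameter
    deficit_gdp = 0.4 * gap.values + rng.normal(0.0, 1.0, len(years))    # procyclical by construction
    frames.append(pd.DataFrame({"country": c, "year": years,
                                "gap": gap.values, "deficit_gdp": deficit_gdp}))
panel = pd.concat(frames, ignore_index=True)

# Country fixed effects via dummies; standard errors clustered by country.
X = pd.get_dummies(panel[["gap", "country"]], columns=["country"], drop_first=True)
X = sm.add_constant(X.astype(float))
groups = panel["country"].astype("category").cat.codes
fe = sm.OLS(panel["deficit_gdp"], X).fit(cov_type="cluster", cov_kwds={"groups": groups})
print(fe.params["gap"])   # a positive, significant coefficient indicates procyclicality
```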

Keywords: commodity prices, ECM, fiscal policy, fiscal procyclicality, fully modified OLS, sub-Saharan Africa

Procedia PDF Downloads 166
166 A Dynamic Cardiac Single Photon Emission Computed Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve

Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick

Abstract:

Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and assessed visually for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information of the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented, and a standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm, and the acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values, and CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF values estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), while the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is therefore a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
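In simplified form (and not the authors' reconstruction and kinetic-modelling pipeline), the quantitative step can be illustrated by fitting a one-tissue-compartment model to a tissue time-activity curve and taking the stress-to-rest ratio of the fitted uptake parameter; the curves and parameters below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 300, 5.0)                        # s, frame mid-times
dt = t[1] - t[0]
aif = 50 * (t / 30.0) * np.exp(-t / 30.0)         # synthetic arterial input function

def one_tissue(t, K1, k2):
    """C_t(t) = K1 * integral of Ca(tau) * exp(-k2 (t - tau)) dtau (discrete approximation)."""
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(aif, kernel)[: len(t)] * dt

def fit_flow(tac):
    (K1, k2), _ = curve_fit(one_tissue, t, tac, p0=[0.5, 0.1],
                            bounds=([0.0, 0.0], [5.0, 5.0]))
    return K1                                     # K1 is proportional to flow x extraction

rng = np.random.default_rng(3)
rest_tac = one_tissue(t, 0.8, 0.12) + rng.normal(0, 0.5, len(t))
stress_tac = one_tissue(t, 2.0, 0.12) + rng.normal(0, 0.5, len(t))

mbf_rest, mbf_stress = fit_flow(rest_tac), fit_flow(stress_tac)
cfr = mbf_stress / mbf_rest                       # coronary flow reserve = stress/rest
print(round(mbf_rest, 2), round(mbf_stress, 2), round(cfr, 2))
```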

Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin

Procedia PDF Downloads 152
165 Some Quality Parameters of Selected Maize Hybrids from Serbia for the Production of Starch, Bioethanol and Animal Feed

Authors: Marija Milašinović-Šeremešić, Valentina Semenčenko, Milica Radosavljević, Dušanka Terzić, Ljiljana Mojović, Ljubica Dokić

Abstract:

Maize (Zea mays L.) is one of the most important cereal crops and, as such, one of the most significant naturally renewable carbohydrate raw materials for the production of energy and a multitude of different products. The main goal of the present study was to investigate the suitability of selected maize hybrids of different genetic backgrounds, produced at the Maize Research Institute 'Zemun Polje', Belgrade, Serbia, for starch, bioethanol and animal feed production. All the hybrids are commercial, and their detailed characterization is important for the expansion of their different uses. The starches were isolated using a 100-g laboratory maize wet-milling procedure. Hydrolysis experiments were done in two steps (liquefaction with Termamyl SC and saccharification with SAN Extra L). Starch hydrolysates obtained by the two-step hydrolysis of the corn flour starch were subjected to fermentation by S. cerevisiae var. ellipsoideus under semi-anaerobic conditions. Digestibility based on enzymatic solubility was determined by the Aufréré method. The investigated ZP maize hybrids had very different physical characteristics and chemical compositions, which allows various possibilities for their use. The amount of hard (vitreous) and soft (floury) endosperm in the kernel is considered one of the most important parameters influencing the starch and bioethanol yields. Hybrids with a lower test weight and density and a greater proportion of the soft endosperm fraction had a higher yield, recovery and purity of starch. Among the chemical composition parameters, only starch content significantly affected the starch yield. Starch yields of the studied maize hybrids ranged from 58.8% in ZP 633 to 69.0% in ZP 808. The lowest bioethanol yield of 7.25% w/w was obtained for hybrid ZP 611k and the highest for hybrid ZP 434 (8.96% w/w). A very significant correlation was determined between kernel starch content and the bioethanol yield, as well as volumetric productivity at 48 h (r = 0.66). The results also showed that the NDF, ADF and ADL contents in the whole maize plant of the observed ZP hybrids varied from 40.0% to 60.1%, 18.6% to 32.1%, and 1.4% to 3.1%, respectively. The difference in the digestibility of the dry matter of the whole plant between hybrids ZP 735 and ZP 560 amounted to 18.1%, and the differences in the contents of the lignocellulose fractions affected the differences in dry matter digestibility. From the results, it can be concluded that the genetic background of the selected maize hybrids plays an important part in estimating their technological value for various purposes. The results obtained are of exceptional importance for breeding programs and the selection of the maize hybrids potentially most suitable for starch, bioethanol and animal feed production.

Keywords: bioethanol, biomass quality, maize, starch

Procedia PDF Downloads 222
164 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey reveals that reported methods rely on solid-phase extraction and liquid-liquid extraction, which are highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL; the coefficient of determination was greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality-control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels; the mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article uses a simple protein precipitation extraction technique with ultraviolet detection for quantification. It is simple and robust for fast high-throughput sample analysis with low analysis cost for aminophylline in biological samples. No interfering peaks were observed at the elution times of aminophylline and the internal standard, and the method had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
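For context, linearity and accuracy figures of this kind are typically obtained by fitting a calibration curve of peak-area ratio versus nominal concentration and back-calculating the QC samples; the sketch below shows that arithmetic on invented numbers, not the study's data.

```python
import numpy as np

# Invented calibration data over the validated 0.5-40 ug/mL range.
nominal = np.array([0.5, 1, 2, 5, 10, 20, 40], dtype=float)          # ug/mL
area_ratio = np.array([0.031, 0.060, 0.118, 0.305, 0.598, 1.21, 2.39])

slope, intercept = np.polyfit(nominal, area_ratio, 1)                 # unweighted linear fit
pred = slope * nominal + intercept
r2 = 1 - np.sum((area_ratio - pred) ** 2) / np.sum((area_ratio - area_ratio.mean()) ** 2)

back_calc = (area_ratio - intercept) / slope
accuracy_pct = 100 * back_calc / nominal   # typically required within 85-115% (80-120% at the LLOQ)
print(f"r^2 = {r2:.4f}")
print(np.round(accuracy_pct, 1))
```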

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 223
163 Digitalization, Economic Growth and Financial Sector Development in Africa

Authors: Abdul Ganiyu Iddrisu

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. The significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa, because extant studies that explicitly evaluate the digitization-economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment: obstacles to access to financing, for instance physical distance, minimum balance requirements and low income flows, among others, can be circumvented; savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector; however, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies maintain that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions; this nexus is rarely examined empirically in the literature. Secondly, we examine the effect of domestic credit to the private sector and stock market capitalization as a percentage of GDP, used to proxy financial sector development, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa; second, financial sector development enhances economic growth; finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, the net effects suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also unconditionally contributes to the growth of the continent's economies.
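A Hausman-type comparison of fixed- and random-effects estimates, which usually precedes a Hausman-Taylor specification, can be sketched as below on simulated data; the code is illustrative and does not reproduce the paper's estimation.

```python
# Sketch of a fixed- vs random-effects comparison (Hausman-type statistic) on
# simulated panel data; illustrative only, not the paper's Hausman-Taylor estimation.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PanelOLS, RandomEffects

rng = np.random.default_rng(7)
n, t = 30, 15
idx = pd.MultiIndex.from_product([range(n), range(t)], names=["country", "year"])
alpha = rng.normal(0.0, 1.0, n)                               # country effects
digit = rng.normal(0.0, 1.0, (n, t)) + 0.5 * alpha[:, None]   # regressor correlated with the effects
growth = 2.0 + 0.4 * digit + alpha[:, None] + rng.normal(0.0, 1.0, (n, t))
df = pd.DataFrame({"growth": growth.ravel(), "digit": digit.ravel()}, index=idx)

fe = PanelOLS(df["growth"], df[["digit"]], entity_effects=True).fit()
re = RandomEffects(df["growth"], sm.add_constant(df[["digit"]])).fit()

# Hausman statistic on the common slope (chi-squared with 1 df here);
# note the variance difference can turn negative in small samples.
b_diff = fe.params["digit"] - re.params["digit"]
v_diff = fe.cov.loc["digit", "digit"] - re.cov.loc["digit", "digit"]
hausman = b_diff**2 / v_diff
print(fe.params["digit"], re.params["digit"], hausman)  # a large value favours FE / instrumented RE
```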

Keywords: digitalization, economic growth, financial sector development, Africa

Procedia PDF Downloads 103
162 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and the time to market of business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which can affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer implies more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35 MB). The system is also more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 180
161 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)

Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar

Abstract:

Erosion modeling equations are typically derived from controlled experimental trials for solid particles entrained in single-phase or multi-phase flows, and these equations are then employed to predict the erosion damage caused by the continuous impacts of solid particles entrained in the stream flow. It is also well known that the particle impact angle and velocity do not change drastically in gas-sand flow erosion prediction; hence, erosion can be predicted accurately. In contrast, high-density fluid flows such as water flowing through complex geometries such as sand screens greatly affect the sand particles' trajectories and consequently affect the erosion rate predictions. Particle tracking models and erosion equations are therefore frequently applied together as a method to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) was simulated using the realizable k-epsilon model, and the secondary phase (solid particles), at a 5% flow concentration, was tracked with the discrete phase model (DPM). To accomplish the erosion modeling, three erosion equations from the literature were introduced into ANSYS Fluent to predict the velocity surge in the screen wire slots and estimate the maximum erosion rates on the screen surface. Results for the turbulent kinetic energy, turbulence intensity, dissipation rate, total pressure on the screen, screen wall shear stress and flow velocity vectors are presented and discussed. Moreover, the particle tracks and path lines are demonstrated based on their residence time, velocity magnitude and flow turbulence. On one hand, results from the erosion equations show similarities in screen erosion patterns, locations and DPM concentrations; on the other hand, the model equations estimate slightly different values of the maximum erosion rate of the wire-wrapped screen. This is because the erosion equations were developed under assumptions dictated by the experimental laboratory conditions.
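The erosion equations referred to above commonly share the form "erosion ratio = constant x impact-angle function x impact velocity^n"; the sketch below evaluates one generic equation of that form over a few particle impacts, with illustrative constants and an illustrative angle function rather than those of the three published models used in the study.

```python
import numpy as np

# Generic erosion model of the common form ER = K * f(theta) * V**n, where ER is
# mass of wall material removed per mass of impacting sand. K, n and f(theta)
# are illustrative placeholders, not the published models used in the study.
K, n = 2.0e-9, 2.6

def angle_function(theta_deg):
    # Illustrative ductile-type response peaking near a 30-degree impact angle.
    return np.exp(-((theta_deg - 30.0) / 25.0) ** 2)

def erosion_rate(mass_flux, theta_deg, velocity):
    """kg of wall material removed per m^2 per s for a stream of impacts."""
    return mass_flux * K * angle_function(theta_deg) * velocity ** n

# Example: a sand mass flux of 0.5 kg/m^2/s hitting the screen wire at 20-60 degrees,
# with impact speeds taken, for instance, from DPM particle tracks.
impacts_theta = np.array([20.0, 35.0, 60.0])   # degrees
impacts_speed = np.array([4.0, 6.5, 9.0])      # m/s
rates = erosion_rate(0.5, impacts_theta, impacts_speed)
print(rates, "-> peak erosion:", rates.max())
```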

Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow

Procedia PDF Downloads 163
160 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles

Authors: Nozar Kishi, Babak Kamrani, Filmon Habte

Abstract:

Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, on average, more than 10 tropical cyclones that come within damaging reach every year, as well as earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC's (Karen Clark and Company) catastrophe models consist of four modular segments: (1) stochastic event sets that represent the statistics of past events, (2) hazard attenuation functions that model the local intensity, (3) vulnerability functions that relate the hazard experienced by local buildings to their repair need, and (4) a financial module that applies policy conditions to estimate the resulting losses. The events module is composed of events (faults or tracks) of different intensities with corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01 and 0.004. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a super-stratified sampling approach that is based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as concrete confined in steel (SRC, Steel-Reinforced Concrete) and high-rise buildings.
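Downstream of these four modules, results are typically summarized as an average annual loss and an exceedance-probability curve over the stochastic event set; the sketch below shows that bookkeeping on a toy event table with invented rates and losses.

```python
import numpy as np

# Toy stochastic event table: annual occurrence rate and modelled loss per event.
rates = np.array([0.02, 0.01, 0.004, 0.002, 0.001])      # events per year (invented)
losses = np.array([5e8, 1.2e9, 3.0e9, 6.5e9, 1.1e10])    # loss per event, JPY (invented)

aal = np.sum(rates * losses)                             # average annual loss

# Occurrence exceedance probability per loss level, assuming Poisson arrivals:
# P(at least one event with loss >= L in a year) = 1 - exp(-sum of rates of events with loss >= L).
order = np.argsort(losses)[::-1]
cum_rate = np.cumsum(rates[order])
oep = 1.0 - np.exp(-cum_rate)

for L, p in zip(losses[order], oep):
    print(f"loss >= {L:.2e} JPY exceeded with annual probability {p:.4f}")
print(f"average annual loss: {aal:.3e} JPY")
```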

Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM

Procedia PDF Downloads 271
159 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050

Authors: Farzaneh Sasanpour, Saeed Amini Varaki

Abstract:

Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; given the characteristics of these risks, adopting an approach that responds to sensitive conditions in the risk management process makes the resilience of cities essential. To date, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. During the COVID-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune to the crisis caused by its effects and consequences and faced many challenges. One of the methods that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in nature; its general design is descriptive-analytical, and because it seeks to relate the components and indicators of urban resilience to pandemic crises and to explain scenarios, its futures-studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), cross-impact structural analysis and the MICMAC software were used. The primary factors and variables affecting the resilience of Tehran's urban areas were organized into five main factors, including physical-infrastructural (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables, based on the cross-impact analysis. The key factors and variables were then categorized into five main areas: managerial-institutional (5 variables), technology/smartness (3 variables), economic (2 variables), socio-cultural (3 variables) and physical-infrastructural (7 variables). These key factors and driving forces were used to explain and develop the scenarios. To develop the scenarios for the resilience of Tehran's urban areas against pandemic crises (COVID-19), intuitive-logic scenario planning, one of the futures-research methods, and the Global Business Network (GBN) model were used. Four scenarios were then drawn up and selected using the metaphor of weather conditions, indicating the general outline of the conditions of the Tehran metropolis in each situation: (1) the solar scenario (optimal governance and management, leading in smart technology), (2) the cloudy scenario (optimal governance and management, following in smart technology), (3) the dark scenario (unfavorable governance and management, leading in smart technology) and (4) the storm scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation and the storm scenario the worst situation for the Tehran metropolis.
According to the findings of this research, city managers can use futures-research methods to form a coherent picture of all the factors and components of urban resilience against pandemic crises with a long-term horizon of 2050, chart the path of the city's resilience, and provide platforms for upgrading and increasing the capacity to deal with crises, thereby creating the conditions needed for the realization, development and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability across all dimensions and levels.

Keywords: futures research, resilience, crisis, pandemic, COVID-19, Tehran

Procedia PDF Downloads 68