Search results for: Penman-Monteith method (FAO-56 PM)
13299 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System
Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii
Abstract:
Today, dairy farm experts and farmers widely recognize the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production and feeding management, serve as an indicator of health abnormalities, and even help manage healthy calving times. Traditionally, BCS is assessed by animal experts or trained technicians through visual observation of the pin bones, thurl and hook area, tail head shape, hook angles, and the short and long ribs. Because this technique is manual and subjective, it can yield inconsistent scores and is not cost-effective. This paper therefore proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks, or anatomical points, are automatically extracted from the cow image region using image processing techniques. Specifically, there are 23 anatomical points in the regions of the ribs, hook bones, pin bones, thurl, and tail head; these points are extracted using block-region-based vertical and horizontal histogram methods. According to animal experts, body condition scores depend mainly on the shape structure of these regions. The second module therefore investigates algebraic and geometric properties of the extracted anatomical points. Specifically, a second-order polynomial regression is fitted to a subset of the anatomical points, and the resulting regression coefficients are used as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head, and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained using a Markov classification process to assign a BCS to individual cows.
The assigned BCS is then revised using a multiple regression method to produce the final score for each dairy cow. To confirm the validity of the proposed method, a monitoring video camera was set up at the rotary milking parlor to take top-view images of the cows. The proposed method extracts the key anatomical points and the corresponding feature vector for each individual cow; the multiple regression calculator and the Markov chain classification process are then used to produce the estimated body condition score for each cow. Experimental results on 100 dairy cows from a self-collected dataset and a public benchmark dataset are very promising, with an accuracy of 98%.
Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression
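The angle features described in the abstract can be sketched directly: given three extracted anatomical points, the angle at a landmark such as the hook bone is the angle between the two rays leaving it. A minimal Python sketch follows; the coordinates and the `angle_at` helper are hypothetical illustrations, not part of the paper's actual pipeline.

```python
import math

def angle_at(p, a, b):
    """Angle in degrees at vertex p formed by the rays p->a and p->b.

    Illustrates the angle features (thurl, pin, tail head, hook bone)
    mentioned in the abstract; the point names are hypothetical.
    """
    v1 = (a[0] - p[0], a[1] - p[1])
    v2 = (b[0] - p[0], b[1] - p[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Right-angle check: rays along +x and +y from the origin give 90 degrees.
print(angle_at((0, 0), (1, 0), (0, 1)))
```

Such angles, together with the polynomial regression coefficients, would then be concatenated into the feature vector passed to the classifier.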
Procedia PDF Downloads 161
13298 Thermosonic Devulcanization of Waste Ground Rubber Tires by Quaternary Ammonium-Based Ternary Deep Eutectic Solvents and the Effect of α-Hydrogen
Authors: Ricky Saputra, Rashmi Walvekar, Mohammad Khalid
Abstract:
Landfills, water contamination, and toxic gas emissions are a few of the environmental impacts caused by the increasing number of waste rubber tires (WRT). Despite the severity of the issue, only minimal effort has gone into reclaiming or recycling these wastes, as the resulting products are generally not profitable for companies. Unlike typical reclamation processes, devulcanization selectively cleaves the sulfidic bonds within vulcanizates while avoiding the polymeric scissions that compromise an elastomer's mechanical and tensile properties. The process also produces devulcanizates that are reprocessable, much like virgin rubber. A devulcanizing agent is usually required. In the current study, novel and sustainable ammonium chloride-based ternary deep eutectic solvents (TDES) with different numbers of α-hydrogens were used to devulcanize ground rubber tire (GRT), in an effort to apply green chemistry to the problem. 40-mesh GRT was soaked for 1 day in the different TDESs, sonicated at 37-80 kHz for 60-120 min, and heated at 100-140 °C for 30-90 min. The devulcanizates were then filtered, dried, and evaluated for the degree of devulcanization by means of the Flory-Rehner calculation and the swelling index. The results show that an increasing number of α-hydrogens increases the degree of devulcanization; the value achieved was around eighty percent, thirty percentage points higher than the typical industrial autoclave method. The bonds in the devulcanizates were also analysed by Fourier transform infrared spectroscopy (FTIR), Horikx fitting, and thermogravimetric analysis (TGA). The former two confirm that the GRT underwent only sulfidic scission during the treatment, while the latter shows that carbon-chain scission was absent or negligible.
Keywords: ammonium, sustainable, deep eutectic solvent, α-hydrogen, waste rubber tire
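The Flory-Rehner evaluation mentioned above can be illustrated in a few lines. This is a hedged sketch: the crosslink-density expression is the standard Flory-Rehner relation, but the interaction parameter, solvent molar volume, and rubber volume fractions below are illustrative placeholders, not the study's measured values.

```python
import math

def crosslink_density(v_r, chi=0.391, v_s=108.5):
    """Flory-Rehner crosslink density (mol/cm^3) from the rubber volume
    fraction v_r of the swollen gel. chi (polymer-solvent interaction
    parameter) and v_s (molar volume of toluene, cm^3/mol) are typical
    literature values used only for illustration."""
    return -(math.log(1 - v_r) + v_r + chi * v_r ** 2) / (
        v_s * (v_r ** (1 / 3) - v_r / 2))

def devulcanization_pct(nu_treated, nu_untreated):
    """Degree of devulcanization as the relative drop in crosslink density."""
    return (1 - nu_treated / nu_untreated) * 100

nu0 = crosslink_density(0.35)   # untreated GRT (hypothetical v_r)
nu1 = crosslink_density(0.12)   # after TDES treatment (hypothetical v_r)
print(round(devulcanization_pct(nu1, nu0), 1))
```

A lower rubber volume fraction after treatment (stronger swelling) implies fewer crosslinks and hence a higher degree of devulcanization.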
Procedia PDF Downloads 127
13297 Measurement of CES Production Functions Considering Energy as an Input
Authors: Donglan Zha, Jiansong Si
Abstract:
Because of its flexibility, the CES production function attracts much interest in economic growth and programming models, as well as in macroeconomic and micro-macro models. This paper focuses on the development of, and estimation methods for, the CES production function with energy as an input. We leave for future research the relaxation of the constant-returns-to-scale assumption, the introduction of potential input factors, and a general method for finding the optimal nested form of multi-factor production functions.
Keywords: bias of technical change, CES production function, elasticity of substitution, energy input
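A three-factor CES production function with energy as an input can be written down directly. The sketch below uses the textbook non-nested form with illustrative parameter values (the distribution parameters, substitution parameter, and scale are assumptions, not estimates from the paper):

```python
def ces_output(K, L, E, A=1.0, deltas=(0.4, 0.4, 0.2), rho=0.5):
    """Three-factor CES production function with energy as an input:
    Y = A * (d1*K^-rho + d2*L^-rho + d3*E^-rho)^(-1/rho).
    Parameter values are illustrative, not estimates from the paper."""
    d1, d2, d3 = deltas
    return A * (d1 * K ** -rho + d2 * L ** -rho + d3 * E ** -rho) ** (-1 / rho)

sigma = 1 / (1 + 0.5)  # elasticity of substitution implied by rho = 0.5
print(round(ces_output(100, 50, 20), 2), round(sigma, 3))
```

Under constant returns to scale, doubling all three inputs doubles output, which is the assumption the abstract says is maintained (and left to future work to relax).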
Procedia PDF Downloads 282
13296 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models
Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach
Abstract:
In the paper, a hierarchical approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on the statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity, which is extended here to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity with a constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which makes it possible to analyze composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to a problem consisting only of the original inhomogeneity and its interphase. It is more general than definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various interphase models to be considered. This is illustrated for spherical particles with two interphase models: the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the equivalent inhomogeneities so defined are determined by the method of conditional moments.
Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different nanoparticle radii) and on the nanoparticle radius (for a fixed volume fraction) is compared across the interphase models and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model
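For orientation, any effective bulk modulus produced by such a homogenization scheme must fall between the classical Voigt and Reuss bounds. The sketch below computes these bounds for a glass/epoxy example; it is explicitly not the method of conditional moments used in the paper, only the elementary bracket an estimate should respect, with illustrative moduli and volume fraction.

```python
def voigt_reuss_bounds(f, K_p, K_m):
    """Classical Voigt (upper) and Reuss (lower) bounds on the effective
    bulk modulus of a two-phase composite with particle volume fraction f.
    This is NOT the method of conditional moments from the paper -- only
    a simple bracket that any homogenization estimate must fall within."""
    K_voigt = f * K_p + (1 - f) * K_m          # arithmetic (parallel) mean
    K_reuss = 1 / (f / K_p + (1 - f) / K_m)    # harmonic (series) mean
    return K_reuss, K_voigt

# Glass particles (K ~ 43 GPa) in epoxy (K ~ 5 GPa), 30 vol% (illustrative).
lo, hi = voigt_reuss_bounds(0.3, 43.0, 5.0)
print(round(lo, 2), round(hi, 2))
```

Interphase effects in the paper's model shift the effective moduli within (and, for very small particles, noticeably away from the interphase-free position inside) this bracket.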
Procedia PDF Downloads 186
13295 Passenger Preferences on Airline Check-In Methods: Traditional Counter Check-In Versus Common-Use Self-Service Kiosk
Authors: Cruz Queen Allysa Rose, Bautista Joymeeh Anne, Lantoria Kaye, Barretto Katya Louise
Abstract:
The study presents passengers' preferences regarding the quality of service provided by the two airline check-in methods currently present in airports: traditional counter check-in and common-use self-service kiosks. One study has shown that airlines perceive self-service kiosks alone as sufficient to ensure adequate service and customer satisfaction; in contrast, agents and passengers state that kiosks alone are not enough and that human interaction is essential. With reference to these former studies, which established opposing views on the more favorable check-in method, the purpose of this study is to present a recommendation that fills the gap between the conflicting ideas by comparing the perceived quality of service through the RATER model. Furthermore, this study discusses the major competencies present in each method, supported by two theories: the FIRO Theory of Needs, which upholds the importance of inclusion, control, and affection, and Queueing Theory, which points to passenger discipline and queue length as important factors affecting service quality. The findings of the study are based on data gathered from selected Thomasian third- and fourth-year college students enrolled in the first semester of academic year 2014-2015 who had already experienced both airline check-in methods, selected by stratified probability sampling. The statistical treatments applied to interpret the data were the mean, frequency, standard deviation, t-test, logistic regression, and chi-square test.
The study ultimately revealed that passenger preference is more strongly driven by the satisfaction experienced at common-use self-service kiosks than by that experienced at traditional counter check-in.
Keywords: traditional counter check-in, common-use self-service kiosks, airline check-in methods
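One of the treatments listed above, the chi-square test, can be sketched for a preference question. The counts below are hypothetical, not the study's data; 3.841 is the standard 5% critical value at 1 degree of freedom for a 50/50 null.

```python
def chi_square_preference(n_kiosk, n_counter):
    """Chi-square goodness-of-fit test of check-in preference against a
    50/50 null hypothesis -- one of the statistical treatments listed in
    the abstract. Counts are hypothetical."""
    n = n_kiosk + n_counter
    expected = n / 2
    stat = ((n_kiosk - expected) ** 2 / expected
            + (n_counter - expected) ** 2 / expected)
    return stat, stat > 3.841  # 5% critical value, 1 degree of freedom

stat, significant = chi_square_preference(230, 170)
print(round(stat, 2), significant)
```

A statistic above the critical value indicates the observed split departs significantly from indifference between the two check-in methods.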
Procedia PDF Downloads 408
13294 E-Learning Platform for School Kids
Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.
Abstract:
E-learning is a crucial component of intelligent education, and even in the midst of a pandemic it is becoming increasingly important in the educational system. Several e-learning programs are accessible to students; here, we decided to create an e-learning framework for children. We have identified a few issues that teachers face with their online classes. When there are numerous students in an online classroom, how does a teacher recognize a student's focus on academics and below-the-surface behaviors? Some kids do not pay attention in class, and others nap; the teacher is unable to keep track of each and every student. A key challenge in e-learning is online examinations, because students can cheat easily during online exams, so exam proctoring is needed. Here we propose an automated online exam cheating detection method using a web camera. The purpose of this project is also to present an e-learning platform for math education that includes games for kids as an alternative teaching method for math students. The game will be accessible via a web browser, with imagery drawn in a cartoonish style; this will help students learn math through games. Everything in this day and age is moving towards automation; however, automatic answer evaluation is only available for MCQ-based questions. As a result, checkers have a difficult time evaluating theory solutions: the current approach requires more manpower, takes a long time to evaluate responses, and may even mark two identical responses differently, yielding two different grades. This application therefore employs machine learning techniques to provide an automatic evaluation of subjective responses, based on keywords provided to the computer alongside the student input, resulting in a fair distribution of marks. In addition, it will save time and manpower.
We used deep learning, machine learning, image processing and natural language technologies to develop these research components.
Keywords: math, education games, e-learning platform, artificial intelligence
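The keyword-based marking idea described above can be reduced to a toy sketch. The real system uses machine learning and NLP; this fragment only illustrates the underlying keyword-matching step, and the answer text, keyword list, and `keyword_score` helper are hypothetical.

```python
def keyword_score(answer, keywords, max_marks=10):
    """Toy version of keyword-based evaluation of a subjective answer:
    the share of expected keywords found in the answer determines the
    mark. The described system uses ML/NLP on top of this idea; this
    sketch only shows the matching step."""
    words = set(answer.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return round(max_marks * hits / len(keywords), 1)

answer = "Photosynthesis converts light energy into chemical energy in chloroplasts"
print(keyword_score(answer, ["photosynthesis", "light", "chlorophyll", "chloroplasts"]))
```

Because the same keyword list is applied to every response, two identical answers necessarily receive identical marks, addressing the inconsistency problem noted in the abstract.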
Procedia PDF Downloads 157
13293 The Role of Financial and Non-Financial Institutions in Promoting Entrepreneurship in Micro, Small and Medium Enterprises
Authors: Lemuel David
Abstract:
The importance of the Micro, Small, and Medium Enterprises (MSME) sector is well recognized for its legitimate contribution to the macroeconomic objectives of the Republic of Liberia, such as generation of employment, exports, and the enhancement of entrepreneurship. At present, medium and small enterprises account for about 99 percent of the industrial units in the country, contributing 60 percent of manufacturing sector output and approximately one-third of the nation's exports. Various financial institutions, like ECO bank, and non-financial institutions, like Bearch Limited, play a unique role in promoting the growth of these enterprises: a small enterprise or entrepreneur receives many types of assistance from different institutions, for varied purposes, over the course of the entrepreneurial journey. This paper focuses on the factors of financial and non-financial institutional support to entrepreneurs in the growth of medium and small enterprises in the Republic of Liberia. Its significance lies in informing policy and institutional support for these enterprises by documenting entrepreneurs' views of the financial and non-financial support systems in the country. The study was carried out through a survey method, using questionnaires. The population consisted of all medium and small enterprises registered during the years 2004-2014 in the Republic of Liberia. Simple random sampling was employed, with a sample size of 400. Data were collected using a standard questionnaire consisting of two parts: the first part covered the profile of the respondents; the second part covered (1) financial promotional factors and (2) non-financial promotional factors.
The results of the study are based on the financial and non-financial support activities provided by institutions to medium and small enterprises. The investigation found no difference between the support given by financial institutions and that given by non-financial institutions. Entrepreneurs perceived collateral-free schemes and physical infrastructure support as the factors contributing most to the entry and growth of medium and small enterprises.
Keywords: micro, small, and medium enterprises, financial institutions, entrepreneurship
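The sample size of 400 is consistent with the common Slovin formula at a 5% margin of error for a large registered population. The abstract does not state which formula was actually applied, so the sketch below is only a plausible reconstruction of how such a figure is typically arrived at.

```python
import math

def slovin_sample_size(population, margin_of_error=0.05):
    """Slovin's formula n = N / (1 + N * e^2), a common route to survey
    sample sizes like the 400 used in this study (the abstract does not
    say which formula was actually used)."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

print(slovin_sample_size(100000))  # approaches 1/e^2 = 400 as N grows
```

For very large populations the formula converges to 1/e^2, i.e. exactly 400 respondents at a 5% margin of error.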
Procedia PDF Downloads 102
13292 Assessment of the Impact of Regular Pilates Exercises on Static Balance in Healthy Adult Women: Preliminary Report
Authors: Anna Słupik, Krzysztof Jaworski, Anna Mosiołek, Dariusz Białoszewski
Abstract:
Background: Maintaining correct body balance is essential in the prevention of falls in the elderly, which is especially important for women because of postmenopausal osteoporosis and the serious consequences of falls. One exercise method that is very popular among adults and that may positively affect body balance is the Pilates method. The aim of the study was to evaluate the effect of regular Pilates exercise on the ability to maintain body balance under static conditions in healthy adult women. Material and methods: The study group consisted of 20 healthy women attending Pilates twice a week for at least 1 year; the control group consisted of 20 healthy, physically inactive women. Only women aged 35 to 50 years without musculoskeletal or other pain qualified for the groups. Body balance was assessed using the MatScan VersaTek platform with the Sway Analysis Module, based on MatScan Clinical 6.7 software. Balance was evaluated under the following conditions: standing on both feet with eyes open, standing on both feet with eyes closed, and one-leg standing (separately on the right and left foot) with eyes open. Each test lasted 30 seconds. The following parameters were calculated: the estimated size of the 95% confidence ellipse, the distance covered by the Center of Gravity (COG), the size of the maximum shifts in the sagittal and frontal planes, and the load distribution between the left and right foot as well as between the rear- and forefoot. Results: A significant difference was found between the groups, in favor of the study group, in the size of the confidence ellipse and the maximum shifts of the COG in the sagittal plane during standing on both feet, with eyes both open and closed (p < 0.05).
In one-leg standing on either leg with eyes open, there was a significant difference in favor of the study group in the size of the confidence ellipse and the size of the maximum shifts in both the sagittal and frontal planes (p < 0.05). There were no differences in the load distribution between the right and left foot (standing on both feet), nor between the fore- and rearfoot (standing on both feet or on one leg). Conclusions: 1. Static balance in women exercising regularly by the Pilates method is better than in inactive women, which may in the future prevent falls and their consequences. 2. The observed differences in maintaining balance in the frontal plane in one-leg standing may indicate a positive impact of Pilates exercises on the ability to maintain global balance on a reduced support surface. 3. The Pilates method can be used as a form of preventive therapy for people expected to have balance problems in the future, for example in chronic neurological disorders or vestibular problems. 4. The results show that further prospective randomized research on a larger and more representative group is needed.
Keywords: balance exercises, body balance, pilates, pressure distribution, women
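The posturographic quantities reported above (path length, maximum sagittal and frontal shifts, 95% confidence ellipse area) can be computed from a centre-of-pressure trace. A simplified sketch, assuming a list of (x, y) samples in metres; the trace values below are invented for illustration, not MatScan data.

```python
import math

def sway_metrics(cop):
    """Basic static-balance metrics from a centre-of-pressure trace:
    path length, max anteroposterior (sagittal) and mediolateral
    (frontal) excursions, and 95% confidence ellipse area.
    A simplified sketch of the quantities reported in the study."""
    path = sum(math.dist(cop[i], cop[i + 1]) for i in range(len(cop) - 1))
    xs = [p[0] for p in cop]
    ys = [p[1] for p in cop]
    max_ml = max(xs) - min(xs)   # mediolateral shift (frontal plane)
    max_ap = max(ys) - min(ys)   # anteroposterior shift (sagittal plane)
    n = len(cop)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in cop) / (n - 1)
    # ellipse area = pi * chi2(0.95, df=2) * sqrt(det(covariance))
    area = math.pi * 5.991 * math.sqrt(max(sxx * syy - sxy ** 2, 0.0))
    return path, max_ap, max_ml, area

trace = [(0.00, 0.00), (0.01, 0.02), (-0.01, 0.01), (0.00, -0.02)]
path, ap, ml, area = sway_metrics(trace)
print(round(path, 4), round(ap, 2), round(ml, 2))
```

Smaller ellipse areas and excursions correspond to the better static balance observed in the Pilates group.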
Procedia PDF Downloads 320
13291 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow
Authors: Musa Akdere, Gunnar Seide, Thomas Gries
Abstract:
Carbon fibres are fibrous materials with a carbon content of more than 90%. They combine excellent mechanical properties with a very low density; carbon fibre reinforced plastics (CFRP) are therefore very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During carbon fibre production, the precursor has to be thermally stabilized to withstand the temperatures of up to 1500 °C that occur during carbonization. Even though carbon fibre has been used in aerospace applications since the late 1970s, there is still no general method for finding the optimal production parameters, and trial and error is most often the only recourse. To gain better insight into the process, the chemical reactions during stabilization have to be analyzed in detail. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre passing through all zones of stabilization. The fibre bundle is modeled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modeled as a heat source; differential scanning calorimetry measurements have been performed to estimate the heat of the reactions. To shorten the required simulation time, the number of fibres is reduced by similitude theory. Experiments were conducted on a pilot-scale stabilization oven to validate the simulated fibre temperatures during stabilization, and a new measuring method was developed to measure the fibre bundle temperature.
The comparison of the results shows that the developed simulation model gives good approximations for the temperature profile of the fibre bundle during the stabilization process.
Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface
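The two thermal mechanisms named in the abstract, an exothermic heat source plus convective exchange with the surrounding air, can be illustrated with a lumped-capacitance energy balance. This is a deliberately simplified sketch: all coefficients below are invented placeholders, not the DSC-derived reaction heats or oven parameters of the actual model.

```python
def fibre_temperature(t_end, dt=0.01, T0=293.0, T_air=513.0,
                      q_reaction=40.0, hA_over_mc=0.5):
    """Lumped-capacitance sketch of the two mechanisms in the model:
    exothermic reaction heat (source term q, K/s) and convective
    exchange with air:  dT/dt = q + k * (T_air - T).
    Integrated with explicit Euler; all coefficients are illustrative."""
    T = T0
    for _ in range(int(t_end / dt)):
        T += dt * (q_reaction + hA_over_mc * (T_air - T))
    return T

# The fibre settles ABOVE the oven air temperature because of the
# exothermic source: steady state is T_air + q/k = 513 + 80 = 593 K.
print(round(fibre_temperature(60.0), 1))
```

The overshoot above the zone temperature is exactly why the exothermic reactions must be modelled: an uncontrolled heat release can locally overheat the bundle.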
Procedia PDF Downloads 276
13290 Drug Delivery to Solid Tumor: Effect of Dynamic Capillary Network Induced by Tumor
Authors: Mostafa Sefidgar, Kaamran Raahemifar, Hossein Bazmara, Madjid Soltani
Abstract:
Computational methods provide the means to investigate drug delivery processes such as the convection and diffusion of drug in the extracellular matrix and drug extravasation from the microvasculature. This information clarifies the mechanisms of drug delivery from the injection site to absorption by a solid tumor. In this study, an advanced numerical method is used to solve the fluid flow and solute transport equations simultaneously, to show how the capillary network structure induced by a tumor affects drug delivery. The effect of the heterogeneous tumor-induced capillary network on interstitial fluid flow and drug delivery is investigated by this multiscale method. A sprouting angiogenesis model is used to generate the tumor-induced capillary network. The fluid flow governing equations are implemented to calculate blood flow through the tumor-induced capillary network and fluid flow in normal and tumor tissues. Starling's law is used to close this system of equations and couple the intravascular and extravascular flows. Finally, a convection-diffusion-reaction equation is used to simulate drug delivery. For a more realistic treatment, a dynamic approach is adopted that changes the capillary network structure based on signals sent by hemodynamic and metabolic stimuli. The study indicates that drug delivery to solid tumors depends on the tumor-induced capillary network structure. The dynamic approach generates an irregular capillary network around the tumor and predicts a higher interstitial pressure in the tumor region. This elevated interstitial pressure, together with the irregular capillary network, leads to a heterogeneous distribution of drug in the tumor region, similar to in vivo observations.
The investigation indicates that the drug transport properties play a significant role in overcoming the physiological barriers to drug delivery to a solid tumor.
Keywords: solid tumor, physiological barriers to drug delivery, angiogenesis, microvascular network, solute transport
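Starling's law, used above to couple the intravascular and extravascular flows, can be written out explicitly. The parameter values below are illustrative order-of-magnitude choices (pressures in Pa), not the paper's calibrated parameters.

```python
def starling_flux(Lp, S, p_v, p_i, sigma, pi_v, pi_i):
    """Starling's law for transvascular fluid flux from a microvessel,
    the relation used to couple intravascular and interstitial flow:
    Jv = Lp * S * ((p_v - p_i) - sigma * (pi_v - pi_i)),
    with hydraulic conductivity Lp, surface area S, vascular/interstitial
    pressures p_v/p_i, reflection coefficient sigma and oncotic
    pressures pi_v/pi_i. Values used below are illustrative."""
    return Lp * S * ((p_v - p_i) - sigma * (pi_v - pi_i))

# Elevated interstitial pressure p_i in the tumour cuts the driving force:
normal = starling_flux(Lp=1e-7, S=1.0, p_v=2660, p_i=0,
                       sigma=0.9, pi_v=2670, pi_i=1330)
tumour = starling_flux(Lp=1e-7, S=1.0, p_v=2660, p_i=2000,
                       sigma=0.9, pi_v=2670, pi_i=1330)
print(normal > tumour)
```

The comparison mirrors the paper's finding: the elevated interstitial pressure predicted in the tumour region suppresses (here even reverses) extravasation, hindering drug delivery.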
Procedia PDF Downloads 314
13289 Assessment of Sperm Aneuploidy Using Advanced Sperm FISH Technique in Infertile Patients
Authors: Archana S., Usha Rani G., Anand Balakrishnan, Sanjana R., Solomon F., Vijayalakshmi J.
Abstract:
Background: There is evidence that male factors contribute to infertility in up to 50% of couples who are evaluated and treated using advanced assisted reproductive technologies. Genetic abnormalities, including sperm chromosome aneuploidy as well as structural aberrations, are among the major causes of male infertility, and recent advances in technology expedite the evaluation of sperm aneuploidy. The purpose of the study was to determine the prevalence of sperm aneuploidy in infertile males and the degree of association between DNA fragmentation and sperm aneuploidy. Methods: In this study, 75 infertile men were included and divided into four abnormal groups (oligospermia, teratospermia, asthenospermia, and oligoasthenoteratospermia (OAT)). Normozoospermic men with children served as the control group. The fluorescence in situ hybridization (FISH) method was used to test for sperm aneuploidy, and the sperm chromatin dispersion assay (SCDA) was used to measure sperm DNA fragmentation. Spearman's correlation coefficient was used to evaluate the relationship between sperm aneuploidy and sperm DNA fragmentation, along with age; p < 0.05 was regarded as significant. Results: The 75 participants' ages varied from 28 to 48 years (35.5±5.1). The percentages of spermatozoa bearing X and Y were 48.92% and 51.18% (nuc ish (CEP X x 1) [100] and nuc ish (CEP Y x 1) [100]), which was statistically significant (p < 0.05). Compared to the rate of DNA fragmentation, infertile males showed a greater frequency of sperm aneuploidy. Sex chromosome aneuploidy in the asthenospermia and OAT groups was significantly correlated (p < 0.05). Conclusion: Sperm FISH and SCDA assay results showed an increased sperm aneuploidy frequency and DNA fragmentation index in infertile men compared with fertile men.
A significant relationship was observed between sperm aneuploidy and DNA fragmentation in OAT patients. In the evaluation of male-factor and idiopathic infertility, the sperm FISH screening method can be used as a valuable diagnostic tool.
Keywords: male infertility, DFI (DNA fragmentation assay), SCD (sperm chromatin dispersion), ART (artificial reproductive technology), trisomy, aneuploidy, FISH (fluorescence in-situ hybridization), OAT (oligoasthenoteratospermia)
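The statistic used above, Spearman's rank correlation, can be computed without any statistics library for tie-free data. The per-patient values below are hypothetical; only the formula rho = 1 - 6*sum(d^2)/(n*(n^2-1)) is standard.

```python
def spearman(x, y):
    """Spearman rank correlation, the statistic used in the study to
    relate sperm aneuploidy to DNA fragmentation (tie-free data assumed
    for this sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

aneuploidy = [1.2, 2.5, 3.1, 4.0, 5.6]  # % aneuploid sperm (hypothetical)
dfi = [12.0, 15.0, 21.0, 25.0, 30.0]    # DNA fragmentation index (hypothetical)
print(spearman(aneuploidy, dfi))  # 1.0 for perfectly monotone data
```

A coefficient near +1, as in this invented example, is the pattern the study reports for the OAT group.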
Procedia PDF Downloads 56
13288 The Investigation of Oil Price Shocks by Using a Dynamic Stochastic General Equilibrium: The Case of Iran
Authors: Bahram Fathi, Karim Alizadeh, Azam Mohammadbagheri
Abstract:
The aim of this paper is to investigate the role of oil price shocks in explaining business cycles in Iran using a dynamic stochastic general equilibrium (DSGE) approach. The model incorporates both productivity and oil revenue shocks. The results indicate that productivity shocks are relatively more important to business cycles than oil shocks. The model with two shocks produces different values for volatility, but these values have the same ranking as the actual data for most variables; in addition, the actual data are close to the ratios of standard deviations to output obtained from the model with two shocks. The model with only a productivity shock produces the figures most similar in volatility magnitude to the actual data. Next, we use impulse response functions (IRF) to evaluate the capability of the model. The IRF shows no effect of an oil shock on the capital stock or on labor hours, which is a feature of the model: when the log-linearized system of equations is solved numerically, investment and labor hours are not functions of the oil shock. This research recommends using different techniques to test the model's robustness. One way to do this is to make all decision variables functions of the oil shock by inducing stationarity in the model differently; another is to impose a bond adjustment cost. This study intends to fill that gap. To achieve this objective, we first derive a DSGE model that allows for world oil price and productivity shocks. Second, we calibrate the model to the Iranian economy. Next, we compare the moments from the theoretical model, with both single and multiple shocks, with those obtained from the actual data, to see the extent to which business cycles in Iran can be explained by the total oil revenue shock.
Then, we use an impulse response function to evaluate the role of world oil price shocks. Finally, we present the implications of the findings and interpret them in accordance with economic theory.
Keywords: oil price, shocks, dynamic stochastic general equilibrium, Iran
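The impulse response analysis mentioned above rests on the standard AR(1) specification of DSGE shock processes, which can be sketched directly. The persistence value 0.9 is illustrative, not the paper's calibrated estimate.

```python
def impulse_response(rho, periods=10, shock=1.0):
    """Impulse response of an AR(1) shock process z_t = rho*z_{t-1} + e_t,
    the standard way DSGE productivity and oil-revenue shocks are
    specified. A one-time unit innovation decays geometrically at rate
    rho; rho = 0.9 is an illustrative value."""
    z, path = shock, []
    for _ in range(periods):
        path.append(z)
        z *= rho
    return path

irf = impulse_response(0.9, periods=5)
print([round(v, 3) for v in irf])  # [1.0, 0.9, 0.81, 0.729, 0.656]
```

In the full model, each endogenous variable's IRF is a linear combination of such decaying shock paths; the paper's point is that investment and labor hours load with zero weight on the oil shock.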
Procedia PDF Downloads 439
13287 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
Semi-infinite programming (SIP) problems are part of non-classical optimization: problems in which the number of variables is finite but the number of constraints is infinite. Most algorithms for semi-infinite programming reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific goals of investors, and the risk of the entire portfolio may be less than the risks of the individual investments within it. For example, suppose we invest M euros in N shares for a specified period. Let yi > 0 be the end-of-period return per euro invested in stock i (i = 1, ..., N). The goal is then to determine the amount xi to be invested in stock i, i = 1, ..., N, such that we maximize the end-of-period value yᵀx, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio is the best portfolio in the risk-return trade-off, one that meets the investor's goals and risk tolerance; investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. Since the investment returns are uncertain, we have a semi-infinite programming problem. We solve this semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique at every iteration in the search for the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution problems, and others, is based on optimality criteria for quasi-optimal functions.
As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
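The reduction of the semi-infinite problem to a finite one, mentioned above, can be illustrated in miniature: discretize the uncertain return vector into finitely many scenarios and maximize the worst-case portfolio value. The two-asset grid search below is a pedagogical sketch with hypothetical return scenarios, not the paper's outer approximation algorithm.

```python
def best_worst_case_portfolio(scenarios, budget=1.0, steps=10):
    """Max-min portfolio choice after discretizing the uncertain return
    vector y into finitely many scenarios -- the reduction of a
    semi-infinite problem to a finite one described in the abstract.
    Two assets, brute-force grid over weights for clarity; a real solver
    (e.g. an outer approximation / LP method) replaces this search."""
    best_x, best_val = None, float("-inf")
    for k in range(steps + 1):
        x = (budget * k / steps, budget * (1 - k / steps))
        worst = min(y[0] * x[0] + y[1] * x[1] for y in scenarios)
        if worst > best_val:
            best_x, best_val = x, worst
    return best_x, best_val

# Asset 1 is volatile, asset 2 is stable (hypothetical return scenarios).
scenarios = [(1.5, 1.0), (0.7, 1.1)]
x, val = best_worst_case_portfolio(scenarios)
print(x, round(val, 4))
```

Note how diversification wins: the worst-case-optimal portfolio mixes the two assets rather than holding either one alone, which is the portfolio-level risk reduction the abstract alludes to.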
Procedia PDF Downloads 309
13286 Modeling the International Economic Relations Development: The Prospects for Regional and Global Economic Integration
Authors: M. G. Shilina
Abstract:
The phenomenon of interstate economic interaction is complex. 'Economic integration', as one of its types, can be explored through the prisms of international law, theories of the world economy, politics, and international relations; the most objective study of the phenomenon requires a comprehensive multifactorial approach. In the new geopolitical reality, the problems of the coexistence and possible interconnection of the various mechanisms of interstate economic interaction are actively discussed. Currently, the states of the Eurasian continent are moving in the direction of economic integration. At the same time, the existing fragmentation of international economic law in Eurasia is seen as an important problem. The Eurasian space is characterized by various types of interstate relations: international agreements (multilateral and bilateral) and a large number of cooperation formats (from discussion platforms to organizations aimed at deep integration). Their harmonization requires a clear vision of the options for the phased regulation of international economic relations. Under conditions of rapid development of international economic relations, modeling (including prognostic modeling) is the scientific method best suited to presenting the phenomenon; on the basis of this method, it is possible to form a vision of the current situation and of the best options for further action. In order to determine the most objective version of integration development, several approaches were combined. The normative legal approach, specifically the descriptive method of legal modeling, was taken as the basis for the analysis, and this set of legal methods was supplemented by the prognostic methods of international relations science.
The key elements of the model are the international economic organizations and associations of states existing in the Eurasian space (the Eurasian Economic Union (EAEU), the European Union (EU), the Shanghai Cooperation Organization (SCO), the Chinese ‘One Belt, One Road’ project (OBOR), the Commonwealth of Independent States (CIS), BRICS, etc.). A general term for the elements of the model is proposed: the interstate interaction mechanisms (IIM). The aim of building a model of current and future Eurasian economic integration is to show optimal options for the joint economic development of the states and IIMs. The long-term goal of this development is a new economic and political space, the so-called ‘Great Eurasian Community’. Achieving this long-term goal is a process of successive steps. Modeling the integration architecture and dividing the interaction into stages led us to the following conclusion: the SCO is able to transform Eurasia into a single economic space. Gradual implementation of the complex phased model, in which the SCO+ plays a key role, will make it possible to build an economic integration that is effective for all its participants and to create an economically strong community. The model can have practical value for politicians, lawyers, economists, and other participants involved in the economic integration process. A clear, systematic structure can serve as a basis for further governmental action.
Keywords: economic integration, The Eurasian Economic Union, The European Union, The Shanghai Cooperation Organization, The Silk Road Economic Belt
Procedia PDF Downloads 152
13285 Determination of Crustal Structure and Moho Depth within the Jammu and Kashmir Region, Northwest Himalaya through Receiver Function
Authors: Shiv Jyoti Pandey, Shveta Puri, G. M. Bhat, Neha Raina
Abstract:
The Jammu and Kashmir (J&K) region of Northwest Himalaya has a long history of earthquake activity and falls within Seismic Zones IV and V. To investigate the crustal structure beneath this region, we utilized the teleseismic receiver function method. This paper presents the results of the analyses of teleseismic earthquake waves recorded by 10 seismic observatories installed in the vicinity of major thrusts and faults. Teleseismic waves at epicentral distances between 30° and 90°, with moment magnitudes greater than or equal to 5.5, contain a large amount of information about the crust and upper mantle structure directly beneath a receiver and were used in this study. The receiver function (RF) technique has been widely applied to investigate crustal structures using P-to-S converted (Ps) phases from velocity discontinuities. The arrival times of the Ps, PpPs and PpSs+PsPs converted and reverberated phases from the Moho can be combined to constrain the mean crustal thickness and Vp/Vs ratio. Over 500 receiver functions from 10 broadband stations located in the Jammu & Kashmir region of Northwest Himalaya were analyzed. With the help of the H-K stacking method, we determined the crustal thickness (H) and the average crustal Vp/Vs ratio (K) in this region; the results were also verified using the Neighbourhood algorithm. The receiver function results for these stations show that the crustal thickness under Jammu & Kashmir ranges from 45.0 to 53.6 km, with an average value of 50.01 km. The Vp/Vs ratio varies from 1.63 to 1.99, with an average value of 1.784, which corresponds to an average Poisson’s ratio of 0.266, with a range from 0.198 to 0.331. High Poisson’s ratios under some stations may be related to partial melting in the crust near the uppermost mantle.
The crustal structure model developed in this study can be used to refine the velocity model used for precise epicenter location in the region, thereby improving our understanding of its current seismicity.
Keywords: H-K stacking, Poisson’s ratios, receiver function, teleseismic
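The conversion from the stacked Vp/Vs ratio K to the Poisson's ratio quoted in the abstract follows the standard isotropic-elasticity relation nu = (K^2 - 2) / (2 (K^2 - 1)); a minimal sketch of that mapping (the study's station-wise averaging may account for small differences in the averaged value):

```python
import math

def poisson_ratio(k: float) -> float:
    """Poisson's ratio from the Vp/Vs ratio K, assuming an isotropic
    elastic medium: nu = (K^2 - 2) / (2 * (K^2 - 1))."""
    return (k**2 - 2.0) / (2.0 * (k**2 - 1.0))

# The range of Vp/Vs values reported in the abstract
for k in (1.63, 1.784, 1.99):
    print(f"K = {k:.3f}  ->  nu = {poisson_ratio(k):.3f}")
```

The endpoints reproduce the abstract's range (0.198 to 0.331); a Poisson solid (K = sqrt(3)) gives nu = 0.25 as a sanity check.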
Procedia PDF Downloads 249
13284 Rock-Bed Thermocline Storage: A Numerical Analysis of Granular Bed Behavior and Interaction with Storage Tank
Authors: Nahia H. Sassine, Frédéric-Victor Donzé, Arnaud Bruch, Barthélemy Harthong
Abstract:
Thermal Energy Storage (TES) systems are central elements of various types of power plants operated using renewable energy sources. Packed-bed TES can be considered a cost-effective solution in concentrated solar power (CSP) plants. Such a device is made up of a tank filled with a granular bed through which a heat-transfer fluid circulates. However, in such devices, the tank may be subjected to catastrophic failure induced by a mechanical phenomenon known as thermal ratcheting: thermal stresses accumulate over cycles of loading and unloading until failure occurs. For instance, when rocks are used as the storage material, the tank wall expands more than the solid medium during the charging process; a gap is created between the rocks and the tank walls, and the filler material settles down to fill it. During discharge, the tank contracts against the bed, resulting in thermal stresses that may exceed the yield stress of the tank wall and generate plastic deformation. This phenomenon is repeated over the cycles, and the tank is slowly ratcheted outward until it fails. This paper aims at studying the evolution of tank wall stresses over granular bed thermal cycles, taking into account both thermal and mechanical loads, with a numerical model based on the discrete element method (DEM). Simulations were performed to study two different thermal configurations: the tank is heated either (i) homogeneously along its height or (ii) with a vertical gradient of temperature. The resulting loading stresses applied on the tank are then compared, as well as the response of the internal granular material. Besides the influence of the thermal configuration on the storage tank response, other parameters are varied, such as the internal angle of friction of the granular material, the dispersion of particle diameters, and the tank's dimensions.
Their influence on the kinematics of the granular bed submitted to thermal cycles is then highlighted.
Keywords: discrete element method (DEM), thermal cycles, thermal energy storage, thermocline
Procedia PDF Downloads 402
13283 The Influence of the Normative Gender Binary in Diversity Management: A Multi-Method Study on Gender Diversity of Diversity Management
Authors: Robin C. Ladwig
Abstract:
Diversity Management, as a substantial element of Human Resource Management, aims to secure the economic benefit that is assumed to come with a diverse workforce. Consequently, diversity managers focus on the protection of employees and on equality measures to assure organisational gender diversity. Yet gender diversity, as one aspect of Diversity Management, appears to adhere to gender binarism and cis-normativity. Workplaces are gendered spaces that echo the binary gender-normativity present in Diversity Management, sold under the label of gender diversity. While the promise of Diversity Management implies the inclusion of a multiplicity of marginalised groups, such as trans and gender diverse people, in current literature and practice the reality is curated by gender binarism and cis-normativity. The qualitative multi-method research showed a lack of knowledge about trans and gender diverse matters within the professions of Diversity Management and Human Resources. Semi-structured interviews with trans and gender diverse individuals from various backgrounds and occupations in Australia exposed missing consideration of trans and gender diverse experiences in the inclusivity and gender equity of various workplaces. Even where practitioners consider trans and gender diverse matters under gender diversity, the practical execution is limited to gender binary structures and cis-normative actions, as the photo-elicitation questionnaire with diversity managers, human resource officers, and personnel management demonstrates. Diversity Management should draw on a broader base of informed practice by extending its business focus to the knowledge of humanities fields, which could include diversity, queer, or gender studies, to increase the inclusivity of marginalised groups such as trans and gender diverse employees and people. Furthermore, the definition of gender diversity should be extended beyond the gender binary and cis-normative experience.
People may lose trust in Diversity Management as a supportive ally of marginalised employees if its understanding of inclusivity is limited to a gender binary and cis-normative value system that misrepresents the richness of gender diversity.
Keywords: cis-normativity, diversity management, gender binarism, trans and gender diversity
Procedia PDF Downloads 204
13282 Fault Prognostic and Prediction Based on the Importance Degree of Test Point
Authors: Junfeng Yan, Wenkui Hou
Abstract:
Prognostics and Health Management (PHM) is a technology to monitor equipment status and predict impending faults. It is used to predict potential faults and to provide fault information and track trends of system degradation by capturing characteristic signals, so how to detect characteristic signals is very important. The selection of test points plays a crucial role in detecting characteristic signals. Traditionally, a dependency model is used to select the test points containing the most detection information. However, for large complicated systems, the dependency model is sometimes not easily built, and an even greater difficulty is calculating the matrix. On this premise, this paper provides a highly effective method to select test points without a dependency model, based on the signal flow model: a diagnosis model built on failure modes, which focuses on the system's failure modes and the dependency relationships between test points and faults. In the signal flow model, fault information can flow from the beginning to the end, so we can extract the location and structure information of every test point and module. We break the signal flow model up into serial and parallel parts to obtain the final relationship function between the system's testability or prediction metrics and the test points. Further, through partial derivative operations, we obtain every test point's importance degree in determining the testability metrics, such as the undetected rate, false alarm rate, and untrusted rate. This supports installing test points according to the real requirements and also provides a solid foundation for Prognostics and Health Management. Judging by its effect in practical engineering applications, the method is very efficient.
Keywords: false alarm rate, importance degree, signal flow model, undetected rate, untrusted rate
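To illustrate the partial-derivative notion of importance degree, consider a deliberately simplified, hypothetical case (not the paper's signal flow model): k independent test points in series, each detecting a given fault with probability p_i. The undetected rate is then r(p) = prod_i (1 - p_i), and the magnitude of the partial derivative dr/dp_i ranks how strongly each test point determines that metric:

```python
from math import prod

def undetected_rate(p):
    """Undetected rate of a serial chain of independent test points:
    a fault escapes only if every test point misses it."""
    return prod(1.0 - pi for pi in p)

def importance(p, i):
    """|dr/dp_i|: sensitivity of the undetected rate to test point i.
    For this serial model it is the product of the other miss rates."""
    return prod(1.0 - pj for j, pj in enumerate(p) if j != i)

p = [0.9, 0.5, 0.7]  # detection probabilities of three hypothetical test points
print(round(undetected_rate(p), 4))                 # 0.1 * 0.5 * 0.3 = 0.015
print([round(importance(p, i), 4) for i in range(3)])  # [0.15, 0.03, 0.05]
```

In this toy case, improving test point 0 reduces the undetected rate fastest per unit of detection probability, which is the kind of ranking the importance degree provides for placing test points.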
Procedia PDF Downloads 379
13281 Linguistic Analysis of Holy Scriptures: A Comparative Study of Islamic Jurisprudence and the Western Hermeneutical Tradition
Authors: Sana Ammad
Abstract:
The traditions of linguistic analysis in Islam and Christianity developed independently of each other, in light of the social developments specific to their historical contexts. Recently, however, an increasing number of Muslim academics educated in the West have tried to apply the Western tradition of linguistic interpretation to the Qur’anic text while disregarding the Islamic linguistic tradition used and developed by traditional scholars over the centuries. The aim of the paper is to outline the linguistic tools and methods used by traditional Islamic scholars for interpreting the Holy Qur’an and to shed light on how they contribute to a better understanding of the text compared to their Western counterparts. The paper carries out a descriptive-comparative study of the linguistic tools developed and perfected by traditional scholars in Islam for the textual analysis of the Qur’an, as described in the authentic works of Usul Al Fiqh (jurisprudence), and of the principles of textual analysis employed by the Western hermeneutical tradition for the study of the Bible. First, it briefly outlines the independent historical development of the two traditions, emphasizing the final normative shape each has taken. It then draws a comparison of the two traditions, highlighting the similarities and differences between them. Finally, the paper demonstrates the level of academic excellence achieved by the traditional linguistic scholars in developing appropriate tools of textual interpretation, and argues that these tools are more suitable for interpreting the Qur’an than the Western principles. Since interpreters in both traditions aim to attain an objective understanding of the Scriptures, the emphasis of the paper is on how well the Islamic method of linguistic interpretation contributes to an objective understanding of the Qur’anic text.
The paper concludes with the following findings: the Western hermeneutical tradition of linguistic analysis developed within the Western historical context, whereas the Islamic method of linguistic analysis is more highly developed and complex and better serves the purpose of an objective understanding of the Holy text.
Keywords: Islamic jurisprudence, linguistic analysis, textual interpretation, western hermeneutics
Procedia PDF Downloads 331
13280 Numerical Modelling of 3-D Fracture Propagation and Damage Evolution of an Isotropic Heterogeneous Rock with a Pre-Existing Surface Flaw under Uniaxial Compression
Authors: S. Mondal, L. M. Olsen-Kettle, L. Gross
Abstract:
Fracture propagation and damage evolution are extremely important for many industrial applications, including the mining industry, composite materials, earthquake simulations, and hydraulic fracturing. The influence of pre-existing flaws and rock heterogeneity on the processes and mechanisms of rock fracture has important ramifications in many mining and reservoir engineering applications. We simulate the damage evolution and fracture propagation in an isotropic sandstone specimen containing a pre-existing 3-D surface flaw in different configurations under uniaxial compression. We apply a damage model based on the unified strength theory and solve the solid deformation and damage evolution equations using the Finite Element Method (FEM) with tetrahedron elements on unstructured meshes through the simulation software eScript. Unstructured meshes provide higher geometrical flexibility and allow a more accurate way to model the varying flaw depth, angle, and length through locally adapted FEM meshes. The heterogeneity of the rock is considered by initializing the material properties using a Weibull distribution sampled over a cubic grid. In our model, we introduce a length scale related to the rock heterogeneity which is independent of the mesh size. We investigate the effect of parameters including the heterogeneity of the elastic moduli and the geometry of the single flaw on the stress-strain response. Three typical surface cracking patterns, called wing cracks, anti-wing cracks, and far-field cracks, were identified, and these depend on the geometry of the pre-existing surface flaw. These model results help to advance our understanding of fracture and damage growth in heterogeneous rock, with the aim of developing fracture simulators for different industry applications.
Keywords: finite element method, heterogeneity, isotropic damage, uniaxial compression
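The Weibull initialization of heterogeneous material properties mentioned above can be sketched as follows; the grid size, Weibull modulus, and mean stiffness here are illustrative assumptions, not the paper's values:

```python
import numpy as np

def weibull_moduli(shape_m, mean_e, grid=(10, 10, 10), seed=0):
    """Sample a heterogeneous Young's modulus field on a cubic grid from a
    Weibull distribution. The Weibull modulus shape_m controls the degree
    of heterogeneity: larger shape_m -> a more homogeneous material.
    All names and parameter values here are illustrative."""
    rng = np.random.default_rng(seed)
    # numpy's weibull() samples with unit scale; rescale the field so its
    # mean matches the target mean stiffness
    field = rng.weibull(shape_m, size=grid)
    field *= mean_e / field.mean()
    return field

e = weibull_moduli(shape_m=3.0, mean_e=20e9)   # e.g. a ~20 GPa sandstone
print(e.shape, round(e.mean() / 1e9, 1))        # (10, 10, 10) 20.0
```

Each grid cell then carries its own elastic modulus, which is how mesh-independent material heterogeneity is commonly injected into FEM or DEM rock models.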
Procedia PDF Downloads 219
13279 Seismic Hazard Assessment of Tehran
Authors: Dorna Kargar, Mehrasa Masih
Abstract:
Due to its special geological and geographical conditions, Iran has always been exposed to various natural hazards. The earthquake is a natural hazard of random nature that can cause significant financial damage and casualties; it is a serious threat, especially in areas with active faults. Therefore, considering the population density in some parts of the country, locating and zoning high-risk areas is necessary and significant. In the present study, a seismic hazard assessment via probabilistic and deterministic methods has been carried out for Tehran, the capital of Iran, which is located in the Alborz-Azerbaijan province. The seismicity study covers a range of 200 km around the north of Tehran (X=35.74° and Y=51.37° in the LAT-LONG coordinate system) to identify the seismic sources and seismicity parameters of the study region. In order to identify the seismic sources, geological maps at the scale of 1:250,000 are used. We used the Kijko-Sellevoll method (1992) to estimate the seismicity parameters: the maximum likelihood estimation of the earthquake hazard parameters (maximum regional magnitude Mmax, activity rate λ, and the Gutenberg-Richter parameter b) from incomplete data files, extended to the case of uncertain magnitude values. By combining the seismicity and seismotectonic studies of the site, the acceleration that may occur, with a specified probability, during the useful life of the structure is calculated with probabilistic and deterministic methods. Applying the results of the seismicity and seismotectonic studies and proper weights in the attenuation relationships used, the maximum horizontal and vertical accelerations for return periods of 50, 475, 950, and 2475 years are calculated.
The horizontal peak ground accelerations on the seismic bedrock for the 50-, 475-, 950- and 2475-year return periods are 0.12g, 0.30g, 0.37g and 0.50g, and the vertical peak ground accelerations for the same return periods are 0.08g, 0.21g, 0.27g and 0.36g.
Keywords: peak ground acceleration, probabilistic and deterministic, seismic hazard assessment, seismicity parameters
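For context on the Gutenberg-Richter parameter b estimated above, the classical maximum-likelihood estimate for a complete catalog (Aki, 1965, with Utsu's half-bin correction) can be sketched as follows; note that this is the simple special case, not the Kijko-Sellevoll estimator for incomplete data and uncertain magnitudes actually used in the study, and the catalog below is hypothetical:

```python
import math

def aki_b_value(mags, m_min, dm=0.1):
    """Maximum-likelihood b-value for a complete catalog above m_min:
    b = log10(e) / (mean(M) - (m_min - dm/2)), where dm is the magnitude
    bin width (the dm/2 term is Utsu's correction for binned magnitudes)."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2.0))

# Hypothetical binned catalog above a completeness magnitude of 4.0
catalog = [4.0, 4.1, 4.1, 4.3, 4.5, 4.8, 5.2, 5.9]
print(round(aki_b_value(catalog, m_min=4.0), 2))   # 0.66
```

The Kijko-Sellevoll approach generalizes this idea to jointly estimate b, the activity rate, and the maximum regional magnitude Mmax from catalogs with varying completeness levels.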
Procedia PDF Downloads 71
13278 Effect of Print Orientation on the Mechanical Properties of Multi Jet Fusion Additively Manufactured Polyamide-12
Authors: Tyler Palma, Praveen Damasus, Michael Munther, Mehrdad Mohsenizadeh, Keivan Davami
Abstract:
The advancement of additive manufacturing, in both the research and commercial realms, is highly dependent upon continuing innovation and creativity in materials and designs. Additive manufacturing shows great promise towards revolutionizing various industries, due largely to the fact that design data can be used to create complex products and components, on demand and from raw materials, for the end user at the point of use. However, it is critical that the material properties of additively made parts intended for engineering purposes be fully understood. As Multi Jet Fusion (MJF) is a relatively new additive manufacturing method, the response of the properties of MJF-produced parts to different printing parameters has not been well studied. In this work, the mechanical and tribological properties of MJF-printed Polyamide 12 parts were tested to determine whether printing orientation in this method results in significantly different part performance. Material properties were studied at the macro- and nanoscales: tensile tests were performed in combination with tribology tests, including steady-state wear. Results showed a significant difference in part characteristics depending on whether samples were printed in a vertical or horizontal orientation. The tensile performance of vertically and horizontally printed samples varied in both ultimate strength and strain. Tribology tests showed that printing orientation has notable effects on the resulting mechanical and wear properties of the tested surfaces, due largely to layer orientation and the presence of unfused powder grain inclusions. This research advances the understanding of how print orientation affects the mechanical properties of additively manufactured structures, and also of how print orientation can be exploited in future engineering design.
Keywords: additive manufacturing, indentation, nano mechanical characterization, print orientation
Procedia PDF Downloads 141
13277 Improving Grade Control Turnaround Times with In-Pit Hyperspectral Assaying
Authors: Gary Pattemore, Michael Edgar, Andrew Job, Marina Auad, Kathryn Job
Abstract:
As critical commodities become more scarce, significant time and resources have been devoted to better understanding complicated ore bodies and extracting their full potential. These challenging ore bodies present several pain points for geologists and engineers to overcome; poor handling of these issues flows downstream to the processing plant, affecting throughput rates and recovery. Many open cut mines utilise blast hole drilling to extract additional information to feed back into the modelling process. This method requires samples to be collected during or after blast hole drilling; samples are then sent for assay, with turnaround times varying from 1 to 12 days. The method is time consuming and costly, requires human exposure on the bench, and collects elemental data only. To address this challenge, research has been undertaken to utilise hyperspectral imaging across a broad spectrum to scan samples and collars, or to take downhole measurements, for mineral and moisture content and grade abundances. Automation of this process using unmanned vehicles and on-board processing reduces human in-pit exposure to ensure ongoing safety, and on-board processing allows data to be integrated into modelling workflows immediately. The preliminary results demonstrate numerous direct and indirect benefits of this new technology, including rapid and accurate estimates of grade, moisture content and mineralogy. These benefits allow for faster geological model updates, better informed mine scheduling, and improved downstream blending and processing practices. The paper presents recommendations for implementation of the technology in open cut mining environments.
Keywords: grade control, hyperspectral scanning, artificial intelligence, autonomous mining, machine learning
Procedia PDF Downloads 113
13276 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands
Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé
Abstract:
The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical, variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, together with the van Genuchten-Mualem parametrization. For validation purposes, the MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) in order to handle different sequences of rainfall-runoff events. Proper parameter identification would require large field datasets, which are usually not available, notably due to the randomness of storm events. We thus propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied.
To that end, the variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment the computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code with the online Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the water's edge of the urban stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis
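The least-squares identification loop described above can be sketched with a toy forward model standing in for the Richards/MHFEM solver; all names and parameter values below are hypothetical, and the real study additionally obtains exact gradients via automatic differentiation rather than the solver's internal approximations:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the forward model: an exponential drainage curve
# h(t; k, h0) = h0 * exp(-k * t). The study solves the Richards equation
# with MHFEM; this sketch only shows the identification loop itself.
def forward(params, t):
    k, h0 = params
    return h0 * np.exp(-k * t)

t_obs = np.linspace(0.0, 10.0, 25)
true_params = np.array([0.35, 1.2])
h_obs = forward(true_params, t_obs)          # synthetic "measurements"

def residuals(params):
    # Misfit between computed and measured piezometric heads
    return forward(params, t_obs) - h_obs

fit = least_squares(residuals, x0=[0.1, 1.0])
print(np.round(fit.x, 3))                    # recovers [0.35, 1.2]
```

Minimizing the sum of squared residuals from a deliberately wrong initial guess recovers the parameters that generated the observations, which is the essence of the inverse-modeling step.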
Procedia PDF Downloads 164
13275 Synthesis of 5-Substituted 1H-Tetrazoles in Deep Eutectic Solvent
Authors: Swapnil A. Padvi, Dipak S. Dalal
Abstract:
The chemistry of tetrazoles has grown tremendously in the past few years because tetrazoles are an important and useful class of heterocyclic compounds with widespread applications in medicinal chemistry, such as anticancer, antimicrobial, analgesic, antibacterial, antifungal, antihypertensive, and anti-allergic drugs. Furthermore, tetrazoles have applications in materials science as explosives, rocket propellants, and in information recording systems, in addition to a wide range of applications in coordination chemistry as ligands. Deep eutectic solvents (DES) have emerged over the current decade as a novel class of green reaction media and have been applied in various fields of science because of their unique physical and chemical properties, similar to those of ionic liquids, such as low vapor pressure, non-volatility, high thermal stability, and recyclability. In addition, the components of DES are cheaply available, low in toxicity, and biodegradable, which makes them particularly suitable for effective large-scale industrial applications. Herein we report the [2+3] cycloaddition reaction of organic nitriles with sodium azide, which affords the corresponding 5-substituted 1H-tetrazoles in six different choline chloride based deep eutectic solvents under mild reaction conditions. Choline chloride:ZnCl2 (1:2) showed the best results for the synthesis of 5-substituted 1H-tetrazoles. This method avoids disadvantages such as the use of toxic metals and expensive reagents, drastic reaction conditions, and the presence of dangerous hydrazoic acid. The environment-friendly approach, short reaction times, good to excellent yields, safe process, and simple workup make this method an attractive and useful contribution to present green organic synthesis of 5-substituted 1H-tetrazoles. All synthesized compounds were characterized by IR, 1H NMR, 13C NMR, and mass spectrometry.
The DES can be recovered and reused three times with very little loss in activity.
Keywords: click chemistry, choline chloride, green chemistry, deep eutectic solvent, tetrazoles
Procedia PDF Downloads 232
13274 Proof of Concept of Video Laryngoscopy Intubation: Potential Utility in the Pre-Hospital Environment by Emergency Medical Technicians
Authors: A. Al Hajeri, M. E. Minton, B. Haskins, F. H. Cummins
Abstract:
Pre-hospital endotracheal intubation is fraught with difficulties; one solution offered has been video laryngoscopy (VL), which permits better visualization of the glottis than the standard method of direct laryngoscopy (DL). This method has resulted in a higher first-attempt success rate and fewer failed intubations. However, VL has mainly been evaluated by experienced providers (experienced anesthetists), and as such, the utility of this device for those who intubate infrequently has not been thoroughly assessed. We sought to evaluate this equipment to determine whether, in the hands of novice providers, it could prove an effective airway management adjunct. DL and two VL methods (C-Mac with distal screen / C-Mac with attached screen) were evaluated through simulated practice on a Laerdal airway management trainer manikin. Twenty Emergency Medical Technicians (basics) were recruited as novice practitioners; this group was used to eliminate bias, as these clinicians had no pre-hospital experience of intubation (although they did have basic airway skills). The following areas were assessed: time taken to intubate, number of attempts required to successfully intubate, and ease of use of the equipment. VL (attached screen) took novice clinicians longer on average to achieve successful intubation, had a lower success rate, and received a higher rating of difficulty compared to DL. However, VL (with distal screen) and DL were comparable in intubation times, success rate, gastric inflation rate, and user-rated difficulty. This study indicates that routine use of VL by inexperienced clinicians would offer no added benefit over DL.
Further studies are required to determine whether Emergency Medical Technicians (Paramedics) would benefit from this airway adjunct and to ascertain whether, after initial mastery of VL (with a distal screen), lower intubation times and difficulty ratings may be achievable.
Keywords: direct laryngoscopy, endotracheal intubation, pre-hospital, video laryngoscopy
Procedia PDF Downloads 410
13273 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and play an important role in the production of high added value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal component analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; clearly, proper determination of the reference set is key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses; in such a context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process, and as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classifying batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm.
Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input into the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. The method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy on the testing portion using the KNN classification technique.
Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
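The core DTW alignment idea underlying the three compared methods can be sketched minimally as follows (a plain univariate DTW distance, not the KMT variant with its iterative per-variable weighting; the example series are hypothetical):

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D series of
    possibly different lengths: the minimum cumulative cost over all
    monotone alignments of the two time axes."""
    n, m = len(x), len(y)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # best of match, insertion, and deletion moves
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

# Two batches of unequal duration tracing the same temperature profile
ref   = np.array([30, 32, 35, 40, 45, 45, 44])
batch = np.array([30, 31, 32, 35, 40, 45, 45, 45, 44])  # slower batch, same shape
print(dtw_distance(ref, batch))   # small despite the different durations
```

Because DTW warps the time axis, the slower batch aligns almost perfectly with the reference even though a point-by-point comparison of the raw trajectories would be impossible; this is what makes variable-duration batches comparable before MPCA-style monitoring.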
Procedia PDF Downloads 168
13272 Development of a Robust Protein Classifier to Predict EMT Status of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma (CESC) Tumors
Authors: Zhenlin Ju, Christopher P. Vellano, Rehan Akbani, Yiling Lu, Gordon B. Mills
Abstract:
The epithelial–mesenchymal transition (EMT) is a process by which epithelial cells acquire mesenchymal characteristics, such as profound disruption of cell-cell junctions, loss of apical-basolateral polarity, and extensive reorganization of the actin cytoskeleton to induce cell motility and invasion. A hallmark of EMT is its capacity to promote metastasis, which is due in part to the activation of several transcription factors and subsequent downregulation of E-cadherin. Unfortunately, current approaches have yet to uncover robust protein marker sets that can classify tumors as possessing strong EMT signatures. In this study, we utilize reverse phase protein array (RPPA) data and consensus clustering methods to successfully classify a subset of cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) tumors into an EMT protein signaling group (EMT group). The overall survival (OS) of patients in the EMT group is significantly worse than that of patients in the other Hormone and PI3K/AKT signaling groups. In addition to a shrinkage and selection method for linear regression (LASSO), we applied training/test set and Monte Carlo resampling approaches to identify a set of protein markers that predicts the EMT status of CESC tumors. We fit a logistic model to these protein markers and developed a classifier, which was fit on the training set and validated on the testing set. The classifier robustly predicted the EMT status of the testing set with an area under the curve (AUC) of 0.975 by Receiver Operating Characteristic (ROC) analysis. This method not only identifies a core set of proteins underlying an EMT signature in cervical cancer patients, but also provides a tool to examine protein predictors that drive molecular subtypes in other diseases.
Keywords: consensus clustering, TCGA CESC, Silhouette, Monte Carlo LASSO
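The AUC quoted above can be computed directly from classifier scores via the Mann-Whitney formulation of ROC analysis: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (the labels and scores below are illustrative toy values, not CESC data):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs ranked correctly by the score (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: six tumors; the scores rank one positive/negative pair wrongly
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.2, 0.6, 0.1]
auc = roc_auc(labels, scores)  # 8 of 9 pairs correctly ordered -> AUC = 8/9
```

An AUC of 1.0 would mean every EMT tumor outscores every non-EMT tumor; 0.5 is chance-level ranking.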
Procedia PDF Downloads 470
13271 Shape Management Method of Large Structure Based on Octree Space Partitioning
Authors: Gichun Cha, Changgil Lee, Seunghee Park
Abstract:
The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is scarce because the technology has only recently been attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, or maintenance of civil infrastructure. TLS produces a huge amount of point cloud data, and registration, extraction, and visualization require processing a massive amount of scan data. The octree can be applied to the shape management of large structures because it reduces the size of the scan data while maintaining its attributes. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University. The scanned structure is a steel-frame bridge, and the TLS used was a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed at a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds, providing a basis for shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291).
Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning
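The voxel conversion and sampling step can be sketched as a single level of voxel-grid downsampling; a full octree would recursively subdivide each occupied voxel into eight children, which is omitted here. The points and voxel size below are illustrative, not the bridge scan data:

```python
def voxel_downsample(points, voxel_size):
    """Voxel-grid downsampling: bucket points into cubic voxels of edge
    `voxel_size` and keep the centroid of each occupied voxel."""
    buckets = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets.setdefault(key, []).append((x, y, z))
    # one representative point (the centroid) per occupied voxel
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in buckets.values()]

# four points; the first two fall in the same 1-unit voxel, so three survive
pts = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.5, 0.1, 0.1), (0.1, 1.5, 0.1)]
sampled = voxel_downsample(pts, 1.0)
```

The compression ratio (92% in the study) is simply one minus the ratio of occupied voxels to original points; shrinking the voxel edge toward the 2 mm resolution trades compression for fidelity.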
Procedia PDF Downloads 297
13270 Consistent Testing for an Implication of Supermodular Dominance with an Application to Verifying the Effect of Geographic Knowledge Spillover
Authors: Chung Danbi, Linton Oliver, Whang Yoon-Jae
Abstract:
Supermodularity, or complementarity, is a popular concept in economics that can characterize many objective functions, such as utility, social welfare, and production functions. Further, supermodular dominance captures a preference for greater interdependence among the inputs of those functions, and it can be applied to examine which input set would produce higher expected utility, social welfare, or production. We therefore propose and justify a consistent test for a useful implication of supermodular dominance. We also conduct Monte Carlo simulations to explore the finite sample performance of our test, with critical values obtained from the recentered bootstrap method, with and without selective recentering, and from the subsampling method. Under various parameter settings, we confirmed that our test has reasonably good size and power performance. Finally, we apply our test to compare geographic and distant knowledge spillovers in terms of their effects on social welfare, using the National Bureau of Economic Research (NBER) patent data. We expect localized citing to supermodularly dominate distant citing if geographic knowledge spillover engenders greater social welfare than distant knowledge spillover. Taking subgroups based on firm and patent characteristics, we found industry-wise and patent subclass-wise differences in the pattern of supermodular dominance between localized and distant citing. We also compare results from different time periods to see whether the development of Internet and communication technology has changed the pattern of dominance. In addition, to appropriately deal with the sparse nature of the data, we apply high-dimensional methods to efficiently select relevant data.
Keywords: supermodularity, supermodular dominance, stochastic dominance, Monte Carlo simulation, bootstrap, subsampling
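The recentered bootstrap used to obtain critical values can be illustrated with a generic one-sided test of a mean. This is a deliberate simplification: the paper's statistic tests an implication of supermodular dominance, not a mean, and the data below are illustrative:

```python
import random

def bootstrap_pvalue(sample, n_boot=2000, seed=42):
    """Recentered bootstrap p-value for H0: population mean <= 0.
    Each bootstrap statistic is recentered at the observed mean, so the
    resampling distribution approximates the null distribution of the
    test statistic rather than its distribution under the data."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    t_obs = mean(sample)
    exceed = 0
    for _ in range(n_boot):
        boot = [rng.choice(sample) for _ in sample]  # resample with replacement
        if mean(boot) - t_obs >= t_obs:  # recentered statistic vs. observed
            exceed += 1
    return exceed / n_boot

p_null = bootstrap_pvalue([-1.0, 1.0, -1.0, 1.0, -1.0, 1.0])  # mean 0: no evidence
p_pos = bootstrap_pvalue([2.0, 1.5, 2.5, 1.8, 2.2])           # clearly positive mean
```

Subsampling follows the same pattern but draws smaller blocks without replacement, which remains valid for some non-smooth statistics where the standard bootstrap fails.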
Procedia PDF Downloads 130