Search results for: mode choice models
5915 Numerical Modeling of Turbulent Natural Convection in a Square Cavity
Authors: Mohammadreza Sedighi, Mohammad Said Saidi, Hesamoddin Salarian
Abstract:
A numerical study has been performed to investigate the effect of using different turbulence models on the natural convection flow field and temperature distribution in a partially heated square cavity, compared against a benchmark. The temperature of the right vertical wall is lower than that of the heater, while the other walls are insulated. Commercial CFD codes are used for the modeling. The standard k-ω model provided good agreement with the experimental data.
Keywords: Buoyancy, Cavity, CFD, Heat Transfer, Natural Convection, Turbulence
Procedia PDF Downloads 341
5914 A Multi-Objective Optimization Tool for Dual-Mode Operating Active Magnetic Regenerator Model
Authors: Anna Ouskova Leonteva, Michel Risser, Anne Jeannin-Girardon, Pierre Parrend, Pierre Collet
Abstract:
This paper proposes an efficient optimization tool for an active magnetic regenerator (AMR) model operating in two modes: as a magnetic refrigeration system (MRS) and as a thermo-magnetic generator (TMG). The aim of this optimizer is to improve the design of the AMR by applying a multi-physics, multi-scale numerical model as the core of the evaluation functions, in order to meet industrial requirements for refrigeration and energy conservation systems. Based on the multi-objective non-dominated sorting genetic algorithm 3 (NSGA3), it maximizes four different objectives: efficiency and power density for both MRS and TMG. The main contribution of this work is the simultaneous application of a CPU-parallel NSGA3 version to the AMR model in both modes, to study the impact of control and design parameters on performance. A parametric study of the optimization results is presented. The main conclusion is that optimal parameters common to the TMG and MRS modes can be found by the proposed tool.
Keywords: ecological refrigeration systems, active magnetic regenerator, thermo-magnetic generator, multi-objective evolutionary optimization, industrial optimization problem, real-world application
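A minimal sketch of the NSGA3 setup described above, written with the pymoo library; the multi-physics AMR evaluation is replaced by a stand-in function, and the number of design variables, their bounds, and the algorithm settings are assumptions for illustration (pymoo minimizes, so each maximized objective is negated):

```python
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions

def amr_model(x):
    # Stand-in for the coupled AMR simulation (assumption): returns
    # (efficiency_MRS, power_MRS, efficiency_TMG, power_TMG).
    return (x[0] * (1 - x[1]), x[1] * x[2], x[2] * (1 - x[3]), x[3] * x[4])

class AMRProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=5, n_obj=4, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = [-v for v in amr_model(x)]  # negate to maximize

ref_dirs = get_reference_directions("das-dennis", 4, n_partitions=6)
res = minimize(AMRProblem(), NSGA3(ref_dirs=ref_dirs), ("n_gen", 200), seed=1)
pareto_front = -res.F  # back to the maximization scale
```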
Procedia PDF Downloads 114
5913 The Visualization of the Way of Creating a Service: Slavic Liturgical Books. Between Text and Music
Authors: Victoria Legkikh
Abstract:
To create a new Orthodox service of the Jerusalem rite and make it performable, one had to use several types of books: the menaion and triodion, the clergy service book, the stichirarion, and the typikon. Each of these books keeps part of the information about the service, which a medieval copyist had to put together like a puzzle. But the abundance of necessary books and their variety created many problems in copying services. The main problem was the difference of text between notated and non-notated manuscripts (they were corrected at different times) and the lack of information in the typikon, which provided only the type of hymns and their mode. After all, a copyist could have both corrected and uncorrected manuscripts, which also provided different types of service. This brings us to a situation where we hardly have two manuscripts containing the same service, and it is difficult to understand which changes were made voluntarily and which were dictated by the types of available manuscripts. This paper proposes an analysis of every type of liturgical book and of the way they were used in copying and correcting a service, so that we can separate voluntary changes from changes due to the types of books available. The paper also proposes an index showing the "material" life of hymns in different types of manuscripts and the changes of their version and place in the same type of manuscript. This type of index can help in reconstructing the way a service was created or copied, and can be useful for publishing services by providing the necessary information on every hymn in every manuscript used.
Keywords: orthodox church music, creation, manuscripts, liturgical books
Procedia PDF Downloads 173
5912 Teaching Tools for Web Processing Services
Authors: Rashid Javed, Hardy Lehmkuehler, Franz Josef-Behr
Abstract:
Web Processing Services (WPS) are of growing concern in geoinformation research. However, teaching about them is difficult because of the generally complex circumstances of their use, which limit the possibilities for hands-on exercises on Web Processing Services. To support understanding, a Training Tools Collection was developed at the University of Applied Sciences Stuttgart (HFT). It is limited to the scope of geostatistical interpolation of sample point data, where different algorithms can be used, such as IDW and Nearest Neighbor. The Tools Collection aims to support understanding of the scope, definition, and deployment of Web Processing Services. For example, it is necessary to characterize the input of an interpolation by the data set, the parameters for the algorithm, and the interpolation results (here a grid of interpolated values is assumed). This paper reports on first experiences using a pilot installation. This was intended to find suitable software interfaces for later full implementations and to draw conclusions on potential user interface characteristics. Experiences were made with the Deegree software, one of several service suites. Being strictly programmed in Java, Deegree offers several OGC-compliant service implementations that also promise to be of benefit for the project. The mentioned parameters for a WPS were formalized following the paradigm that any meaningful component is defined in terms of suitable standards; e.g., the data output can be defined as a GML file. However, the choice of meaningful information pieces and user interactions is not free but partially determined by the selected WPS processing suite.
Keywords: Deegree, interpolation, IDW, web processing service (WPS)
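As a point of reference for the interpolation the tools wrap, here is a minimal inverse distance weighting (IDW) sketch in Python; the sample points, values, and grid are invented for illustration, and a real WPS would expose this computation behind an OGC-compliant interface rather than as local code:

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each query point is a weighted mean of
    the sample values, with weights 1 / distance**power."""
    z_out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < eps):                # query coincides with a sample
            z_out[i] = z_known[np.argmin(d)]
            continue
        w = 1.0 / d**power
        z_out[i] = np.sum(w * z_known) / np.sum(w)
    return z_out

# Interpolate scattered samples onto a small grid of query points.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
grid = idw_interpolate(pts, vals, np.c_[gx.ravel(), gy.ravel()]).reshape(gx.shape)
```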
Procedia PDF Downloads 355
5911 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children
Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco
Abstract:
Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers, and it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice to consistently reduce the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children, 2nd ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists that collected the data.
Keywords: evolutionary computation, feature selection, classification, clustering
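A minimal sketch of the evaluation step of such a wrapper, assuming scikit-learn: EM clustering (GaussianMixture) provides the likelihood objective, and a decision tree stands in for C4.5 to provide the accuracy objective; the evolutionary search over feature masks (ENORA or NSGA-II) is omitted, and the data are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def evaluate_subset(X, mask, n_clusters=3):
    """Score one feature subset: fit EM clustering, then measure how well a
    decision tree (a C4.5 analogue) recovers the cluster labels."""
    Xs = X[:, mask]
    gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(Xs)
    likelihood = gmm.score(Xs)             # objective 1: mean log-likelihood
    labels = gmm.predict(Xs)
    tree = DecisionTreeClassifier(random_state=0)
    accuracy = cross_val_score(tree, Xs, labels, cv=5).mean()  # objective 2
    return likelihood, accuracy

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
mask = rng.random(10) > 0.5                # one candidate individual
print(evaluate_subset(X, mask))
```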
Procedia PDF Downloads 371
5910 Statistical and Artificial Neural Network Modeling of Suspended Sediment in the Mina River Watershed at the Wadi El-Abtal Gauging Station (Northern Algeria)
Authors: Redhouane Ghernaout, Amira Fredj, Boualem Remini
Abstract:
Suspended sediment transport is a serious problem worldwide, but it is much more worrying in certain regions of the world, as is the case in the Maghreb and more particularly in Algeria. It continues to take on disturbing proportions in northern Algeria due to the variability of rainfall in time and space and the constant deterioration of vegetation. Its prediction is essential in order to identify its intensity and define the actions necessary for its reduction. The purpose of this study is to analyze the suspended sediment concentration data measured at the Wadi El-Abtal hydrometric station. It also aims to find and highlight power regression relationships that can explain the suspended solid discharge by the measured liquid discharge. The study strives to find artificial neural network models linking the discharge, month, and precipitation parameters with solid discharge. The obtained results show that the power function of the sediment rating curve and the artificial neural network models are appropriate methods for analysing and estimating suspended sediment transport in Wadi Mina at the Wadi El-Abtal hydrometric station. They made it possible to identify, fairly conclusively, a neural network model with four input parameters: the liquid flow Q, the month, and the daily precipitation measured at the representative stations (Frenda 013002 and Ain El-Hadid 013004) of the watershed. The model thus obtained makes it possible to estimate (interpolate and extrapolate) daily solid flows even beyond the period of observation of solid flows (1985/86 to 1999/00), given the availability of average daily liquid flows and daily precipitation since 1953/1954.
Keywords: suspended sediment, concentration, regression, liquid flow, solid flow, artificial neural network, modeling, Mina, Algeria
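A minimal sketch of the power-law rating curve mentioned above, Qs = a·Q^b, fitted by least squares in log-log space; the discharge data below are synthetic stand-ins, not the Mina records:

```python
import numpy as np

def fit_rating_curve(q_liquid, q_solid):
    """Fit the power-law sediment rating curve Qs = a * Q**b by ordinary
    least squares on the log-transformed data."""
    b, log_a = np.polyfit(np.log(q_liquid), np.log(q_solid), 1)
    return np.exp(log_a), b

# Synthetic illustration: Qs roughly proportional to Q**1.7 with noise.
rng = np.random.default_rng(1)
q = rng.uniform(1, 100, 300)                       # liquid discharge
qs = 0.05 * q**1.7 * rng.lognormal(0.0, 0.3, 300)  # solid discharge
a, b = fit_rating_curve(q, qs)
predicted = a * q**b                               # estimated daily solid flows
```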
Procedia PDF Downloads 103
5909 Characterization and Geochemical Modeling of Cu and Zn Sorption Using Mixed Mineral Systems Injected with Iron Sulfide under Sulfidic-Anoxic Conditions I: Case Study of Cwmheidol Mine Waste Water, Wales, United Kingdom
Authors: D. E. Egirani, J. E. Andrews, A. R. Baker
Abstract:
This study investigates the sorption of Cu and Zn contained in natural mine wastewater, using mixed mineral systems under sulfidic-anoxic conditions. The mine wastewater was obtained from disused mine workings at Cwmheidol in Wales, United Kingdom. These contaminants flow into watercourses, including the River Rheidol, in which fishing activities exist. In an attempt to reduce the Cu-Zn levels of fish intake in the watercourses, single mineral systems and 1:1 mixed mineral systems of clay and goethite were tested with the mine wastewater for copper and zinc removal at variable pH. Modelling of hydroxyl complexes was carried out using PHREEQC. Reactions were conducted in batch mode at room temperature. There were significant differences in the behaviour of copper and zinc removal using mixed mineral systems when compared to single mineral systems. All mixed mineral systems sorb more Cu than Zn when tested with mine wastewater.
Keywords: Cu-Zn, hydroxyl complexes, kinetics, mixed mineral systems, reactivity
Procedia PDF Downloads 499
5908 Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change
Authors: Ermias A. Tegegn, Million Meshesha
Abstract:
Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries. Ethiopia is one of the seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence. With the introduction of Antiretroviral Therapy (ART), HIV/AIDS has become a chronic but manageable disease. The study focused on a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, when the patients become toxic to the ART drug combination they are currently taking. The data were taken from the University of Gondar Hospital ART program database. A hybrid methodology was followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values, and data transformation were used for preprocessing the data. The WEKA 3.7.9 data mining tool, classification algorithms, and expertise were utilized as means to address the research problem. By using four different classification algorithms (i.e., the J48 classifier, PART rule induction, Naïve Bayes, and a neural network) and by adjusting their parameters, thirty-two models were built on the preprocessed University of Gondar ART program dataset. The performance of the models was evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model for predicting the status of HIV patients with drug regimen substitution is a pruned J48 decision tree with a classification accuracy of 98.01%. This study extracts interesting attributes such as ever taking Cotrim, ever taking TbRx, CD4 count, age, weight, and gender so as to predict the status of drug regimen substitution. The outcome of this study can be used as an assistive tool for clinicians to help them make more appropriate drug regimen substitutions. Future research directions are suggested to come up with an applicable system in the area of the study.
Keywords: HIV drug regimen, data mining, hybrid methodology, predictive model
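A minimal sketch of the classification step, assuming Python with scikit-learn instead of WEKA: a cost-complexity-pruned CART tree stands in for the pruned J48 (C4.5) model, and the file name and column names are hypothetical placeholders mirroring the attributes listed above:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical file and columns mirroring the attributes in the abstract.
df = pd.read_csv("art_regimen_change.csv")
features = ["ever_cotrim", "ever_tbrx", "cd4_count", "age", "weight", "gender"]
X = pd.get_dummies(df[features])           # one-hot encode categorical columns
y = df["living_status"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# ccp_alpha > 0 enables cost-complexity pruning, analogous to a pruned J48 tree.
tree = DecisionTreeClassifier(ccp_alpha=0.001, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, tree.predict(X_te)))
```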
Procedia PDF Downloads 142
5907 The Impact of COVID-19 on Antibiotic Prescribing in Primary Care in England: Evaluation and Risk Prediction of the Appropriateness of Type and Repeat Prescribing
Authors: Xiaomin Zhong, Alexander Pate, Ya-Ting Yang, Ali Fahmi, Darren M. Ashcroft, Ben Goldacre, Brian Mackenna, Amir Mehrkar, Sebastian C. J. Bacon, Jon Massey, Louis Fisher, Peter Inglesby, Kieran Hand, Tjeerd van Staa, Victoria Palin
Abstract:
Background: This study aimed to predict risks of potentially inappropriate antibiotic type and repeat prescribing and to assess changes during COVID-19. Methods: With the approval of NHS England, we used the OpenSAFELY platform to access the TPP SystmOne electronic health record (EHR) system and selected patients prescribed antibiotics from 2019 to 2021. Multinomial logistic regression models predicted the patient's probability of receiving an inappropriate antibiotic type or repeating the antibiotic course for each common infection. Findings: The population included 9.1 million patients with 29.2 million antibiotic prescriptions. 29.1% of prescriptions were identified as repeat prescribing. Those with a same-day incident infection coded in the EHR had considerably lower rates of repeat prescribing (18.0%), and 8.6% had a potentially inappropriate type. No major changes in the rates of repeat antibiotic prescribing during COVID-19 were found. In the ten risk prediction models, good levels of calibration and moderate levels of discrimination were found. Important predictors included age, prior antibiotic prescribing, and region. Patients varied in their predicted risks. For sore throat, the range from the 2.5th to the 97.5th percentile was 2.7 to 23.5% (inappropriate type) and 6.0 to 27.2% (repeat prescription). For otitis externa, these numbers were 25.9 to 63.9% and 8.5 to 37.1%, respectively. Interpretation: Our study found no evidence of changes in the level of inappropriate or repeat antibiotic prescribing after the start of COVID-19. Repeat antibiotic prescribing was frequent and varied according to regional and patient characteristics. There is a need for treatment guidelines to be developed around antibiotic failure, and for clinicians to be provided with individualised patient information.
Keywords: antibiotics, infection, COVID-19 pandemic, antibiotic stewardship, primary care
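A minimal sketch of a multinomial logistic regression risk model of this kind, assuming scikit-learn; the predictors and outcome classes below are synthetic stand-ins, not the OpenSAFELY data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy stand-in: predictors such as age, prior antibiotic count, region code;
# outcome classes: appropriate / inappropriate type / repeat prescription.
rng = np.random.default_rng(0)
X = np.c_[rng.integers(0, 95, 1000),       # age
          rng.poisson(2, 1000),            # prior prescriptions
          rng.integers(0, 9, 1000)]        # region code
y = rng.integers(0, 3, 1000)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(multi_class="multinomial", max_iter=1000),
)
model.fit(X, y)
risk = model.predict_proba(X[:5])          # per-patient predicted risks
```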
Procedia PDF Downloads 120
5906 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers' acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing, and pattern of residuals. Good regression models were found, the R² values being 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to a good fat segmentation, making this visual approach for the quantification of the different fat fractions in dry-cured ham slices sufficiently simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. Therefore, the system will be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
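A minimal sketch of this kind of pipeline, assuming Python with OpenCV: cv2.Canny internally performs the noise reduction, gradient, and hysteresis stages listed above, while the file name, blur kernel, and thresholds are illustrative assumptions rather than the paper's calibrated values:

```python
import cv2
import numpy as np

img = cv2.imread("ham_slice.png")              # scanned slice (placeholder path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)       # noise reduction
edges = cv2.Canny(blur, 50, 150)               # gradient + hysteresis thresholds
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

# Fat appears bright on the slice: an intensity threshold gives a fat mask,
# and the enhanced edges help separate touching fat regions.
_, fat_mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
fat_mask[closed > 0] = 0                       # cut regions along strong edges
fat_pct = 100.0 * np.count_nonzero(fat_mask) / fat_mask.size
print(f"fat fraction of image area: {fat_pct:.1f}%")
```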
Procedia PDF Downloads 176
5905 Gender Differences in Adolescent Avatars: Gender Consistency and Masculinity-Femininity of Nicknames and Characters
Authors: Monika Paleczna, Małgorzata Holda
Abstract:
Choosing an avatar's gender in a computer game is one of the key elements in the process of creating an online identity. The selection of a male or female avatar can define the entirety of subsequent decisions regarding both appearance and behavior. However, when the most popular games available for the Nintendo console in 1998 were analyzed, it turned out that 41% of computer games did not have female characters. Nowadays, players create their avatars based mainly on binary gender classification, with male and female characters to choose from. The main aim of the poster is to explore gender differences in adolescent avatars. 130 adolescents aged 15-17 participated in the study. They created their avatars and then played a computer game. The creation of the avatar was based on the choice of gender, then physical and mental characteristics. Data on gender consistency (consistency between the participant's sex and the gender selected for the avatar) and the masculinity-femininity of avatar nicknames and appearance will be presented. The masculinity-femininity of avatar nicknames and appearance was assessed by expert raters on a very masculine to very feminine scale. Additionally, data on the relationships of the perceived levels of masculinity-femininity with hostility-friendliness and the intelligence of avatars will be shown. The dimensions of hostility-friendliness and intelligence were also assessed by expert raters, on scales ranging from very hostile to very friendly and from very low intelligence to very high intelligence.
Keywords: gender, avatar, adolescence, computer games
Procedia PDF Downloads 215
5904 Age and Gender Differences in Positive Solitude Preferences
Authors: Sharon Ost Mor, Yuval Palgi, Dikla Segel-Karpas
Abstract:
Solitude and positive solitude (PS) are used interchangeably in the literature, yet they have different attributes and effects. While solitude might have devastating outcomes such as depression or health deterioration, PS has beneficial outcomes. Yet neither solitude nor PS has a clear, unanimous definition. Most research focuses on solitude, while the phenomenon of PS is somewhat neglected. Most research deals with young people and adults, while the current research is interested in concepts of PS especially in old age. A qualitative study with 124 participants was performed in order to understand the essence of PS in different age groups. The findings revealed seven categories related to PS: quietness, religious and spiritual experience, escapism, experience in nature or abroad, controlling stress or thoughts, facilitating achievements, and recreation-hobbies-routines. Moreover, three meta-themes emerged: PS is a matter of choice, it is meaningful, and it is enjoyable. One stand-alone category was found: PS preconditions. Differences between younger and older adults were found in several categories and in PS preconditions, while the meta-themes were mentioned equally by all participants. Based on the participants' reflections and descriptions, a new PS paradigm was built and will be presented, as well as a new definition of PS. PS was renamed 'soulitude' in order to emphasize its positive nature. Conclusions: PS serves most people well, yet it has different attributes at different ages. By giving PS a unanimous definition and by understanding its contribution for the elderly, PS might be addressed as a legitimate, stand-alone phenomenon. The paradigm might serve theory as well as clinicians in further PS research.
Keywords: old-old, positive solitude, solitude, soulitude
Procedia PDF Downloads 133
5903 Repellent Activity of Nanoemulsion Essential Oil of Eucalyptus Globulus Labill on Ephestia kuehniella (Lepidoptera: Pyralidae)
Authors: Lena Emamjomeh, Sohrab Imani
Abstract:
Nowadays, the use of encapsulation technology for pesticides increases the efficiency and enables the controlled release of these substances. Controlled release by nanoencapsulated formulations allows the essential oil to be used more effectively over a given time interval, suits the mode of application, and minimizes environmental damage. The essential oil from Eucalyptus globulus exhibited an average yield of 1.19% and presented 1,8-cineole (59.08%) as the major component. The nanoemulsion of the essential oil was prepared by the gum-maltodextrin method using homogenization, and its morphology and size were determined by TEM. Several concentrations were prepared, and then third-instar larvae of E. kuehniella were introduced into each treatment. Repellent activity was determined 1, 3, and 24 h after commencement. This study reveals that at a concentration of 1.5 ppm, the nanoemulsion of E. globulus essential oil on the flour disc possessed more repellent activity against E. kuehniella (85%) after 24 h than the natural essential oil before formulation (5%). The repellent activity varied with application method, concentration, and exposure time. The results showed higher repellency for the nanoemulsion than for the essential oil, because controlled-release formulations allow smaller quantities of essential oil to be used more effectively over a given time interval. The findings led to the conclusion that encapsulation technology can enhance the controlled release and persistence of essential oils under controlled conditions.
Keywords: nanoemulsion, Eucalyptus globulus, Ephestia kuehniella, TEM
Procedia PDF Downloads 50
5902 Modelling Hydrological Time Series Using Wakeby Distribution
Authors: Ilaria Lucrezia Amerise
Abstract:
The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be the appropriate choice for the interpolation of hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of probability weighted moments (PWM), although this has often shown difficulty of convergence, or rather, convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation situations. The reasons for this lie in the sampling and asymptotic properties of maximum likelihood estimators, which improve the estimates obtained by providing indications of their variability and, therefore, their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
Keywords: generalized extreme values, likelihood estimation, precipitation data, Wakeby distribution
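Since the Wakeby distribution is defined through its quantile function rather than a closed-form density, a short sketch helps fix notation: using the standard five-parameter form x(F) = ξ + (α/β)[1 − (1−F)^β] − (γ/δ)[1 − (1−F)^(−δ)], samples can be drawn by inverse transform; the parameter values below are arbitrary illustrations:

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function (5 parameters), defined directly on
    probabilities F in (0, 1); a heavy right tail arises for delta > 0."""
    F = np.asarray(F)
    return (xi + (alpha / beta) * (1.0 - (1.0 - F)**beta)
               - (gamma / delta) * (1.0 - (1.0 - F)**(-delta)))

# Inverse-transform sampling: uniform draws pushed through the quantile function.
rng = np.random.default_rng(0)
sample = wakeby_quantile(rng.uniform(size=10_000), xi=0.0,
                         alpha=3.0, beta=0.2, gamma=0.5, delta=0.3)
```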
Procedia PDF Downloads 139
5901 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Prediction of Future Data
Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill
Abstract:
Health-care management systems are of great interest because they provide simple and fast management of all aspects relating to a patient, not necessarily medical. Moreover, there are more and more cases of pathologies in which diagnosis and treatment can only be carried out by using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for their storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets through algorithms and techniques drawn from the fields of statistics, machine learning, and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted by forecasting the independent variables. Forecast control must be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences regarding a new product; the consumer preference models provide a platform whereby product developers can decide on the engineering characteristics in order to satisfy consumer preferences before developing the product. Recent analysis shows that these fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model over a linear regression model.
Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function
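A minimal sketch of the model comparison proposed in the last sentence, assuming Python with NumPy: a linear fit and a log-linear (exponential) fit to the same synthetic series, compared by the sum of squared errors:

```python
import numpy as np

t = np.arange(24)                          # e.g., 24 monthly observations
y = 12.0 * np.exp(0.08 * t) + np.random.default_rng(2).normal(0, 2, 24)

lin = np.polyfit(t, y, 1)                  # linear model y = m*t + c
lin_pred = np.polyval(lin, t)

# Exponential model y = a * exp(k*t), fitted as a line in log space.
k, log_a = np.polyfit(t, np.log(np.clip(y, 1e-9, None)), 1)
exp_pred = np.exp(log_a) * np.exp(k * t)

def sse(pred):                             # sum of squared errors
    return float(np.sum((y - pred) ** 2))

print("SSE linear:", sse(lin_pred), " SSE exponential:", sse(exp_pred))
```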
Procedia PDF Downloads 279
5900 Interval Bilevel Linear Fractional Programming
Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi
Abstract:
The Bilevel Programming (BP) model has been presented for a decision-making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a static two-person game (the leader player in the upper level and the follower player in the lower level) wherein each player tries to optimize his/her personal objective function under interdependent constraints; this game is sequential and non-cooperative. The decision variables are divided between the two players, and each one's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower), where the constraint region of the upper-level problem is implicitly determined by the lower-level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is to say, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals, and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived, and then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem. We can then find the best and the worst optimal values of the leader's objective function by two algorithms.
Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients
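As a reminder of the mechanics behind the reduction mentioned above, the classical (single-level) Charnes-Cooper transformation turns a linear fractional program into a linear one; the bilevel extension applies the same substitution level-wise. Starting from

```latex
\min_{x}\ \frac{c^{\top}x+\alpha}{d^{\top}x+\beta}
\quad\text{s.t.}\quad Ax\le b,\ x\ge 0,\ d^{\top}x+\beta>0,
```

the substitution t = 1/(dᵀx + β) and y = t·x yields the equivalent linear program

```latex
\min_{y,\,t}\ c^{\top}y+\alpha t
\quad\text{s.t.}\quad Ay-bt\le 0,\quad d^{\top}y+\beta t=1,\quad y\ge 0,\ t\ge 0.
```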
Procedia PDF Downloads 447
5899 Printing Imperfections: Development of Buckling Patterns to Improve Strength of 3D Printed Steel Plated Elements
Authors: Ben Chater, Jingbang Pan, Mark Evernden, Jie Wang
Abstract:
Traditional structural steel manufacturing routes normally produce prismatic members with flat plate elements. In these members, plate instability in the lowest buckling mode often dominates failure. The current study proposes using the new technology of metal 3D printing to print steel plated elements with predefined imperfection patterns that can lead to higher modes of failure with increased buckling resistance. To this end, a numerical modeling program is carried out to explore various combinations of predefined buckling waves with different amplitudes in stainless steel square hollow section stub columns. Their stiffness, strength, and material consumption are assessed against traditional structural steel members with the same nominal dimensions. It is found that, depending on the slenderness of the plate elements, it is possible for an 'imperfect' steel member to achieve up to a 30% increase in strength with just a 3% increase in material consumption. The obtained results shed some light on the significant potential of the new metal 3D printing technology in achieving unprecedented material efficiency and economical design in the future steel construction industry.
Keywords: 3D printing, additive manufacturing, buckling resistance, steel plate buckling, structural optimisation
Procedia PDF Downloads 144
5898 Application of NBR 14861:2011 for the Design of Prestressed Hollow Core Slabs Subjected to Shear
Authors: Alessandra Aparecida Vieira França, Adriana de Paula Lacerda Santos, Mauro Lacerda Santos Filho
Abstract:
The purpose of this research is to study the behavior of precast prestressed hollow core slabs subjected to shear. In order to achieve this goal, shear tests were performed using hollow core slabs 26.5 cm thick, with and without a concrete cover of 5 cm, without filled cores, with two cores filled, and with three cores filled with concrete. The tests were performed according to the procedures recommended by FIP (1992) and EN 1168:2005, and following the method presented in Costa (2009). The ultimate shear strength obtained in the tests was compared with the theoretical shear resistance calculated in accordance with the codes used in Brazil, namely NBR 6118:2003 and NBR 14861:2011. When calculating the shear resistance through the equations presented in NBR 14861:2011, it was found that this provision is much more accurate for calculating the shear strength of hollow core slabs than the NBR 6118 code. Due to the large difference between the calculated results, even for slabs without filled cores, the authors consulted the committee that drafted NBR 14861:2011 and found that there is an error in the text of the standard: the suggested coefficient is actually twice the required value. The ABNT later issued an amendment to NBR 14861:2011 with the necessary corrections. During the tests for the present study, it was confirmed that the concrete filling the cores contributes to an increase in the shear strength of hollow core slabs. But in the case of slabs 26.5 cm thick, the quantity should be limited to a maximum of two filled cores, because most of the results for slabs with three filled cores were smaller. This confirmed the recommendation of NBR 14861:2011, which is consistent with standard practice. After analyzing the cracking configuration and failure mechanisms of the hollow core slabs during the shear tests, strut-and-tie models were developed representing the forces acting on the slab at the moment of rupture. Through these models, the authors were able to calculate the tensile stress acting on the concrete ties (ribs) and to scale the geometry of these ties. The conclusions of the research are the following: the experimental results have shown that the failure mechanism of hollow core slabs can be predicted using the strut-and-tie procedure within a good range of accuracy; the Brazilian standard needed correction to revise the duplicated factor σcp (in NBR 14861:2011); and the number of cores (holes) filled with concrete should be limited in order to increase the shear resistance of the slab. It is also suggested to increase the number of test results for slabs 26.5 cm thick, and for a larger range of slab thicknesses, in order to obtain results of shear tests with cores concreted after the release of the prestressing force. Another set of shear tests must be performed on slabs with filled cores and a concrete cover reinforced with welded steel mesh, for comparison with the theoretical values calculated by the new revision of the standard NBR 14861:2011.
Keywords: prestressed hollow core slabs, shear, strut-and-tie models
Procedia PDF Downloads 333
5897 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells
Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez
Abstract:
Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. In the case of cancerous cells, these take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications, being also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a spectrum from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical ones to computational ones. Regarding cellular and molecular processes in cancer, their study has also found valuable support in different simulation tools that, covering a similar spectrum, have allowed the in silico experimentation of this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. This work demonstrates the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposes key molecules that may prevent the arrival of malignant signals to the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway has in cellular communication and, therefore, in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the transformation of the cells that surround a cancerous cell.
Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation
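A minimal sketch of Gillespie's direct method, the stochastic engine named above; the two-reaction ligand-receptor system is an invented toy stand-in for the Wnt/β-catenin reactions, not Cellulat's actual model:

```python
import numpy as np

def gillespie(x0, stoich, rates, propensity, t_max, rng):
    """Gillespie's direct method: draw the time to the next reaction from an
    exponential with the total propensity, then pick which reaction fired."""
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 <= 0:                       # no reaction can fire any more
            break
        t += rng.exponential(1.0 / a0)
        j = rng.choice(len(a), p=a / a0)  # reaction index, weighted by rate
        x += stoich[j]
        times.append(t); states.append(x.copy())
    return np.array(times), np.array(states)

# Toy ligand-receptor binding: L + R -> LR (k1), LR -> L + R (k2)
stoich = np.array([[-1, -1, 1], [1, 1, -1]])
prop = lambda x, k: np.array([k[0] * x[0] * x[1], k[1] * x[2]])
ts, xs = gillespie([100, 50, 0], stoich, [0.002, 0.1], prop,
                   t_max=50.0, rng=np.random.default_rng(0))
```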
Procedia PDF Downloads 249
5896 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur
Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh
Abstract:
The blast furnace is a counter-current process where burden descends from the top and hot gases ascend from the bottom and chemically reduce iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace. Sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. The situation becomes more adverse when blast furnaces are operated at a low coke rate and high coal injection rate with adverse raw materials like high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace at Tata Steel was able to reduce the coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, figures which are close to world benchmarks, and to expand profitability. To sustain this regime, the elimination of blast furnace irregularities like hanging, channeling, and scaffolding is essential. In this paper, the sustaining of a zero hanging spell for three consecutive years under low coke rate operation, through improvement in burden characteristics and burden distribution, changes in the slag regime, casting practices, and adequate automation of furnace operation, is illustrated. Models have been created to comprehend and upgrade the understanding of the blast furnace process. A model has been developed to predict and maintain slag viscosity in the desired range so as to attain proper burden permeability. A channeling prediction model has also been developed to recognize channeling symptoms so that early actions can be initiated. The models have helped to a great extent in standardizing the control decisions of operators at the H Blast Furnace of Tata Steel, Jamshedpur, and thus achieving process stability for the last three years.
Keywords: hanging, channelling, blast furnace, coke
Procedia PDF Downloads 195
5895 Of Rites of Narration and Representation of Orient and Occident in Thomas Heywood's Fair Maid of the West
Authors: Tarik Bouguerba
Abstract:
Thomas Heywood was an outstanding, prolific playwright of the period, writing both in prose and verse. Unlike Shakespeare in particular, Heywood could be considered a playwright who was well informed about Morocco and wrote in greater detail about a possible dialogue among cultures. As a historical platform for power relations, The Fair Maid of the West recalls the heroism and excitement of English counterattacks against Spain in the post-Armada period. This paper therefore pins down the acts of narration and representation of Morocco and Moroccans, examines how the Occident has contributed to the production of the Orient, and finally attests to the metamorphosis the plot undergoes in Part I and Part II. As an adventure play, The Fair Maid of the West teaches about, informs of, and confirms the existing patterns of virtue in European voyagers, and at the same time it asserts how honor and chastity are European par excellence, whereas villainy and wickedness are Oriental assets. Once taken captive, these virtues and traits are put to the test as the plot disentangles. This paper also examines how the play, in both parts, generates a whole history of stereotypes about Morocco and unexpectedly subverts them; such a biased mode of narration of the Orient, which the playwright took up at first, was played down at a later phase of the narrative.
Keywords: Heywood, Occident, Orientalism, Stereotype, Virtue
Procedia PDF Downloads 139
5894 Fires in Historic Buildings: Assessment of Evacuation of People by Computational Simulation
Authors: Ivana R. Moser, Joao C. Souza
Abstract:
Building fires are random phenomena that can be extremely violent, and the safe evacuation of people is the most reliable tactic for saving lives. The correct evacuation of buildings, and of other spaces occupied by people, means leaving the place in a short time and by the appropriate route. It depends on the individual's perception of spaces, the architectural layout, and the presence of appropriate routing systems. As historical buildings were constructed in other times, when, in general, current safety requirements were not yet in place, it is necessary to adapt these spaces to make them safe. Computer models of evacuation simulation are widely used tools for assessing the safety of people in a building or gathering site, and when they are associated with the analysis of human behaviour, the results of emergency evacuation studies become more accurate and conclusive. The objective of this research is the performance evaluation of buildings of historical interest, regarding the safe evacuation of people, through computer simulation using the PTV Viswalk software. The building under study is the Colégio Catarinense, a centennial building located in the city of Florianópolis, Santa Catarina, Brazil. The software uses variables of human behaviour such as avoiding collision with other pedestrians and avoiding obstacles. Scenarios were run on the three-dimensional models, and the contribution to safety in risk situations was verified as an alternative measure, especially given the impossibility of applying the measures foreseen by the current fire safety codes in Brazil. The simulations determined the evacuation times in normal and emergency situations, and indicate the bottlenecks and critical points of the studied buildings, in order to seek solutions that prevent and correct these undesirable events. It is understood that adopting an advanced computational performance-based approach promotes greater knowledge of the building and of how people behave in these specific environments in emergency situations.
Keywords: computer simulation, escape routes, fire safety, historic buildings, human behavior
Procedia PDF Downloads 187
5893 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations
Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang
Abstract:
Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows the impact of the information's dissemination to be controlled in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states, or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNN-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.
Keywords: source identification, ordinary differential equations, label propagation, complex networks
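A minimal sketch of the label propagation initialization stage described above, assuming Python with NetworkX and NumPy; the graph, the observed infections, and the mixing weight are invented for illustration, and the GNN-coupled ODE architecture that follows this stage in ODESI is beyond a short sketch:

```python
import numpy as np
import networkx as nx

def propagate_states(G, infected, n_iter=10):
    """Simple label propagation: each node's state vector is repeatedly
    averaged with its neighbours', spreading the observed infection density."""
    A = nx.to_numpy_array(G)
    P = np.diag(1.0 / np.maximum(A.sum(1), 1)) @ A  # row-stochastic transitions
    S = np.zeros((G.number_of_nodes(), 2))
    S[:, 0] = 1.0                          # susceptible channel
    S[list(infected)] = [0.0, 1.0]         # infected channel
    for _ in range(n_iter):
        S = 0.5 * S + 0.5 * P @ S          # mix self and neighbourhood states
        S[list(infected)] = [0.0, 1.0]     # clamp the observed infections
    return S

G = nx.karate_club_graph()
scores = propagate_states(G, infected={0, 1, 2})
source_guess = int(np.argmax(scores[:, 1]))  # densest infected neighbourhood
```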
Procedia PDF Downloads 20
5892 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model
Authors: Danjuma Bawa
Abstract:
This paper aimed at exploring the capabilities of the location-allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. The paper was designed to provide a blueprint to the Nigerian government and other donor agencies, especially the Fertilizer Distribution Initiative (FDI) of the federal government, for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcased how the location-allocation model (L-AM), alongside central place theory (CPT), was applied in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, exploit their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely qualitative spatial research which largely used secondary data such as the spatial location and distribution of settlements, population figures of settlements, the network of roads linking them, and other landform features. These were sourced from government ministries and an open-source consortium. GIS was used as a tool for processing and analyzing such spatial features within the dicta of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements that could stand as service centers to other hinterlands; this was accomplished using the query syntax in ArcMap. ArcGIS Network Analyst was used to conduct the location-allocation analysis, apportioning groups of settlements around service centers within a given threshold distance. Most of the techniques and models ever used by utility planners have been centered on straight-line (Euclidean) distance to settlements. Such models neglect impedance cutoffs and the routing capabilities of networks. CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four (4) existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Out of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the giant strides of the federal government of Nigeria by providing a blueprint for ensuring the proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. It will at the same time help boost agricultural activities, thereby lowering food shortages and raising per capita income, as espoused by the government.
Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics
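A minimal sketch of the allocation step, assuming Python with NumPy: each settlement is assigned to its nearest depot within an impedance cutoff; straight-line distance on synthetic coordinates stands in for the road-network distances that ArcGIS Network Analyst would use:

```python
import numpy as np

def allocate(settlements, centers, cutoff):
    """Assign each settlement to its nearest service center, but only when
    that center lies within the impedance cutoff (Euclidean stand-in for
    network distance)."""
    assignment = np.full(len(settlements), -1)
    for i, s in enumerate(settlements):
        d = np.linalg.norm(centers - s, axis=1)
        j = int(np.argmin(d))
        if d[j] <= cutoff:
            assignment[i] = j
    return assignment

rng = np.random.default_rng(0)
villages = rng.uniform(0, 50, size=(300, 2))   # synthetic coordinates, km
depots = rng.uniform(0, 50, size=(24, 2))      # 4 existing + 20 proposed
served = allocate(villages, depots, cutoff=2.0)
coverage = np.mean(served >= 0)                # share of settlements served
```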
Procedia PDF Downloads 147
5891 Flood Control Structures in the River Göta Älv to Protect Gothenburg City (Sweden) during the 21st Century: Preliminary Evaluation
Authors: M. Irannezhad, E. H. N. Gashti, U. Moback, B. Kløve
Abstract:
Climate change, because of increases in the concentration of greenhouse gas emissions to the atmosphere, will result in a mean sea level rise of about +1 m by 2100. To prevent coastal floods resulting from sea level rise, different flood control structures have been built, e.g., the Thames Barrier on the River Thames in London (UK), with acceptable protection levels, at least so far. Gothenburg, located on the southwest coast of Sweden with the River Göta älv running through it, is one of the cities vulnerable to accelerated rises in mean sea level. Developing a water level model in MATLAB, we evaluated the use of a sea barrage in the Göta älv as a flood control structure for protecting the city of Gothenburg during this century. Considering three operational scenarios for two barriers, upstream and downstream, the highest sea level was estimated at +2.95 m above the current mean sea level by 2100. To provide flood protection against such high sea levels, both barriers have to be closed. To prevent high water levels in the River Göta älv reservoir, the barriers would be opened when the sea level is low. The suggested flood control structures would successfully protect the city from flooding events during this century.
Keywords: climate change, flood control structures, Gothenburg, sea level rise, water level model
Procedia PDF Downloads 356
5890 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity with allowance for the alteration layer). The trained models were then applied to the large-scale industrial experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily lying within the experimental uncertainty of the test data. Furthermore, machine learning can predict glass dissolution model parameter behavior, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
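A minimal sketch of an ensemble property model of this kind, assuming scikit-learn; the oxide fractions, temperatures, and viscosity relation below are synthetic stand-ins, not the industrial datasets:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in: oxide fractions plus temperature -> log viscosity.
rng = np.random.default_rng(0)
comp = rng.dirichlet(np.ones(6), size=2000)       # 6 oxide mole fractions
temp = rng.uniform(900, 1300, size=(2000, 1))     # melt temperature, deg C
X = np.hstack([comp, temp])
log_eta = 4.0 + 8.0 * comp[:, 0] - 0.004 * temp[:, 0] + rng.normal(0, 0.1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_eta, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```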
Procedia PDF Downloads 116
5889 Mechanical Response of Aluminum Foam Under Biaxial Combined Quasi-Static Compression-Torsional Loads
Authors: Solomon Huluka, Akrum Abdul-Latif, Rachid Baleh
Abstract:
Metal foams have been developed intensively as a new class of materials for the last two decades due to their unique structural and multifunctional properties. The aim of this experimental work was to characterize the effect of biaxial loading complexity (combined compression-torsion) on the plastic response of open-cell aluminum foams with a highly uniform architecture of spherical pores and a density of 80%. For foam manufacturing, the Kelvin cell model was used to generate the generally spherical pore shape with a cell diameter of 11 mm. A patented rig called ACTP (Absorption par Compression-Torsion Plastique) was used to investigate the foam response under quasi-static complex loading paths having different torsional components (i.e., 0°, 45°, and 60°). The key mechanical responses examined are the yield stress, the stress plateau, and the energy absorption capacity. The collapse mode was also investigated. It was concluded that the higher the loading complexity, the greater the yield strength and the energy absorption capacity of the foam. Experimentally, it was also noticed that large softening effects occurred after the first peak stress for both biaxial-45° and biaxial-60° loading.
Keywords: aluminum foam, loading complexity, characterization, biaxial loading
Procedia PDF Downloads 142
5888 Determinants of Market Entry Modes Used by Universities to Expand Internationally
Authors: Ali Bhayani
Abstract:
The article analyses the determinants of the market entry modes used by corporate firms to expand internationally and explores whether higher education institutions use the same determinants to decide on the mode adopted to enter a market. Determinants like transaction costs, location advantage, idiosyncratic capabilities, isomorphic pressure to mimic, psychic distance, uncertainty, risks, control over the academic process, previous internationalisation experience, and entry into homogeneous markets are considered with regard to universities. A sample consisting of 40+ branch campuses from the United Arab Emirates (UAE), host to the highest number of branch campuses, is selected to study the determinants of the entry modes adopted. The aim of this article is not to prescribe or offer the best available model of market entry that can be adopted by universities, but rather to act as a trigger for a critical check-up for universities planning to internationalize their offering. Determinants like idiosyncratic capabilities, isomorphic pressure, and control over the academic process were found to be most prevalent. However, determinants like transaction cost efficiency, internationalisation experience, psychic distance, uncertainty, and risks were not significant factors.
Keywords: higher education, UAE, internationalisation, market entry, international branch campuses
Procedia PDF Downloads 349
5887 Numerical Simulation of Punching Shear of Flat Plates with Low Reinforcement
Authors: Fatema-Tuz-Zahura, Raquib Ahsan
Abstract:
Punching shear failure is usually the governing failure mode of flat plate structures. Punching failure is brittle in nature, which makes this type of structure more vulnerable. In the present study, a 3D finite element model of a flat plate with a low reinforcement ratio and without any transverse reinforcement has been developed. Punching shear stress and deflection data were obtained on the surface of the flat plate as well as through the thickness of the model from numerical simulations. The obtained data were compared with the experimental results. The variation of punching stress with respect to deflection obtained from the numerical results is found to be in good agreement with the experimental results; the variation of punching stress is within 5%. The numerical simulation shows an early and gradual onset of nonlinearity, whereas it is late and abrupt in the experimental results. The variation of punching stress for different slab thicknesses between experimental and numerical results is less than 15%. The developed numerical model is useful to complement available punching test series performed in the past. The results obtained from the numerical model will be helpful for designing retrofitting schemes for flat plates.
Keywords: flat plate, finite element model, punching shear, reinforcement ratio
Procedia PDF Downloads 257
5886 Standalone Docking Station with Combined Charging Methods for Agricultural Mobile Robots
Authors: Leonor Varandas, Pedro D. Gaspar, Martim L. Aguiar
Abstract:
One of the biggest concerns in the field of agriculture is the energy efficiency of the robots that will perform agricultural activities and their charging methods. In this paper, two different charging methods for agricultural standalone docking stations are presented, which take into account several variables such as field size and its irregularities, the nature of the work the robot will perform, and the deadlines that have to be respected, among others. Their features also depend on the orchard, the season, the battery type, and its technical specifications and cost. The first charging base method focuses on wireless charging, presenting more benefits for small fields. The second charging base method relies on battery replacement, being more suitable for large fields, thus avoiding robot stops for recharging. Among the many existing methods to charge a battery, CC-CV was considered the most appropriate for both its simplicity and its effectiveness. The choice of the battery for agricultural purposes is of utmost importance. While the most common battery used is the Li-ion battery, this study also discusses the use of a new type of graphene-based battery with 45% more capacity than the Li-ion one. A Battery Management System (BMS) is applied for battery balancing. All these approaches combined proved to be a promising way to improve much technical agricultural work, not just in terms of planting and harvesting but also regarding every technique to prevent harmful events like plagues and weeds, or even to reduce crop time and cost.
Keywords: agricultural mobile robot, charging methods, battery replacement method, wireless charging method
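A minimal sketch of the CC-CV (constant current, constant voltage) logic named above, assuming Python; the cell model, internal resistance, and thresholds are arbitrary illustrative values, not the robot pack's specification:

```python
def cc_cv_charge(capacity_ah, v_max=4.2, i_cc=2.0, i_cutoff=0.1,
                 r_int=0.05, dt_s=1.0):
    """Toy CC-CV profile: hold constant current until the terminal voltage
    reaches v_max, then hold v_max while the current decays to the cutoff."""
    soc, i, t = 0.0, i_cc, 0.0
    log = []
    while i > i_cutoff and soc < 1.0:
        ocv = 3.0 + 1.2 * soc                 # crude open-circuit voltage model
        if ocv + i * r_int >= v_max:          # CV phase: clamp terminal voltage
            i = max((v_max - ocv) / r_int, 0.0)
        soc = min(soc + i * dt_s / 3600.0 / capacity_ah, 1.0)
        t += dt_s
        log.append((t, soc, i))
    return log

profile = cc_cv_charge(capacity_ah=20.0)      # e.g., a 20 Ah robot pack
```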
Procedia PDF Downloads 149