Search results for: median models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7048

4558 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur

Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh

Abstract:

The blast furnace is a counter-current process in which burden descends from the top while hot gases ascend from the bottom and chemically reduce iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace. Sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. The situation becomes more adverse when blast furnaces are operated at a low coke rate and high coal injection rate with adverse raw materials such as high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace at Tata Steel was able to reduce the coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, figures close to world benchmarks, and to expand profitability. To sustain this regime, the elimination of blast furnace irregularities such as hanging, channeling, and scaffolding is essential. This paper illustrates how a zero-hanging spell was sustained for three consecutive years under low coke rate operation through improvements in burden characteristics and burden distribution, changes in the slag regime, casting practices, and adequate automation of furnace operation. Models have been created to improve understanding of the blast furnace process. One model predicts how to maintain slag viscosity in the desired range to attain proper burden permeability. A channeling prediction model has also been developed to recognise channeling symptoms so that early action can be taken. These models have helped to a great extent in standardizing the control decisions of operators at H Blast Furnace, Tata Steel, Jamshedpur, and thus in achieving process stability over the last three years.

Keywords: hanging, channelling, blast furnace, coke

Procedia PDF Downloads 180
4557 Fires in Historic Buildings: Assessment of Evacuation of People by Computational Simulation

Authors: Ivana R. Moser, Joao C. Souza

Abstract:

Building fires are random phenomena that can be extremely violent, and the safe evacuation of people is the surest tactic for saving lives. Correct evacuation of buildings, and of other spaces occupied by people, means leaving the place in a short time and by an appropriate route. It depends on the individual's perception of the spaces, the architectural layout, and the presence of appropriate routing systems. As historic buildings were constructed in earlier times, generally before current safety requirements existed, these spaces must be adapted to make them safe. Computer evacuation simulation models are widely used tools for assessing the safety of people in buildings and other crowded sites; combined with the analysis of human behaviour, they make emergency evacuation assessments more accurate and conclusive. The objective of this research is the performance evaluation of buildings of historical interest, regarding the safe evacuation of people, through computer simulation using the PTV Viswalk software. The building studied is the Colégio Catarinense, a centennial building located in the city of Florianópolis, Santa Catarina, Brazil. The software incorporates human behaviour variables such as collision avoidance with other pedestrians and obstacle avoidance. Scenarios were run on the three-dimensional models, and the contribution to safety in risk situations was verified as an alternative measure, especially where the measures prescribed by current Brazilian fire safety codes cannot be applied. The simulations verified evacuation times under normal and emergency conditions, and indicated the bottlenecks and critical points of the studied building, in order to seek solutions that prevent and correct these undesirable events. Adopting an advanced computational performance-based approach promotes greater knowledge of the building and of how people behave in these specific environments in emergency situations.

Keywords: computer simulation, escape routes, fire safety, historic buildings, human behavior

Procedia PDF Downloads 178
4556 Hyperthyroidism in a Private Medical Services Center, Addis Ababa: A 5-Year Experience

Authors: Ersumo Tessema, Bogale Girmaye Tamrat, Mohammed Burka

Abstract:

Background: Hyperthyroidism is a common thyroid disorder, especially in women, characterized by increased thyroid hormone synthesis and secretion. The disorder manifests predominantly as Graves' disease in iodine-sufficient areas and has an increasing prevalence in iodine-deficient countries among patients with nodular thyroid disease and following iodine fortification. In Ethiopia, the magnitude of the disorder is unknown and, in Africa, due to the scarcity of resources, its management remains suboptimal. Objective: The aim of this study was to analyze the pattern and management of patients with hyperthyroidism at the United Vision Medical Services Center, Addis Ababa, between August 30, 2013, and February 1, 2018. Patients and methods: The study was a retrospective analysis of the medical records of all patients with hyperthyroidism at the United Vision Private Medical Services Center, Addis Ababa. A questionnaire was filled out; the collected data were entered into a computer and statistically analyzed using the SPSS package. The results were tabulated and discussed with a literature review. Results: A total of 589 patients were included in this study. The median age was 40 years, and the male to female ratio was 1.0:7.9. Most patients (93%) presented with goiter; the associated features of toxic goiter other than weight loss, sweating, and tachycardia were uncommon. The majority of patients presented more than two years after the onset of their presenting symptoms. The most common physical finding (91%), as well as diagnosis, was toxic nodular goiter. The most frequent (83%) derangement in the thyroid function tests was a low thyroid-stimulating hormone, and the most commonly (94%) used antithyroid drug was propylthiouracil. The most common (96%) surgical procedure in 213 patients was near-total thyroidectomy, with a postoperative course without incident in 92% of all patients. Conclusion: The incidence and prevalence of hyperthyroidism are apparently on the increase in Addis Ababa, which may be related to the existing severe iodine deficiency and/or the salt iodization program (iodine-induced hyperthyroidism). Hyperthyroidism predominantly affects women and, in surgical services, toxic nodular goiter is more common than diffuse goiter; the treatment of choice in experienced hands is near-total thyroidectomy.

Keywords: Ethiopia, Graves’ disease, hyperthyroidism, toxic nodular goiter

Procedia PDF Downloads 163
4555 Planning for Location and Distribution of Regional Facilities Using Central Place Theory and Location-Allocation Model

Authors: Danjuma Bawa

Abstract:

This paper explores the capabilities of the location-allocation model in complementing the strides of existing physical planning models in the location and distribution of facilities for regional consumption. It is designed to provide a blueprint for the Nigerian government and donor agencies, especially the federal government's Fertilizer Distribution Initiative (FDI), for the revitalization of terrorism-ravaged regions. Theoretical underpinnings of central place theory related to spatial distribution, interrelationships, and threshold prerequisites were reviewed. The study showcases how the location-allocation model (L-AM) was applied alongside central place theory (CPT) in a Geographic Information System (GIS) environment to map and analyze the spatial distribution of settlements, exploit their physical and economic interrelationships, and explore their hierarchical and opportunistic influences. The study was purely spatial qualitative research which largely used secondary data, such as the spatial location and distribution of settlements, their population figures, the network of roads linking them, and other landform features, sourced from government ministries and open-source consortia. GIS was used as a tool for processing and analyzing these spatial features within the dicta of CPT and L-AM to produce a comprehensive spatial digital plan for the equitable and judicious location and distribution of fertilizer depots in the study area in an optimal way. A population threshold was used as the yardstick for selecting suitable settlements to stand as service centers for other hinterlands; this was accomplished using the query syntax in ArcMap. The ArcGIS Network Analyst was then used to conduct location-allocation analysis for apportioning groups of settlements around such service centers within a given threshold distance, as sketched below. Most of the techniques and models previously used by utility planners have been centered on straight-line (Euclidean) distances to settlements; such models neglect impedance cutoffs and the routing capabilities of networks, whereas CPT and L-AM take into consideration both the influential characteristics of settlements and their routing connectivity. The study was undertaken in two terrorism-ravaged Local Government Areas of Adamawa State. Four existing depots in the study area were identified, and 20 more depots in 20 villages were proposed using suitability analysis. Of the 300 settlements mapped in the study area, about 280 were optimally grouped and allocated to the selected service centers within a 2 km impedance cutoff. This study complements the strides of the federal government of Nigeria by providing a blueprint for ensuring proper distribution of these public goods, in the spirit of bringing succor to the terrorism-ravaged populace. It will at the same time help boost agricultural activities, thereby reducing food shortages and raising per capita income, as espoused by the government.
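
Illustrative sketch (not from the paper): the threshold-based centre selection and nearest-centre allocation described above, reduced to straight-line distances in plain Python. The study itself used ArcGIS Network Analyst with road-network impedance; all settlement names, coordinates, and the population threshold here are invented placeholders, and only the 2 km cutoff comes from the abstract.

```python
import math

# Hypothetical settlement records: (name, population, x_km, y_km)
settlements = [
    ("Village_A", 5200, 0.0, 0.0),
    ("Village_B", 800, 1.2, 0.4),
    ("Village_C", 450, 0.9, -0.8),
    ("Village_D", 6100, 5.0, 3.0),
    ("Village_E", 700, 5.8, 3.5),
]

POP_THRESHOLD = 5000   # population threshold for a service centre (assumed)
CUTOFF_KM = 2.0        # impedance cutoff used in the study

# Step 1: settlements above the population threshold become service centres.
centres = [s for s in settlements if s[1] >= POP_THRESHOLD]

def distance(a, b):
    # Euclidean stand-in for network impedance.
    return math.hypot(a[2] - b[2], a[3] - b[3])

# Step 2: allocate every remaining settlement to the nearest centre
# within the cutoff; settlements beyond the cutoff stay unallocated.
allocation = {}
for s in settlements:
    if s in centres:
        continue
    nearest = min(centres, key=lambda c: distance(s, c))
    if distance(s, nearest) <= CUTOFF_KM:
        allocation[s[0]] = nearest[0]

print(allocation)  # e.g. {'Village_B': 'Village_A', ...}
```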

Keywords: central place theory, GIS, location-allocation, network analysis, urban and regional planning, welfare economics

Procedia PDF Downloads 131
4554 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behaviour of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
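
Illustrative sketch (not from the paper): an ensemble regressor predicting viscosity from composition and temperature, in the spirit of the ensemble methods reported to perform well above. The dataset is synthetic, and the gradient-boosting choice, feature columns, and hyperparameters are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in: oxide fractions plus temperature -> log10(viscosity).
rng = np.random.default_rng(0)
X = rng.random((500, 5))        # e.g. SiO2, B2O3, Na2O, Al2O3 fractions, scaled T
y = 3.0 * X[:, 0] - 1.5 * X[:, 4] + rng.normal(0, 0.1, 500)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An ensemble of shallow trees fitted by gradient boosting.
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)

print("R2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```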

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 103
4553 Thorium Extraction with Cyanex272 Coated Magnetic Nanoparticles

Authors: Afshin Shahbazi, Hadi Shadi Naghadeh, Ahmad Khodadadi Darban

Abstract:

In the magnetically assisted chemical separation (MACS) process, tiny ferromagnetic particles coated with a solvent extractant are used to selectively separate radionuclides and hazardous metals from aqueous waste streams. The contaminant-loaded particles are then recovered from the waste solutions using a magnetic field. In the present study, Cyanex272 or C272 (bis(2,4,4-trimethylpentyl) phosphinic acid) coated magnetic particles were evaluated for possible application in the extraction of thorium (IV) from nuclear waste streams. The uptake behaviour of Th(IV) from nitric acid solutions was investigated in a batch system, with adsorption isotherm and adsorption kinetic studies carried out on the Cyanex272-coated nanoparticles. The factors influencing Th(IV) adsorption were investigated and described in detail as a function of parameters such as initial pH value, contact time, adsorbent mass, and initial Th(IV) concentration. The MACS adsorbent showed the best results for the fast adsorption of Th(IV) from aqueous solution at an aqueous-phase acidity of 0.5 molar. In addition, more than 80% of Th(IV) was removed within the first 2 hours, and the time required to reach adsorption equilibrium was only 140 minutes. The Langmuir and Freundlich adsorption models were used for the mathematical description of the adsorption equilibrium. Equilibrium data agreed very well with the Langmuir model, with a maximum adsorption capacity of 48 mg/g. Adsorption kinetics data were tested using pseudo-first-order, pseudo-second-order, and intra-particle diffusion models. Kinetic studies showed that the adsorption followed a pseudo-second-order kinetic model, indicating that chemical adsorption was the rate-limiting step.
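
Illustrative sketch (not from the paper): fitting the Langmuir isotherm qe = qmax·KL·Ce/(1 + KL·Ce) by nonlinear least squares, the kind of fit that yields the qmax ≈ 48 mg/g reported above. The equilibrium data points below are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Placeholder equilibrium data: Ce (mg/L) and qe (mg/g).
Ce = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0])
qe = np.array([14.0, 22.0, 31.0, 40.0, 44.0, 46.0])

params, _ = curve_fit(langmuir, Ce, qe, p0=[50.0, 0.05])
qmax, KL = params
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```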

Keywords: Thorium (IV) adsorption, MACS process, magnetic nanoparticles, Cyanex272

Procedia PDF Downloads 316
4552 Risk Assessment Tools Applied to Deep Vein Thrombosis Patients Treated with Warfarin

Authors: Kylie Mueller, Nijole Bernaitis, Shailendra Anoopkumar-Dukie

Abstract:

Background: Vitamin K antagonists, particularly warfarin, are the most frequently used oral medications for deep vein thrombosis (DVT) treatment and prophylaxis. Time in therapeutic range (TITR) of the international normalised ratio (INR) is widely accepted as a measure of the quality of warfarin therapy. Multiple factors can affect warfarin control and subsequent adverse outcomes, including thromboembolic and bleeding events. Predictor models have been developed to assess potential contributing factors and measure the individual risk of these adverse events. These predictive models have been validated in atrial fibrillation (AF) patients; however, there is a lack of literature on whether they can be successfully applied to other warfarin users, including DVT patients. Therefore, the aim of the study was to assess the ability of these risk models (HAS-BLED and CHADS2) to predict haemorrhagic and ischaemic incidences in DVT patients treated with warfarin. Methods: A retrospective analysis of DVT patients receiving warfarin management by a private pathology clinic was conducted. Data were collected from November 2007 to September 2014 and included demographics, medical and drug history, INR targets, and test results. Patients receiving continuous warfarin therapy with an INR reference range between 2.0 and 3.0 were included in the study, with mean TITR calculated using the Rosendaal method. Bleeding and thromboembolic events were recorded and reported as incidences per patient. The haemorrhagic risk model HAS-BLED and the ischaemic risk model CHADS2 were applied to the data, and patients were stratified into low, moderate, or high-risk categories. The analysis was conducted to determine whether a correlation existed between risk assessment tool and patient outcomes. Data were analysed using GraphPad Instat Version 3, with a p value of <0.05 considered statistically significant. Patient characteristics were reported as mean and standard deviation for continuous data, and as number and percentage for categorical data. Results: Of the 533 patients included in the study, there were 268 (50.2%) female and 265 (49.8%) male patients, with a mean age of 62.5 years (±16.4). The overall mean TITR was 78.3% (±12.7), with an overall haemorrhagic incidence of 0.41 events per patient. For the HAS-BLED model, there was a haemorrhagic incidence of 0.08, 0.53, and 0.54 per patient in the low, moderate, and high-risk categories respectively, showing a statistically significant increase in incidence with increasing risk category. The CHADS2 model showed an increase in ischaemic events according to risk category, with no ischaemic events in the low category, an ischaemic incidence of 0.03 in the moderate category, and 0.47 in the high-risk category. Conclusion: An increasing haemorrhagic incidence correlated with an increasing HAS-BLED risk score in DVT patients treated with warfarin. Furthermore, a greater incidence of ischaemic events occurred in patients in higher CHADS2 categories. In an Australian population of DVT patients, HAS-BLED and CHADS2 accurately predict incidences of haemorrhagic and ischaemic events respectively.
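
Illustrative sketch (not from the paper): the Rosendaal method used above to compute TITR, in which INR values between consecutive tests are linearly interpolated day by day and the percentage of days inside the 2.0-3.0 target range is reported. The test dates and INR values below are invented.

```python
from datetime import date

# Illustrative INR test results for one patient: (date, INR).
tests = [(date(2014, 1, 1), 1.8), (date(2014, 1, 8), 2.4),
         (date(2014, 1, 22), 3.4), (date(2014, 2, 5), 2.6)]

LOW, HIGH = 2.0, 3.0
in_range = total = 0

for (d0, inr0), (d1, inr1) in zip(tests, tests[1:]):
    days = (d1 - d0).days
    for k in range(days):  # linear interpolation between consecutive tests
        inr = inr0 + (inr1 - inr0) * k / days
        in_range += LOW <= inr <= HIGH
        total += 1

print(f"TITR = {100.0 * in_range / total:.1f}%")
```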

Keywords: anticoagulant agent, deep vein thrombosis, risk assessment, warfarin

Procedia PDF Downloads 253
4551 Detecting Tomato Flowers in Greenhouses Using Computer Vision

Authors: Dor Oppenheim, Yael Edan, Guy Shani

Abstract:

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions, and different flower sizes. The algorithm is designed to be employed on a drone that flies through greenhouses to accomplish tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on the hue, saturation, and value channels is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle, period of the day, camera, and thresholding type were performed. Precision, recall, and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon produced the best precision and recall. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best precision, recall, and F1 score. The average precision and recall over all images when using these values were 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
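
Illustrative sketch (not from the paper): the HSV segmentation step with OpenCV. The hue window 0.12-0.18 from the abstract is on a 0-1 scale and maps to roughly 21-32 on OpenCV's 0-179 hue axis; the saturation/value bounds, kernel size, and component-size filter are assumptions, and the image path is a placeholder.

```python
import cv2
import numpy as np

img = cv2.imread("greenhouse_row.jpg")            # placeholder image path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hue 0.12-0.18 on a 0-1 scale ~ 21-32 on OpenCV's 0-179 scale (yellow).
# Saturation/value bounds are assumed, not from the paper.
lower = np.array([21, 80, 80])
upper = np.array([32, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Morphological opening drops speckle; closing fills small holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Keep connected components in a plausible flower-size range (assumed).
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
flowers = [i for i in range(1, n) if 50 <= stats[i, cv2.CC_STAT_AREA] <= 5000]
print(f"{len(flowers)} candidate flowers detected")
```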

Keywords: agricultural engineering, image processing, computer vision, flower detection

Procedia PDF Downloads 307
4550 Efficiency Enhancement in Solar Panel

Authors: R. S. Arun Raj

Abstract:

In today's climate of growing energy needs and increasing environmental concern, alternatives to the use of non-renewable and polluting fossil fuels have to be investigated. One such alternative is solar energy: the Sun delivers as much energy every hour as mankind consumes in one year. This paper explains solar panel design and presents new models and methodologies for better utilization of solar energy. The innovative idea revolves around minimising the losses in a solar panel that occur as heat. Payback calculations for the implementation of solar panels are also presented.
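
Illustrative sketch (not from the paper): a simple payback calculation of the kind referred to above. Every figure (system cost, annual yield, tariff, degradation rate) is an assumed placeholder.

```python
# Assumed placeholder figures, not from the paper.
system_cost = 4000.0        # installed cost (currency units)
annual_yield_kwh = 1500.0   # energy produced per year for a ~1 kW array
tariff = 0.20               # value of each kWh offset (currency/kWh)
degradation = 0.005         # 0.5% output loss per year

savings, years = 0.0, 0
while savings < system_cost and years < 50:
    savings += annual_yield_kwh * (1 - degradation) ** years * tariff
    years += 1

print(f"Simple payback: about {years} years")
```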

Keywords: on-grid and off-grid systems, pyro-electric effect, pay-back calculations, solar panel

Procedia PDF Downloads 576
4549 Advanced Technologies and Algorithms for Efficient Portfolio Selection

Authors: Konstantinos Liagkouras, Konstantinos Metaxiotis

Abstract:

In this paper we present a classification of the various technologies applied to the solution of the portfolio selection problem, according to the discipline and the methodological framework followed. We provide a concise presentation of the emerging categories and try to identify which methods are considered obsolete and which lie at the heart of the debate. On top of that, we provide a comparative study of the different technologies applied to efficient portfolio construction, and we suggest potential paths for future work that lie at the intersection of the presented techniques.

Keywords: portfolio selection, optimization techniques, financial models, stochastic, heuristics

Procedia PDF Downloads 411
4548 A Ground Observation Based Climatology of Winter Fog: Study over the Indo-Gangetic Plains, India

Authors: Sanjay Kumar Srivastava, Anu Rani Sharma, Kamna Sachdeva

Abstract:

Every year, fog formation over the Indo-Gangetic Plains (IGP) of India during the winter months of December and January causes numerous hazards, inconvenience, and economic loss to the inhabitants of this densely populated region of the Indian subcontinent. The aim of this paper is to analyze the spatial and temporal variability of winter fog over the IGP. Long-term ground observations of visibility and other meteorological parameters (1971-2010) have been analyzed to understand the fog phenomenon and its relevance during the peak winter months of December and January over the IGP of India. To examine temporal variability, time series and trend analysis were carried out using the Mann-Kendall statistical test. The Mann-Kendall test accepted the alternative hypothesis at the 95% confidence level, indicating that a trend exists. Kendall's tau statistic showed a positive correlation between time and fog frequency, and the Theil-Sen median slope estimate showed that the magnitude of the trend is positive. The magnitude is higher in January than in December for the entire IGP, except that in December it is higher over the western IGP. Decade-wise time series analysis revealed a continuous increase in fog days, with a net overall increase of 99% observed over the IGP in the last four decades. Diurnal variability and average daily persistence were computed using descriptive statistical techniques. Geostatistical analysis was carried out to understand the spatial variability of fog; it revealed that the IGP is a highly fog-prone zone, with fog occurring on more than 66% of days during the study period. Diurnal variability indicates that peak fog occurrence is between 06:00 and 10:00 local time, and average daily fog persistence extends to 5 to 7 hours during the peak winter season. The results offer a new perspective for taking proactive measures to reduce the irreparable damage that could be caused by changing fog trends.
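
Illustrative sketch (not from the paper): the two trend tests named above via SciPy, using Kendall's tau against time as a Mann-Kendall-style monotonic trend test and scipy.stats.theilslopes for the Theil-Sen median slope. The fog-day series is synthetic.

```python
import numpy as np
from scipy import stats

years = np.arange(1971, 2011)
# Synthetic annual fog-day counts with an upward trend plus noise.
rng = np.random.default_rng(1)
fog_days = 20 + 0.5 * (years - years[0]) + rng.normal(0, 3, years.size)

# Mann-Kendall-style monotonic trend test via Kendall's tau against time.
tau, p_value = stats.kendalltau(years, fog_days)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.4f}")  # p < 0.05 -> trend exists

# Theil-Sen median slope estimate of the trend magnitude.
slope, intercept, lo, hi = stats.theilslopes(fog_days, years)
print(f"Theil-Sen slope = {slope:.2f} fog days per year")
```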

Keywords: fog, climatology, Mann-Kendall test, trend analysis, spatial variability, temporal variability, visibility

Procedia PDF Downloads 229
4547 The TarMed Reform of 2014: A Causal Analysis of the Effects on the Behavior of Swiss Physicians

Authors: Camila Plaza, Stefan Felder

Abstract:

In October 2014, the TARMED reform was implemented in Switzerland. In an effort to even out the financial standing of general practitioners (including pediatricians) relative to that of specialists in the outpatient sector, the reform tackled two aspects. On the one hand, GPs became able to bill an additional 9 CHF per patient, once per consult per day; this is referred to as the surcharge position. As a second measure, the reform reduced the fees for certain technical services provided mainly by specialists (e.g., imaging and surgical technical procedures). Given the fee-for-service reimbursement system in Switzerland, we predict that physicians reacted to the economic incentives of the reform by increasing the number of consults per patient and decreasing the average time per consult. Within this framework, our treatment group is formed by GPs and our control group by those specialists who were not affected by the reform. Using monthly insurance claims panel data aggregated at the physician practice level (provided by SASIS AG) for the period January 2013-December 2015, we run difference-in-differences panel data models with physician and time fixed effects to test for the causal effects of the reform. We account for seasonality and control for physician characteristics such as age, gender, specialty, and experience. Furthermore, we run the models on subgroups of physicians within our sample to account for heterogeneity and treatment intensities. Preliminary results support our hypothesis: we find evidence of an increase in consults per patient and a decrease in time per consult. Robustness checks do not significantly alter the results for consults per patient, although we find a smaller effect of the reform on time per consult. The results of this paper could give policymakers a better understanding of physician behavior and of physicians' sensitivity to the financial incentives of reforms, past and future, under the current reimbursement system.
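
Illustrative sketch (not from the paper): a difference-in-differences regression with physician and time fixed effects in statsmodels. The file name and column names (physician_id, month, consults_per_patient, gp, post) are placeholders standing in for the SASIS claims panel.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder panel: one row per physician-month (hypothetical file).
# Expected columns: physician_id, month, consults_per_patient,
# gp (1 = general practitioner, treated), post (1 = month >= Oct 2014).
df = pd.read_csv("claims_panel.csv")

# DiD with physician and month fixed effects; the gp:post interaction
# is the effect of interest. Standard errors clustered by physician.
model = smf.ols(
    "consults_per_patient ~ gp:post + C(physician_id) + C(month)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["physician_id"]})

print(model.params["gp:post"])  # estimated reform effect
```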

Keywords: difference in differences, financial incentives, health reform, physician behavior

Procedia PDF Downloads 110
4546 Chemometric Regression Analysis of Radical Scavenging Ability of Kombucha Fermented Kefir-Like Products

Authors: Strahinja Kovacevic, Milica Karadzic Banjac, Jasmina Vitas, Stefan Vukmanovic, Radomir Malbasa, Lidija Jevric, Sanja Podunavac-Kuzmanovic

Abstract:

The present study deals with chemometric regression analysis of the quality parameters and radical scavenging ability of kombucha-fermented kefir-like products obtained with winter savory (WS), peppermint (P), stinging nettle (SN), and wild thyme (WT) tea kombucha inoculums. Each analyzed sample was described by milk fat content (MF, %), total unsaturated fatty acids content (TUFA, %), monounsaturated fatty acids content (MUFA, %), polyunsaturated fatty acids content (PUFA, %), free radical scavenging ability against DPPH and hydroxyl radicals (RSA DPPH, % and RSA OH, %), and pH values measured every hour from the start to the end of fermentation. The aim of the regression analysis was to establish chemometric models which can predict the radical scavenging ability (RSA DPPH, % and RSA OH, %) of the samples by correlating it with the MF, TUFA, MUFA, and PUFA contents and the pH value at the beginning, middle, and end of the fermentation process, which lasted between 11 and 17 hours, until a pH value of 4.5 was reached. The analysis was carried out applying univariate linear (ULR) and multiple linear regression (MLR) methods to the raw data and to data standardized by min-max normalization. The obtained models were characterized by very limited predictive power (poor cross-validation parameters) and weak statistical characteristics. Based on the analysis, it can be concluded that the radical scavenging ability cannot be precisely predicted on the basis of MF, TUFA, MUFA, and PUFA content and pH values alone; other quality parameters should be considered and included in further modeling. This study is based upon work from the project 'Kombucha beverages production using alternative substrates from the territory of the Autonomous Province of Vojvodina' (142-451-2400/2019-03), supported by the Provincial Secretariat for Higher Education and Scientific Research of AP Vojvodina.
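
Illustrative sketch (not from the paper): the MLR-with-min-max-normalization workflow with cross-validation in scikit-learn. The sample matrix is synthetic; only the feature set (MF, TUFA, MUFA, PUFA, pH) mirrors the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the kefir-like samples:
# columns = MF, TUFA, MUFA, PUFA, pH; target = RSA DPPH (%).
rng = np.random.default_rng(2)
X = rng.random((40, 5))
y = 50 + 10 * X[:, 1] + rng.normal(0, 8, 40)    # weak signal, as in the study

pipe = make_pipeline(MinMaxScaler(), LinearRegression())
scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
print("cross-validated R2:", scores.mean())     # low R2 -> limited prediction power
```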

Keywords: chemometrics, regression analysis, kombucha, quality control

Procedia PDF Downloads 126
4545 Scheduling Residential Daily Energy Consumption Using Bi-criteria Optimization Methods

Authors: Li-hsing Shih, Tzu-hsun Yen

Abstract:

Because of the long-term commitment to net zero carbon emission, utility companies include more renewable energy supply, which generates electricity with time and weather restrictions. This leads to time-of-use electricity pricing to reflect the actual cost of energy supply. From an end-user point of view, better residential energy management is needed to incorporate the time-of-use prices and assist end users in scheduling their daily use of electricity. This study uses bi-criteria optimization methods to schedule daily energy consumption by minimizing the electricity cost and maximizing the comfort of end users. Different from most previous research, this study schedules users’ activities rather than household appliances to have better measures of users’ comfort/satisfaction. The relation between each activity and the use of different appliances could be defined by users. The comfort level is at the highest when the time and duration of an activity completely meet the user’s expectation, and the comfort level decreases when the time and duration do not meet expectations. A questionnaire survey was conducted to collect data for establishing regression models that describe users’ comfort levels when the execution time and duration of activities are different from user expectations. Six regression models representing the comfort levels for six types of activities were established using the responses to the questionnaire survey. A computer program is developed to evaluate electricity cost and the comfort level for each feasible schedule and then find the non-dominated schedules. The Epsilon constraint method is used to find the optimal schedule out of the non-dominated schedules. A hypothetical case is presented to demonstrate the effectiveness of the proposed approach and the computer program. Using the program, users can obtain the optimal schedule of daily energy consumption by inputting the intended time and duration of activities and the given time-of-use electricity prices.
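
Illustrative sketch (not from the paper): the epsilon-constraint method over a small enumerated set of candidate schedules, minimising cost subject to a comfort floor and sweeping the floor to trace the non-dominated set. The cost/comfort numbers are invented.

```python
# Each candidate daily schedule scored by (electricity_cost, comfort).
schedules = {
    "A": (120.0, 0.95), "B": (95.0, 0.80),
    "C": (80.0, 0.60),  "D": (110.0, 0.90),
}

def best_under_epsilon(eps):
    """Minimise cost subject to comfort >= eps (epsilon-constraint)."""
    feasible = {k: v for k, v in schedules.items() if v[1] >= eps}
    return min(feasible.items(), key=lambda kv: kv[1][0]) if feasible else None

# Sweeping epsilon traces out the non-dominated (Pareto) schedules.
for eps in (0.6, 0.8, 0.9, 0.95):
    print(eps, "->", best_under_epsilon(eps))
```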

Keywords: bi-criteria optimization, energy consumption, time-of-use price, scheduling

Procedia PDF Downloads 43
4544 Surface Tension and Bulk Density of Ammonium Nitrate Solutions: A Molecular Dynamics Study

Authors: Sara Mosallanejad, Bogdan Z. Dlugogorski, Jeff Gore, Mohammednoor Altarawneh

Abstract:

Ammonium nitrate (NH₄NO₃, AN) is commonly used as the main component of AN emulsion and fuel oil (ANFO) explosives, which are used extensively in civilian and mining operations for underground development and tunneling applications. The emulsion formulation and the wettability of AN prills, which affect the physical stability and detonation of ANFO, depend strongly on the surface tension, density, and viscosity of the liquid used. Therefore, for engineering applications of this material, determination of the density and surface tension of concentrated aqueous AN solutions is essential. The molecular dynamics (MD) simulation method has been used to investigate the density and surface tension of highly concentrated ammonium nitrate solutions, up to the solubility limit in water. The simulations employed non-polarisable models for water and ions, and the electronic continuum correction (ECC) model applies polarisation implicitly to the non-polarisable model by scaling the ionic charges. The calculated densities and surface tensions of the solutions were compared with available experimental values. Our MD simulations show that the non-polarisable model with full-charge ions overestimates the experimental results, while the reduced-charge model for the ions fits the experimental data very well. With the non-polarisable force fields, ions in the solutions are repelled from the interface. However, when the charges of the ions in the original model are scaled by the ECC scaling factor, the ions create a double ionic layer near the interface: anions migrate toward the interface while cations stay in the bulk of the solution. Similar ion orientations near the interface were observed when polarisable models were used. In conclusion, applying the ECC approach to the non-polarisable force field yields the density and surface tension of AN solutions with high accuracy in comparison to experimental measurements.
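
Illustrative sketch (not from the paper): the ECC charge-scaling idea. Charges are divided by the square root of the electronic (high-frequency) dielectric constant of water, ε_el ≈ 1.78, giving the ~0.75 factor commonly used for ECC; the charge dictionary below is a placeholder, not a real force-field topology.

```python
import math

EPS_ELECTRONIC = 1.78                      # electronic dielectric constant of water
SCALE = 1.0 / math.sqrt(EPS_ELECTRONIC)    # ~0.75 ECC charge-scaling factor

# Placeholder full-charge net ion charges for NH4+ / NO3- (in units of e).
full_charges = {"NH4": +1.0, "NO3": -1.0}

# In practice every partial atomic charge of the ion is scaled the same way.
ecc_charges = {ion: q * SCALE for ion, q in full_charges.items()}
print(ecc_charges)   # {'NH4': ~+0.75, 'NO3': ~-0.75}
```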

Keywords: ammonium nitrate, electronic continuum correction, non-polarisable force field, surface tension

Procedia PDF Downloads 208
4543 Balancing Resources and Demands in Activation Work with Young Adults: Exploring Potentials of the Job Demands-Resources Theory

Authors: Gurli Olsen, Ida Bruheim Jensen

Abstract:

Internationally, many young adults not in education, employment, or training (NEET) remain in temporary solutions such as labour market measures or other forms of welfare arrangements. These trends have been associated with ineffective labour market measures, an underfunded theoretical foundation for activation work, limited competence among social workers and labour market employees in using ordinary workplaces as job inclusion measures, and an overemphasis on young adults' personal limitations such as health challenges and lack of motivation. Two competing models have been prominent in activation work: Place-Then-Train and Train-Then-Place. A traditional strategy for labour market measures has been to first motivate NEETs into sheltered work and training and then into the regular labour market (train then place). Measures such as Supported Employment (SE) and Individual Placement and Support (IPS) advocate rapid entry into paid work in the regular labour market with close supervision and training from social workers, employees, and others (place then train). Neither of these models demonstrates unquestionable results. In this web of working-life measures, young adults (NEETs) experience a lack of confidence in their own capabilities and coping strategies vis-à-vis labour market and educational demands. Drawing on young adults' own experiences, we argue that the Job Demands-Resources (JD-R) theory can contribute to the theoretical and practical dimensions of activation work. This presentation focuses on what the JD-R theory entails and how it can be fruitful in activation work with NEETs. The overarching rationale of the JD-R theory is that an enduring balance between demands (e.g., deadlines, working hours) and resources (e.g., social support, enjoyable work tasks) is important for job performance for people in any job, and potentially in other meaningful activities. Extensive research has demonstrated that a balance between demands and resources increases motivation and decreases stress. Nevertheless, we have not identified literature on the JD-R theory in activation work with young adults.

Keywords: activation work, job demands-resources theory, social work, theory development

Procedia PDF Downloads 64
4542 Spatial Distribution and Source Identification of Trace Elements in Surface Soil from Izmir Metropolitan Area

Authors: Melik Kara, Gulsah Tulger Kara

Abstract:

The soil is a crucial component of the ecosystem, and in industrial and urban areas it receives large amounts of trace elements from several sources. Pollutants accumulated in surface soils can therefore be transported to other environmental compartments, such as deep soil, water, plants, and dust particles. While elemental contamination of soils is caused mainly by atmospheric deposition, soil also affects air quality, since enriched trace element contents in atmospheric particulate matter originate from the resuspension of polluted soils. The objectives of this study were to determine the total and leachate concentrations of trace elements in soils of the Izmir city area, characterize their spatial distribution, and identify the possible sources of trace elements in the surface soils. Surface soil samples were collected from 20 sites and analyzed for total element concentrations and leachate concentrations. Analyses of trace elements (Ag, Al, As, B, Ba, Be, Bi, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Er, Eu, Fe, Ga, Gd, Hf, Ho, K, La, Li, Lu, Mg, Mn, Mo, Na, Nd, Ni, P, Pb, Pr, Rb, Sb, Sc, Se, Si, Sm, Sn, Sr, Tb, Th, Ti, Tl, Tm, U, V, W, Y, Yb, Zn, and Zr) were carried out using ICP-MS (inductively coupled plasma mass spectrometry). The elemental concentrations were calculated along with overall median, kurtosis, and skewness statistics. The elemental composition indicated that the soil samples were dominated by crustal elements such as Si, Al, Fe, Ca, K, and Mg and by the sea salt element Na, which is typical for the Aegean region. These elements were followed by Ti, P, Mn, Ba, and Sr. The predominantly anthropogenic elements Zn, Cr, V, Pb, Cu, and Ni were measured at 61.6, 39.4, 37.9, 26.9, 22.4, and 19.4 mg/kg dw, respectively. The leachate element concentrations showed a similar ordering, although they were much lower than the total concentrations. The spatial distribution patterns of elemental concentrations varied among sampling sites, with the highest concentrations measured in the vicinity of industrial areas and main roads. To determine the relationships among elements and identify possible sources, principal component analysis (PCA) was applied to the data. The analysis resulted in six factors. The first factor exhibited high loadings of Co, K, Mn, Rb, V, Al, Fe, Ni, Ga, Se, and Cr; because of Co, K, Rb, and Se, this factor could be interpreted as residential heating. The second factor was positively associated with V, Al, Fe, Na, Ba, Ga, Sr, Ti, Se, and Si, and therefore represents mixed city dust. The third factor showed high loadings of Fe, Ni, Sb, As, and Cr and could be associated with industrial facilities. The fourth factor was associated with Cu, Mo, Zn, and Sn, which are marker elements of traffic. The fifth factor represents crustal dust, due to its high correlation with Si, Ca, and Mg. The last factor is loaded with Pb and Cd, emitted from industrial activities.
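
Illustrative sketch (not from the paper): the source-apportionment step, standardizing element concentrations, running PCA, and reading the highest-loading elements on each factor. The 20-site data matrix is synthetic and the element subset is abbreviated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

elements = ["Co", "K", "Mn", "V", "Al", "Fe", "Cu", "Zn", "Pb", "Cd"]
rng = np.random.default_rng(3)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(20, len(elements)))  # 20 sites

Xz = StandardScaler().fit_transform(X)   # PCA on standardized concentrations
pca = PCA(n_components=6).fit(Xz)

print("explained variance:", pca.explained_variance_ratio_.round(2))
# Loadings: rows = factors, columns = elements; elements loading highly on
# the same factor suggest a common source (heating, traffic, industry, crust).
for i, row in enumerate(pca.components_, 1):
    top = [elements[j] for j in np.argsort(-abs(row))[:4]]
    print(f"Factor {i}: {top}")
```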

Keywords: trace elements, surface soil, source apportionment, Izmir

Procedia PDF Downloads 122
4541 Antioxidant Potential of Pomegranate Rind Extract Attenuates Pain, Inflammation and Bone Damage in Experimental Rats

Authors: Ritu Karwasra, Surender Singh

Abstract:

Inflammation is an important physiological response of the body's self-defense system that helps eliminate and protect the organism from harmful stimuli and aids tissue repair. It is a highly regulated protective response that helps eliminate the initial cause of cell injury and initiates the process of repair. The present study was designed to evaluate the ameliorative effect of pomegranate rind extract on pain and inflammation. A hydroalcoholic standardized rind extract of pomegranate at doses of 50, 100, and 200 mg/kg, and indomethacin (3 mg/kg), were tested in Eddy's hot plate-induced thermal algesia, carrageenan-induced (acute inflammation), and Complete Freund's Adjuvant-induced (chronic inflammation) models in Wistar rats. The parameters analyzed were inhibition of paw edema, joint diameter, levels of GSH, TBARS, SOD, and TNF-α, radiographic imaging, tissue histology, and synovial expression of the pro-inflammatory cytokine receptor TNF-R1. Radiological and light microscopy analyses were carried out to identify bone damage in the CFA-induced chronic inflammation model. The findings revealed that pomegranate rind extract at a dose of 200 mg/kg caused a significant (p<0.05) reduction in paw swelling in both inflammatory models. The nociceptive threshold was also significantly (p<0.05) improved. Immunohistochemical analysis showed an elevated level of TNF-R1 in the CFA-induced group, whereas a reduced level of TNF-R1 was observed with pomegranate (200 mg/kg). Pomegranate thus produced a dose-dependent reduction in inflammation and pain, along with a reduction in oxidative stress markers and histological damage, and the effect was comparable to that of indomethacin. It can be concluded that pomegranate is a potential therapeutic agent against the pathogenesis of inflammation and pain, and punicalagin, the major constituent of the rind extract, might be responsible for the activity.

Keywords: carrageenan, inflammation, nociceptive-threshold, pomegranate, histopathology

Procedia PDF Downloads 203
4540 Indian Premier League (IPL) Score Prediction: Comparative Analysis of Machine Learning Models

Authors: Rohini Hariharan, Yazhini R, Bhamidipati Naga Shrikarti

Abstract:

In the realm of cricket, particularly within the context of the Indian Premier League (IPL), the ability to predict team scores accurately holds significant importance for cricket enthusiasts and stakeholders alike. This paper presents a comprehensive study on IPL score prediction utilizing various machine learning algorithms, including Support Vector Machines (SVM), XGBoost, Multiple Regression, Linear Regression, K-nearest neighbors (KNN), and Random Forest. Through meticulous data preprocessing, feature engineering, and model selection, we aimed to develop a robust predictive framework capable of forecasting team scores with high precision. Our experimentation involved the analysis of historical IPL match data encompassing diverse match and player statistics. Leveraging this data, we employed state-of-the-art machine learning techniques to train and evaluate the performance of each model. Notably, Multiple Regression emerged as the top-performing algorithm, achieving an accuracy of 77.19% and a precision of 54.05% (within a threshold of +/- 10 runs). This research contributes to the advancement of sports analytics by demonstrating the efficacy of machine learning in predicting IPL team scores. The findings underscore the potential of advanced predictive modeling techniques to provide valuable insights for cricket enthusiasts, team management, and betting agencies. Additionally, this study serves as a benchmark for future research endeavors aimed at enhancing the accuracy and interpretability of IPL score prediction models.
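
Illustrative sketch (not from the paper): the model-comparison loop, scoring each regressor by the share of predictions within +/- 10 runs, as the abstract's threshold suggests. The feature matrix and innings totals are synthetic; XGBoost is omitted to keep the sketch self-contained with scikit-learn alone.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.random((1000, 8))                          # placeholder match/player features
y = 160 + 40 * X[:, 0] + rng.normal(0, 12, 1000)   # synthetic innings totals

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVR(), "LinearRegression": LinearRegression(),
    "KNN": KNeighborsRegressor(), "RandomForest": RandomForestRegressor(random_state=0),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    within_10 = np.mean(np.abs(pred - y_te) <= 10)  # accuracy within +/- 10 runs
    print(f"{name}: {100 * within_10:.1f}% within 10 runs")
```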

Keywords: Indian Premier League (IPL), cricket, score prediction, machine learning, support vector machines (SVM), XGBoost, multiple regression, linear regression, k-nearest neighbors (KNN), random forest, sports analytics

Procedia PDF Downloads 29
4539 Impact of Air Flow Structure on Distinct Shape of Differential Pressure Devices

Authors: A. Bertašienė

Abstract:

Energy harvesting from any structure poses a challenge. The varied structure of air/wind flows in industrial, environmental, and residential applications calls for detailed investigation of the real flows. Many application fields are hard to describe in detail owing to the lack of up-to-date statistical data analysis. In situ measurements require substantial investment, so simulation methods are used to carry out structural analysis of the flows. Different configurations of the testing environment give an overview of how strongly the structure of the flow field in a limited area affects the efficiency of system operation and the energy output. Several configurations of modelled working sections of an air flow test facility were implemented in the ANSYS CFD environment to compare, experimentally and numerically, the air flow development stages and forms that affect the efficiency of devices and processes. The effective form and magnitude of these flows under different geometries determine the choice of instruments and devices that measure fluid flow parameters for effective operation of any system, and allow emission flows to be quantified. Different fluid flow regimes were examined to show the impact of fluctuations on the development of the whole flow volume in a specific environment. The obtained results raise the question of how similar these simulated flow fields are to those in real applications. The experimental results show some discrepancies from the simulations, owing to the models applied to the fluid flow analysis of the initial, not fully developed, region and to the difficulty the models have in covering transitional regimes. Recommendations are essential for energy harvesting systems in both indoor and outdoor cases. Further investigations will shift to experimental analysis of flows under laboratory conditions using state-of-the-art techniques such as flow visualization, and later to in situ situations, which make for a complicated, costly, and time-consuming study.

Keywords: fluid flow, initial region, tube coefficient, distinct shape

Procedia PDF Downloads 326
4538 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer

Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek

Abstract:

Crop drying, which aims to reduce moisture content to a certain level, is a method used to extend shelf life and prevent spoilage. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, it has drawbacks such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. Solar dryers, which provide a hygienic and controllable environment to preserve food and extend its shelf life, have therefore been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to modelling the drying kinetics of cantaloupe in a forced convection solar dryer. Mathematical models of the drying process should be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Drying experiments were therefore conducted and replicated five times, and data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were continuously monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into moisture ratio and then fitted against drying time to construct drying curves. Ten quasi-theoretical and empirical drying models were then applied to find the best drying curve equation using the Levenberg-Marquardt nonlinear optimization method. The best-fitted mathematical drying model was selected according to the highest coefficient of determination (R²) and the lowest reduced chi-square (χ²) and root mean square error (RMSE) criteria. The best-fitted model was used to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
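
Illustrative sketch (not from the paper): fitting one of the common thin-layer drying models, the Page model MR = exp(-k·tⁿ), with SciPy's curve_fit (Levenberg-Marquardt by default for unconstrained problems) and computing the R², RMSE, and reduced χ² selection criteria used above. The moisture-ratio data are invented; the study fitted ten candidate models this way and kept the best.

```python
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    """Page thin-layer drying model: MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

t = np.array([0.0, 1, 2, 4, 6, 8, 10, 12, 14])   # drying time, hours (illustrative)
MR = np.array([1.0, 0.82, 0.66, 0.44, 0.29, 0.19, 0.12, 0.08, 0.05])

params, _ = curve_fit(page, t, MR, p0=[0.1, 1.0])  # LM is SciPy's default here
k, n = params

resid = MR - page(t, *params)
rmse = np.sqrt(np.mean(resid**2))
r2 = 1 - np.sum(resid**2) / np.sum((MR - MR.mean())**2)
chi2 = np.sum(resid**2) / (len(t) - len(params))   # reduced chi-square
print(f"k={k:.3f}, n={n:.3f}, R2={r2:.4f}, RMSE={rmse:.4f}, chi2={chi2:.6f}")
```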

Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying

Procedia PDF Downloads 113
4537 The ‘Quartered Head Technique’: A Simple, Reliable Way of Maintaining Leg Length and Offset during Total Hip Arthroplasty

Authors: M. Haruna, O. O. Onafowokan, G. Holt, K. Anderson, R. G. Middleton

Abstract:

Background: Requirements for satisfactory outcomes following total hip arthroplasty (THA) include restoration of femoral offset, version, and leg length. Various techniques have been described for restoring these biomechanical parameters, with leg length restoration the most commonly described. We describe a "quartered head technique" (QHT) which uses a stepwise series of femoral head osteotomies to identify and preserve the centre of rotation of the femoral head during THA, in order to ensure reconstruction of leg length, offset, and stem version such that hip biomechanics are restored as near to normal as possible. This study aims to identify whether using the QHT during hip arthroplasty effectively restores leg length and femoral offset to within acceptable parameters. Methods: A retrospective review of 206 hips was carried out, leaving 124 hips in the final analysis. Power analysis indicated that a minimum of 37 patients was required. All operations were performed through an anterolateral approach by a single surgeon. All femoral implants were cemented, collarless, polished double-taper CPT® stems (Zimmer, Swindon, UK). Both cemented and uncemented acetabular components were used (Zimmer, Swindon, UK). Leg length, version, and offset were assessed intra-operatively and reproduced using the QHT. Post-operative leg length and femoral offset were determined and compared with the contralateral native hip, and the difference was calculated. For the determination of leg length discrepancy (LLD), we used the method described by Williamson & Reckling, which has been shown to be reproducible with a measurement error of ±1 mm. The inferior margin of the acetabular teardrop and the most prominent point of the lesser trochanter were used as references. A discrepancy of less than 6 mm LLD was chosen as acceptable. All peri-operative radiographs were assessed by two independent observers. Results: The mean absolute post-operative difference in leg length from the contralateral leg was +3.58 mm. 84% of patients (104/124) had LLD within ±6 mm of the contralateral limb. The mean absolute post-operative difference in offset from the contralateral leg was +3.88 mm (range -15 to +9 mm, median 3 mm). 90% of patients (112/124) were within ±6 mm of the contralateral limb's offset. There was no statistical difference between observer measurements. Conclusion: The QHT provides a simple, inexpensive yet effective method of maintaining femoral leg length and offset during total hip arthroplasty. Combining this technique with pre-operative templating or other described techniques may enable surgeons to reduce even further the discrepancies between the pre-operative state and post-operative outcome.

Keywords: leg length discrepancy, technical tip, total hip arthroplasty, operative technique

Procedia PDF Downloads 69
4536 Producing Graphical User Interface from Activity Diagrams

Authors: Ebitisam K. Elberkawi, Mohamed M. Elammari

Abstract:

The graphical user interface (GUI) is as essential to programming as any other characteristic or feature, since GUI components provide the fundamental interaction between the user and the program. Thus, we must give more attention to the GUI during the building and development of systems, and greater attention to the user, who is the cornerstone of any interaction with the GUI. This paper introduces an approach for designing a GUI from one of the business workflow models that describe the workflow behavior of a system: the activity diagram (AD).

Keywords: activity diagram, graphical user interface, GUI components, program

Procedia PDF Downloads 445
4535 The Forms of Representation in Architectural Design Teaching: The Cases of Politecnico Di Milano and Faculty of Architecture of the University of Porto

Authors: Rafael Sousa Santos, Clara Pimena Do Vale, Barbara Bogoni, Poul Henning Kirkegaard

Abstract:

The representative component, a determining aspect of the architect's training, has been marked by an exponential and unprecedented development. However, the multiplication of possibilities has also multiplied uncertainties about architectural design teaching and, by extension, about the very principles of architectural education. This paper presents the results of research on the following problem: the relation between the forms of representation and the architectural design teaching-learning processes. The research took as its object the educational models of two schools, the Politecnico di Milano (POLIMI) and the Faculty of Architecture of the University of Porto (FAUP), and was guided by three main objectives: to characterize the educational model followed in both schools, focused on the representative component and its role; to interpret the relation between forms of representation and the architectural design teaching-learning processes; and to consider their possibilities of valorisation. Methodologically, the research was conducted according to a qualitative embedded multiple-case study design. The object, i.e., the educational model, was approached in both the POLIMI and FAUP cases considering its Context and three embedded units of analysis: the educational Purposes, Principles, and Practices. To guide the procedures of data collection and analysis, a Matrix for the Characterization (MCC) was developed. As a methodological tool, the MCC made it possible to relate the three embedded units of analysis to the three main sources of evidence in which the object manifests itself: the professors, expressing how the model is assumed; the architectural design classes, expressing how the model is achieved; and the students, expressing how the model is acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. The results reveal the importance of the representative component in the educational model of both cases, despite the differences in its role. In POLIMI's model, representation is particularly relevant in the teaching of architectural design, while in FAUP's model it plays a transversal role, according to an idea of 'general training through hand drawing'. In fact, the difference between the models with respect to representation can be partially understood through the level of importance that each gives to hand drawing. Regarding the teaching of architectural design, the two cases differ in their relation to the representative component: while at POLIMI the forms of representation serve an essentially instrumental purpose, at FAUP they tend to be considered also for their methodological dimension. The possibilities for valuing these models seem to reside precisely in the relation between forms of representation and architectural design teaching. The knowledge base developed in this research is expected to make three main contributions: to the maintenance of the educational models of POLIMI and FAUP; through the precise description of the methodological procedures, by transferability to similar studies; and, through the critical and objective framing of the problem underlying the forms of representation and their relation to architectural design teaching, to the broader discussion concerning the contemporary challenges of architectural education.

Keywords: architectural design teaching, architectural education, educational models, forms of representation

Procedia PDF Downloads 106
4534 Vehicles Analysis, Assessment and Redesign Related to Ergonomics and Human Factors

Authors: Susana Aragoneses Garrido

Abstract:

Every day, roads are the scene of numerous accidents involving vehicles, producing thousands of deaths and serious injuries all over the world. Investigations have revealed that Human Factors (HF) are one of the main causes of road accidents in modern societies. Distracted driving (involving aspects external or internal to the vehicle), which is considered a human factor, is a serious and emerging risk to road safety. Consequently, further analysis of this issue is essential given its significance for today's society. The objectives of this investigation are the detection and assessment of HF in order to provide solutions (including better vehicle design) that might mitigate road accidents. The methodology of the project is divided into several phases. First, a statistical analysis of public accident databases from Spain and the UK is carried out. Second, the data are classified in order to analyse the major causes involved in road accidents. Third, a simulation of different paths and vehicles is presented, and the causes related to HF are assessed by Failure Mode and Effects Analysis (FMEA). Fourth, different car models are evaluated using the Rapid Upper Limb Assessment (RULA). Additionally, the Siemens PLM Jack tool is used to evaluate the Human Factor causes and to support the redesign of the vehicles. Finally, improvements in the car design are proposed with the intention of reducing the implication of HF in traffic accidents. The results from the statistical analysis, the simulations and the evaluations confirm that accidents are an important issue in today's society, especially those caused by HF such as distractions. The results explore the reduction of external and internal HF through a global risk analysis of vehicle accidents. Moreover, the evaluation of the different car models using the RULA method and the Siemens PLM Jack tool proves the importance of proper adjustment of the driver's seat in order to avoid harmful postures and, therefore, distractions. For this reason, a car redesign is proposed that allows the driver to adopt the optimum position, consequently reducing the role of human factors in road accidents.
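
The FMEA step mentioned in the abstract conventionally ranks failure causes by a Risk Priority Number, RPN = severity × occurrence × detection. The following minimal Python sketch illustrates only that calculation; the failure modes and the 1-10 ratings are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of an FMEA ranking step, assuming the usual
# Risk Priority Number formulation: RPN = S x O x D.
# Failure modes and ratings below are illustrative, not the study's data.

from dataclasses import dataclass

@dataclass
class FailureMode:
    cause: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (very frequent)
    detection: int   # 1 (easily detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Classic FMEA risk priority number
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("driver distracted by infotainment", 8, 6, 7),
    FailureMode("poorly adjusted driver's seat", 6, 7, 5),
    FailureMode("external visual distraction", 7, 5, 8),
]

# Rank human-factor causes from highest to lowest risk
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.cause}: RPN = {fm.rpn}")
```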

Keywords: vehicle analysis, assessment, ergonomics, car redesign

Procedia PDF Downloads 321
4533 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of a living or non-living physical asset. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept so that it can detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were then used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while the deep learning algorithms were found to be useful for estimating the location of damage of small severity.
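
To illustrate the offline/online split that makes the reduced basis approach fast, the following Python sketch applies proper orthogonal decomposition (POD) to a generic linear system K u = f. It is a minimal, self-contained illustration with assumed toy data, not the authors' truss model or code.

```python
# Minimal sketch of the offline/online split behind a reduced basis (RB)
# reduced order model, using POD. The system K u = f stands in for a
# generic FE model; it is illustrative, not the paper's truss model.

import numpy as np

rng = np.random.default_rng(0)
n = 200                      # full-order degrees of freedom

# --- Offline stage: collect full-order solutions ("snapshots") ---
K = np.diag(2.0 * np.ones(n)) \
    + np.diag(-1.0 * np.ones(n - 1), 1) \
    + np.diag(-1.0 * np.ones(n - 1), -1)   # toy stiffness matrix
snapshots = np.column_stack([
    np.linalg.solve(K, rng.normal(size=n)) for _ in range(20)
])

# POD: truncated SVD of the snapshot matrix gives the reduced basis V
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5                        # reduced dimension << n
V = U[:, :r]

# Galerkin-project the operator once, offline
K_r = V.T @ K @ V            # r x r reduced stiffness

# --- Online stage: each new load is solved in the r-dimensional space ---
f = rng.normal(size=n)
u_r = np.linalg.solve(K_r, V.T @ f)   # cheap r x r solve
u_approx = V @ u_r                    # lift back to full space

u_full = np.linalg.solve(K, f)        # reference full-order solution
print("relative error:",
      np.linalg.norm(u_approx - u_full) / np.linalg.norm(u_full))
```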

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 83
4532 Silver Nanoparticles Synthesized in Plant Extract Against Acute Hepatopancreatic Necrosis of Shrimp: Estimated By Multiple Models

Authors: Luz del Carmen Rubí Félix Peña, Jose Adan Felix-Ortiz, Ely Sara Lopez-Alvarez, Wenceslao Valenzuela-Quiñonez

Abstract:

On a global scale, Mexico is the sixth largest producer of farmed white shrimp (Penaeus vannamei). The industry has suffered significant economic losses due to acute hepatopancreatic necrosis disease (AHPND) caused by a strain of Vibrio parahaemolyticus. The first control option is the application of antibiotics in feed, which causes changes in the environment and in bacterial communities and has produced greater virulence and resistance in pathogenic bacteria. An alternative treatment is silver nanoparticles (AgNPs) produced by green synthesis, which have shown antibacterial capacity by destroying the cell membrane or denaturing the cell. However, the doses at which they are effective are still unknown. The aim is to calculate the minimum inhibitory concentration (MIC) of biosynthesized AgNPs against a strain of V. parahaemolyticus using the Gompertz, Richards, and logistic models, by testing different formulations of AgNPs synthesized from Euphorbia prostrata (Ep) extracts against the V. parahaemolyticus strain causing AHPND in white shrimp. Aqueous and ethanol extracts were obtained, and their phenol and flavonoid concentrations were quantified. In the antibiograms, AgNPs were formulated in ethanol extracts of Ep (20 and 30%); the inhibition halos in the well diffusion test were 18±1.7 and 17.67±2.1 mm against V. parahaemolyticus. A broth microdilution was performed with the inhibitory agents (aqueous and ethanol extracts and AgNPs) and 20 μL of the V. parahaemolyticus inoculum. The MIC was 6.2-9.3 μg/mL for the AgNPs and 49-73 mg/mL for the ethanol extract. The Akaike information criterion (AIC) identified the Gompertz model as the best descriptor of the ethanol-extract data (AIC = 204.8 at 10%, 45.5 at 20%, and 204.8 at 30%), while the Richards model best described the AgNPs data (AIC = -9.3 at 10%; -17.5 at 20 and 30%). The MICs calculated for the Ep extracts with the modified Gompertz model were 20 mg/mL (10% and 20% extract) and 40 mg/mL (30%), while the Richards model gave MICs for the biosynthesized AgNPs of 5 μg/mL (10% and 20%) and 8 μg/mL (30%). The Excel Solver tool was used for the model calculations and the inhibition curves against V. parahaemolyticus.
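
The model comparison described above can be illustrated with a short Python sketch that fits modified Gompertz and logistic curves to a dose-response data set and ranks them by AIC. The data points, parameter values, and starting guesses below are synthetic placeholders, not the study's measurements; the study itself used the Excel Solver tool rather than Python.

```python
# Minimal sketch of model selection by AIC, in the spirit of the abstract's
# Gompertz vs. logistic vs. Richards comparison. Data are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def gompertz(c, A, mu, lam):
    # Modified (Zwietering-type) Gompertz curve used as a dose-response shape
    return A * np.exp(-np.exp(mu * np.e / A * (lam - c) + 1.0))

def logistic(c, A, mu, lam):
    # Reparameterized logistic curve with the same A, mu, lam parameters
    return A / (1.0 + np.exp(4.0 * mu / A * (lam - c) + 2.0))

def aic(y, y_hat, k):
    # AIC for least-squares fits: n * ln(RSS / n) + 2k
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Synthetic inhibition data: response vs. AgNP concentration (ug/mL)
conc = np.array([0.5, 1, 2, 4, 6, 8, 10, 12])
resp = np.array([0.02, 0.05, 0.15, 0.45, 0.75, 0.92, 0.97, 0.99])

# Fit each candidate model and report its AIC (lower is better)
for name, model in [("Gompertz", gompertz), ("Logistic", logistic)]:
    popt, _ = curve_fit(model, conc, resp, p0=[1.0, 0.3, 3.0], maxfev=10000)
    print(f"{name}: AIC = {aic(resp, model(conc, *popt), k=3):.1f}")
```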

Keywords: green synthesis, Euphorbia prostrata, phenols, flavonoids, bactericide

Procedia PDF Downloads 89
4531 Acute and Subacute Toxicity of the Aqueous Extract of the Bark Stems of Balanites aegyptiaca (L.) Delile in Wistar Rats

Authors: Brahim Sow

Abstract:

Background: Throughout West Africa, Balanites aegyptiaca (BA), a member of the Zygophyllaceae family, is widely used in traditional medicine to treat diabetes, hypertension, inflammation, malaria and liver disorders. In our recent research, we found that BA has nephroprotective potential against diabetes mellitus, hypertension and kidney disorders. However, to our knowledge, no systematic studies have been carried out on its toxicity profile. Aim of the study: The study was conducted to assess the potential toxicity of the hydroalcoholic extract of BA bark in rats by the acute and subacute oral routes. Materials and methods: Male and female rats in the acute toxicity study received BA extract orally at single doses of 500 mg/kg, 2000 mg/kg, 3000 mg/kg and 5000 mg/kg (n = 6 per group/sex). To assess acute toxicity, abnormal behaviour, toxic symptoms, body weight and mortality were observed for 14 consecutive days. For the subacute toxicity study, Wistar rats received the extract orally at doses of 125, 250 and 500 mg/kg (n = 6 per group/sex) per day for 28 days. Behaviour and body weight were monitored daily. At the end of the treatment period, biochemical, haematological and histopathological examinations were performed, and gross and histopathological examinations of several organs were carried out. To determine the presence or absence of phytochemicals, the BA extract was subjected to gas-phase chromatographic examination. Results: The chromatographic analysis of BA indicated the absence of cyanide groups, suggesting that the extract does not contain these toxic substances. No mortality or adverse effects were observed at 5000 mg/kg in the acute toxicity test. With regard to body weight, general behaviour, relative organ weights, and haematological and biochemical parameters, BA extract did not induce any mortality or treatment-related effects in the subacute study. Histopathological examination revealed the normal architecture of the vital organs, indicating the absence of morphological alterations. Conclusion: BA extract administered orally for 28 days at doses up to 500 mg/kg did not cause toxicological damage in rats in the present study. The median lethal dose (LD50) of the extract was estimated to be over 5000 mg/kg in the acute toxicity study.

Keywords: Balanites aegyptiaca (L.) Delile, haematology, biochemistry, rat

Procedia PDF Downloads 56
4530 Zero Energy Buildings in Hot-Humid Tropical Climates: Boundaries of the Energy Optimization Grey Zone

Authors: Nakul V. Naphade, Sandra G. L. Persiani, Yew Wah Wong, Pramod S. Kamath, Avinash H. Anantharam, Hui Ling Aw, Yann Grynberg

Abstract:

Achieving zero-energy targets in existing buildings is known to be a difficult task, requiring deep cuts in building energy consumption that in many cases clash with the functional necessities of the building wherever the on-site energy generation is unable to match the overall energy consumption. Between the building's consumption optimization limit and the energy target stretches a case-specific optimization grey zone, which requires tailored intervention and enhanced user commitment. In view of the future adoption of more stringent energy-efficiency targets in hot-humid tropical climates, this study aims to define this energy optimization grey zone by assessing the energy-efficiency limit of typical state-of-the-art, fully air-conditioned mid- and high-rise office buildings through the integration of currently available technologies. Energy models of two code-compliant generic office-building typologies were developed as a baseline: a 20-storey 'high-rise' and a 7-storey 'mid-rise'. Design iterations carried out on the energy models with advanced, market-ready technologies in lighting, envelope, plug load management, and ACMV systems and controls led to a representative energy model of the current maximum technical potential. The simulations showed that ZEB targets could be achieved in fully air-conditioned buildings of fewer than seven floors on average, and only by compromising on energy-intensive facilities (such as full AC, unlimited power supply, standard user behaviour, etc.). This paper argues that drastic changes must be made in tropical buildings to span the energy optimization grey zone and achieve zero energy. Fully air-conditioned areas must be rethought, while smart technologies must be integrated with aggressive involvement and motivation of the users to synchronize with the new system's energy savings goal.
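
In accounting terms, the 'grey zone' described above is the annual gap between the optimized demand and the on-site generation. A minimal Python sketch of that bookkeeping, with purely hypothetical figures, is given below.

```python
# Minimal sketch of the zero-energy balance behind the "grey zone":
# the gap between optimized consumption and on-site generation.
# All figures are hypothetical placeholders, not the paper's results.

annual_consumption_kwh = 1_450_000   # optimized building demand (hypothetical)
onsite_generation_kwh = 1_100_000    # e.g. rooftop PV yield (hypothetical)

net_site_energy = annual_consumption_kwh - onsite_generation_kwh
if net_site_energy <= 0:
    print("Zero (or positive) energy target met")
else:
    print(f"Optimization grey zone: {net_site_energy:,} kWh/year "
          "must still be cut or offset")
```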

Keywords: energy simulation, office building, tropical climate, zero energy buildings

Procedia PDF Downloads 167
4529 Primary School Students’ Modeling Processes: Crime Problem

Authors: Neslihan Sahin Celik, Ali Eraslan

Abstract:

Following the PISA (Programme for International Student Assessment) survey, which tests how well students can apply the knowledge and skills they have learned at school to real-life challenges, the new and redesigned mathematics education programs of many countries emphasize the need for students to face complex and multifaceted problem situations and to gain experience with them, allowing students to develop new skills and the mathematical thinking that will prepare them for life after school. At this point, mathematical models and modeling approaches can be utilized in the analysis of complex problems representing real-life situations in which students can actively participate. In particular, model-eliciting activities, which present situations that allow students to create solutions to problems and which involve mathematical modeling, should be used from the primary school years onwards, allowing children to face such complex, real-life situations from early childhood. A qualitative study was conducted in a university foundation primary school in the city center of a large province in the 2013-2014 academic year. The participants were fourth-grade primary school students. After a four-week preliminary study applied to a fourth-grade classroom, three students were selected for a focus group using the criterion sampling technique. The focus group of three students was videotaped as they worked on the Crime Problem. The group's conversation was transcribed, examined together with the students' written work, and then analyzed through the lens of Blum and Ferri's modeling process cycle. The results showed that primary fourth-grade students can successfully work with a model-eliciting problem, although they encounter some difficulties in the modeling processes. In particular, they developed new ideas based on different assumptions, identified the patterns among variables, and established a variety of models. On the other hand, they had trouble staying focused on the problem and occasionally had breaks in the process.

Keywords: primary school, modeling, mathematical modeling, crime problem

Procedia PDF Downloads 386