Search results for: multiple input multiple output
322 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
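As a concrete illustration of the Random Forest approach described in this abstract, the following Python sketch trains a regressor on project-activity features and prints feature importances. It is a minimal sketch, not the study's code: the file name and the feature list are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): predicting cost overruns with a
# Random Forest and reading off feature importances.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("project_activities.csv")           # hypothetical dataset
features = ["planned_cost", "scope_changes", "material_delay_days",
            "activity_duration", "crew_size"]         # assumed cost drivers
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["cost_overrun"], test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
# Feature importances indicate the key cost drivers (e.g. scope changes).
for name, score in sorted(zip(features, model.feature_importances_),
                          key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```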
Procedia PDF Downloads 383
321 The Metabolism of Built Environment: Energy Flow and Greenhouse Gas Emissions in Nigeria
Authors: Yusuf U. Datti
Abstract:
It is becoming increasingly clear that the consumption of resources now enjoyed in the developed nations will be impossible to sustain worldwide. While developing countries still have the advantage of low consumption and a smaller ecological footprint per person, they cannot simply develop in the same way as other Western cities have developed in the past. The severe reality of population and consumption inequalities makes it contentious whether studies done in developed countries can be translated and applied to developing countries. In addition to these disparities, there are few or no energy metabolism studies in Nigeria; more contentiously, the majority of energy metabolism studies have been done only in developed countries. While research in Nigeria concentrates on other aspects/principles of sustainability such as water supply, sewage disposal, energy supply, energy efficiency and waste disposal, which do not accurately capture the environmental impact of energy flow in Nigeria, this research sets itself apart by examining the flow of energy in Nigeria and the impact that the flow will have on the environment. The aim of the study is to examine and quantify the metabolic flows of energy in Nigeria and their corresponding environmental impact. The study will quantify the level and pattern of energy inflow and the outflow of greenhouse gas emissions in Nigeria. It will describe measures to address the impact of existing energy sources and suggest alternative renewable energy sources in Nigeria that will lower greenhouse gas emissions. The study will investigate the metabolism of energy in Nigeria through a three-part methodology. The first step involves selecting and defining the study area and some variables that would affect the energy output (time of the year, stability of the country, income level, literacy rate and population). The second step involves analyzing, categorizing and quantifying the amount of energy generated by the various energy sources in the country. The third step involves analyzing what effect the variables would have on the environment. To ensure a representative study area, Africa’s most populous country, with the second-biggest economy and among the largest oil-producing countries in the world, is selected. This reflects the understanding that countries with large economies and dense populations are ideal places to examine sustainability strategies; hence the choice of Nigeria for the study. National data will be utilized; where such data cannot be found, local data will be employed and aggregated to reflect the national situation. The outcome of the study will help policy-makers better target energy conservation and efficiency programs and enable early identification and mitigation of any negative effects on the environment.
Keywords: built environment, energy metabolism, environmental impact, greenhouse gas emissions, sustainability
Procedia PDF Downloads 183
320 Water Supply and Demand Analysis for Ranchi City under Climate Change Using Water Evaluation and Planning System Model
Authors: Pappu Kumar, Ajai Singh, Anshuman Singh
Abstract:
Different water user sectors, such as rural, urban, mining, subsistence and commercial irrigated agriculture, commercial forestry, industry and power generation, are present in the catchment of the Subarnarekha River Basin and Ranchi city. There is an inequity issue in access to water. The development of rural areas, the construction of new power generation plants, population growth, the requirement to meet unmet water demand, the consideration of environmental flows and the revitalization of small-scale irrigation schemes are going to increase water demands in almost all water-stressed catchments. The WEAP model was developed by the Stockholm Environment Institute (SEI) to enable evaluation of planning and management issues associated with water resources development. The WEAP model can be used for both urban and rural areas and can address a wide range of issues including sectoral demand analyses, water conservation, water rights and allocation priorities, river flow simulation, reservoir operation, ecosystem requirements and project cost-benefit analyses. The model is a tool for integrated water resource management and planning, covering the forecasting of water demand, supply, inflows, outflows, water use, reuse, water quality, priority areas and hydropower generation. In the present study, efforts have been made to assess the utility of the WEAP model for water supply and demand analysis for Ranchi city. Detailed work has been carried out to ascertain whether the WEAP model can be used for generating different scenarios of water requirement, which could help in future water planning. The water supplied to Ranchi city is mostly contributed by the study river, the Hatiya reservoir and groundwater. Data were collected from various agencies such as PHE Ranchi, the 2011 census, the Doranda reservoir and the meteorology department, and were given as input to the WEAP model. The model generated discharge trends for the study river up to 2050 and, at the same time, generated scenarios calculating demand and supply for the future. The model outputs predict a water requirement of 12 million litres. The results will help in drafting future policies regarding water supply and demand under changing climatic scenarios.
Keywords: WEAP model, water demand analysis, Ranchi, scenarios
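The abstract describes scenario-based demand and supply projections; the short sketch below illustrates the underlying demand-versus-supply balance idea with placeholder numbers. It is not WEAP code, and the population, per-capita demand and supply figures are assumptions for illustration only.

```python
# Illustrative sketch only (WEAP itself is a GUI/scripted tool): project
# demand against supply to 2050 under assumed growth rates.
base_year, horizon = 2011, 2050
population = 1.07e6            # assumed base-year population
per_capita_lpd = 135.0         # assumed demand, litres/person/day
pop_growth = 0.02              # assumed annual growth rate
supply_mld = 220.0             # assumed supply (river + reservoir + groundwater), ML/day

for year in range(base_year, horizon + 1, 5):
    pop = population * (1 + pop_growth) ** (year - base_year)
    demand_mld = pop * per_capita_lpd / 1e6        # megalitres per day
    gap = demand_mld - supply_mld
    print(f"{year}: demand={demand_mld:7.1f} MLD, unmet={max(gap, 0):6.1f} MLD")
```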
Procedia PDF Downloads 419
319 Establishing Community-Based Pro-Biodiversity Enterprise in the Philippines: A Climate Change Adaptation Strategy towards Agro-Biodiversity Conservation and Local Green Economic Development
Authors: Dina Magnaye
Abstract:
In the Philippines, the performance of the agricultural sector is gauged through crop productivity and returns from farm production rather than the biodiversity in the agricultural ecosystem. Agricultural development hinges on the overall goal of increasing productivity through intensive agriculture, monoculture system, utilization of high yielding varieties in plants, and genetic upgrading in animals. This merits an analysis of the role of agro-biodiversity in terms of increasing productivity, food security and economic returns from community-based pro-biodiversity enterprises. These enterprises conserve biodiversity while equitably sharing production income in the utilization of biological resources. The study aims to determine how community-based pro-biodiversity enterprises become instrumental in local climate change adaptation and agro-biodiversity conservation as input to local green economic development planning. It also involves an assessment of the role of agrobiodiversity in terms of increasing productivity, food security and economic returns from community-based pro-biodiversity enterprises. The perceptions of the local community members both in urban and upland rural areas on community-based pro-biodiversity enterprises were evaluated. These served as a basis in developing a planning modality that can be mainstreamed in the management of local green economic enterprises to benefit the environment, provide local income opportunities, conserve species diversity, and sustain environment-friendly farming systems and practices. The interviews conducted with organic farmer-owners, entrepreneur-organic farmers, and organic farm workers revealed that pro-biodiversity enterprise such as organic farming involved the cyclic use of natural resources within the carrying capacity of a farm; recognition of the value of tradition and culture especially in the upland rural area; enhancement of socio-economic capacity; conservation of ecosystems in harmony with nature; and climate change mitigation. The suggested planning modality for community-based pro-biodiversity enterprises for a green economy encompasses four (4) phases to include community resource or capital asset profiling; stakeholder vision development; strategy formulation for sustained enterprises; and monitoring and evaluation.Keywords: agro-biodiversity, agro-biodiversity conservation, local green economy, organic farming, pro-biodiversity enterprise
Procedia PDF Downloads 362
318 Optimized Renewable Energy Mix for Energy Saving in Waste Water Treatment Plants
Authors: J. D. García Espinel, Paula Pérez Sánchez, Carlos Egea Ruiz, Carlos Lardín Mifsut, Andrés López-Aranguren Oliver
Abstract:
This paper briefly describes three main actuations over a Waste Water Treatment Plant (WWTP) for reducing its energy consumption: optimization of the biological reactor in the aeration stage by including new control algorithms and introducing new efficient equipment, the installation of an innovative hybrid system with zero grid injection (formed by 100 kW of PV generation and 5 kW of mini-wind generation) and an intelligent management system that controls load consumption and energy generation in the optimum way. This project, called RENEWAT and part of the European Commission LIFE 2013 call, has the main objective of reducing energy consumption through different actions on the processes which take place in a WWTP and of introducing renewable energies in these treatment plants, with the purpose of promoting the use of treated waste water for irrigation and decreasing CO2 emissions. Waste water treatment is always required before waste water can be reused for irrigation or discharged into water bodies. However, the energy demand of the treatment process is high enough to make the price of treated water exceed that of drinkable water. This makes it very difficult for any policy to encourage the reuse of treated water, which would have a great impact on the water cycle, particularly in those areas suffering hydric stress or deficiency. The cost of treating waste water involves another climate-change related burden: the energy necessary for the process is obtained mainly from the electric network, which in most cases in Europe means energy obtained from the burning of fossil fuels. The innovative part of this project is based on the implementation, adaptation and integration of solutions to this problem, together with a new concept of the integration of energy input and operative energy demand. Moreover, there is an important qualitative jump between the technologies currently used and the technologies proposed in the project, which gives it an innovative character, due to the fact that there are no similar previous experiences of a WWTP including an intelligent discrimination of energy sources, integrating renewable ones (PV and wind) and the grid.
Keywords: aeration system, biological reactor, CO2 emissions, energy efficiency, hybrid systems, LIFE 2013 call, process optimization, renewable energy sources, waste water treatment plants
Procedia PDF Downloads 352
317 Subjective Temporal Resources: On the Relationship Between Time Perspective and Chronic Time Pressure to Burnout
Authors: Diamant Irene, Dar Tamar
Abstract:
Burnout, conceptualized within the framework of stress research, is to a large extent a result of a threat on resources of time or a feeling of time shortage. In reaction to numerous tasks, deadlines, high output, management of different duties encompassing work-home conflicts, many individuals experience ‘time pressure’. Time pressure is characterized as the perception of a lack of available time in relation to the amount of workload. It can be a result of local objective constraints, but it can also be a chronic attribute in coping with life. As such, time pressure is associated in the literature with general stress experience and can therefore be a direct, contributory burnout factor. The present study examines the relation of chronic time pressure – feeling of time shortage and of being rushed, with another central aspect in subjective temporal experience - time perspective. Time perspective is a stable personal disposition, capturing the extent to which people subjectively remember the past, live the present and\or anticipate the future. Based on Hobfoll’s Conservation of Resources Theory, it was hypothesized that individuals with chronic time pressure would experience a permanent threat on their time resources resulting in relatively increased burnout. In addition, it was hypothesized that different time perspective profiles, based on Zimbardo’s typology of five dimensions – Past Positive, Past Negative, Present Hedonistic, Present Fatalistic, and Future, would be related to different magnitudes of chronic time pressure and of burnout. We expected that individuals with ‘Past Negative’ or ‘Present Fatalist’ time perspectives would experience more burnout, with chronic time pressure being a moderator variable. Conversely, individuals with a ‘Present Hedonistic’ - with little concern with the future consequences of actions, would experience less chronic time pressure and less burnout. Another temporal experience angle examined in this study is the difference between the actual distribution of time (as in a typical day) versus desired distribution of time (such as would have been distributed optimally during a day). It was hypothesized that there would be a positive correlation between the gap between these time distributions and chronic time pressure and burnout. Data was collected through an online self-reporting survey distributed on social networks, with 240 participants (aged 21-65) recruited through convenience and snowball sampling methods from various organizational sectors. The results of the present study support the hypotheses and constitute a basis for future debate regarding the elements of burnout in the modern work environment, with an emphasis on subjective temporal experience. Our findings point to the importance of chronic and stable temporal experiences, as time pressure and time perspective, in occupational experience. The findings are also discussed with a view to the development of practical methods of burnout prevention.Keywords: conservation of resources, burnout, time pressure, time perspective
Procedia PDF Downloads 173
316 Determinants of Budget Performance in an Oil-Based Economy
Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi
Abstract:
Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation report. A critical review of these documents shows significant variations in the five macroeconomic variables which are inputs to each Presidential budget: oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%) and inflation rate (%). This results in underperformance of the Federal budget's expected output in terms of oil and non-oil revenue aggregates. This paper evaluates first the existing variance between budgeted and actual figures, then the relationship and causality between the determinants of the Federal fiscal budget assumptions, and finally the determinants of the FGN’s gross oil revenue. The paper employed descriptive statistics, the autoregressive distributed lag (ARDL) model, and a profit-oil probabilistic model to achieve these objectives. The ARDL model permits both static and dynamic effects of the independent variables on the dependent variable, unlike a static model that accounts for static or fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series that are integrated of the same order. Finally, even with a small sample size, the ARDL model is known to generate valid results. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, produced oil quantity and foreign exchange rate. There is a short-run relationship between oil revenue and its determinants: oil price, produced oil quantity and foreign exchange rate. There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate and foreign exchange rate. The Granger causality test results show that there is a mono-directional causality between oil revenue and its determinants. The Federal budget assumptions only explain 68% of oil revenue and 62% of non-oil revenue. There is a mono-directional causality between non-oil revenue and its determinants. The profit-oil model identifies production sharing contracts, joint ventures and modified carrying arrangements as the greatest contributors to the FGN’s gross oil revenue. This provides empirical justification for the selected macroeconomic variables used in the Federal budget design and performance evaluation. The research recommends that other variables, debt and money supply, be included in the Federal budget design to further explain the Federal budget revenue performance.
Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue
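The ARDL structure mentioned above can be illustrated by building lagged terms by hand and fitting them with OLS, as in the hedged sketch below. The data file and column names are hypothetical, and the sketch shows an ARDL(1,1)-type specification rather than the authors' exact model.

```python
# Hedged sketch of an ARDL(1,1)-type regression of oil revenue on oil price,
# production and exchange rate, built by hand with lagged terms and OLS.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("fgn_budget_series.csv")            # hypothetical time series
y = df["oil_revenue"]
X = df[["oil_price", "oil_quantity", "fx_rate"]]

data = pd.concat([y, y.shift(1).rename("oil_revenue_l1"),
                  X, X.shift(1).add_suffix("_l1")], axis=1).dropna()
model = sm.OLS(data["oil_revenue"],
               sm.add_constant(data.drop(columns="oil_revenue"))).fit()
print(model.summary())
# The coefficient on oil_revenue_l1 captures the short-run dynamic adjustment;
# long-run multipliers follow as (beta_x + beta_x_l1) / (1 - beta_y_l1).
```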
Procedia PDF Downloads 172
315 The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. Both models are evaluated using an external dataset for validation. The models’ accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
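A minimal Keras sketch of the transfer-learning classifier described above (a DenseNet201 backbone with a small dense head for the three classes) is given below. The input size, head layers and training settings are assumptions, not the authors' exact configuration.

```python
# Sketch of a DenseNet201 transfer-learning classifier for
# COVID-19 / pneumonia / normal chest X-rays (assumed hyperparameters).
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # use pretrained features only

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),   # three diagnostic classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets not shown
```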
Procedia PDF Downloads 73
314 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions
Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams
Abstract:
The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic, however, its latent capacity to change design disciplines is significant. 'Synthetic Classism' is a research project that questions the underlying aspects of classically organized architecture not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information of small pavilions and on the other, to synthesize new information from previous architectural drawings. These algorithms intend to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: parting from a line profile, a synthetic 'front view' of a pavilion is generated, then using it as a source material, an isometric view is created from it, and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate Front and Isometric views without any graphical input as well. The final intention of the research is to produce isometric views out of historical information, such as the pavilions from Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. This research also challenges the idea of the role of algorithmic design associated with efficiency or fitness while embracing the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative, first by synthesizing images based on a given dataset and then by generating new architectural information from historical references. We find that the possibility of creatively understand and manipulate historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded on the specificities on the architectural dataset, either historic, human-made or synthetic.Keywords: architecture, central pavilions, classicism, machine learning
Procedia PDF Downloads 140
313 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method
Authors: Emmanuel Ophel Gilbert, Williams Speret
Abstract:
The control of the flow behaviour of viscous fluids and of heat transfer within a heated mini-channel is considered. Heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace and Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique. Further, this numerical simulation technique is validated by comparing the accessible practical values with the predicted local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this caused a pro rata effect on the minor and major frictional losses, mostly at very small Reynolds numbers, circa 60-80. At these low Reynolds numbers, the viscosity decreases and the frictional losses become minute as the temperature of the viscous liquids is increased. Three equations and models are identified which support the numerical simulation, via interpolation and integration of the variables extended to the walls of the mini-channel, and which yield reliable engineering and technology calculations for turbulence-impacting jets in the near future. In the search for a governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be tangential to this finding, although other physical factors with respect to the Navier-Stokes equations need to be kept in check to avoid uncertain turbulence of the fluid flow. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; this, however, resulted in certain assumptions being dropped from the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids
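The low Reynolds numbers quoted above (circa 60-80) can be checked with a short back-of-envelope calculation of Re, the laminar Darcy friction factor and the resulting major pressure loss, as sketched below; the property values are illustrative, not those used in the study.

```python
# Back-of-envelope check: Re and the laminar Darcy friction factor f = 64/Re
# for a viscous liquid in a mini-channel (illustrative property values).
rho = 850.0        # density, kg/m^3 (assumed, engine-oil-like)
mu = 0.05          # dynamic viscosity, Pa.s (assumed)
D_h = 1.0e-3       # hydraulic diameter of the mini-channel, m
v = 4.0            # mean velocity, m/s (assumed)
L = 0.1            # channel length, m

Re = rho * v * D_h / mu                    # ~68, in the 60-80 range quoted
f = 64.0 / Re                              # laminar Darcy friction factor
dp = f * (L / D_h) * 0.5 * rho * v**2      # major (friction) pressure loss, Pa
print(f"Re = {Re:.1f}, f = {f:.3f}, pressure drop = {dp/1e3:.1f} kPa")
```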
Procedia PDF Downloads 176
312 Economic Impact of Drought on Agricultural Society: Evidence Based on a Village Study in Maharashtra, India
Authors: Harshan Tee Pee
Abstract:
Climate elements include surface temperatures, rainfall patterns, humidity, type and amount of cloudiness, air pressure and wind speed and direction. Change in one element can have an impact on the regional climate. The scientific predictions indicate that global climate change will increase the number of extreme events, leading to more frequent natural hazards. Global warming is likely to intensify the risk of drought in certain parts and also leading to increased rainfall in some other parts. Drought is a slow advancing disaster and creeping phenomenon– which accumulate slowly over a long period of time. Droughts are naturally linked with aridity. But droughts occur over most parts of the world (both wet and humid regions) and create severe impacts on agriculture, basic household welfare and ecosystems. Drought condition occurs at least every three years in India. India is one among the most vulnerable drought prone countries in the world. The economic impacts resulting from extreme environmental events and disasters are huge as a result of disruption in many economic activities. The focus of this paper is to develop a comprehensive understanding about the distributional impacts of disaster, especially impact of drought on agricultural production and income through a panel study (drought year and one year after the drought) in Raikhel village, Maharashtra, India. The major findings of the study indicate that cultivating area as well as the number of cultivating households reduced after the drought, indicating a shift in the livelihood- households moved from agriculture to non-agriculture. Decline in the gross cropped area and production of various crops depended on the negative income from these crops in the previous agriculture season. All the landholding categories of households except landlords had negative income in the drought year and also the income disparities between the households were higher in that year. In the drought year, the cost of cultivation was higher for all the landholding categories due to the increased cost for irrigation and input cost. In the drought year, agriculture products (50 per cent of the total products) were used for household consumption rather than selling in the market. It is evident from the study that livelihood which was based on natural resources became less attractive to the people to due to the risk involved in it and people were moving to less risk livelihood for their sustenance.Keywords: climate change, drought, agriculture economics, disaster impact
Procedia PDF Downloads 118
311 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth
Authors: Hemant Upadhyay, Tarun Kumar Kundu
Abstract:
It is of utmost importance for a blast furnace operator to understand the mechanisms governing the liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to a high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit, at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. It is not possible to carry out any direct measurement of the above due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal/slag accumulation and temperature during the tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag itself. For modeling purposes, the BF hearth is considered as a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace and the erosion behavior of the tap hole itself. Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts critical event timings during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL, India BF-II, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
Keywords: blast furnace, hearth, deadman, hot metal
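The hearth mass balance described above can be sketched as a simple time-stepped accumulation-drainage loop, as below. The production rate, hearth geometry, tap-hole size and gas pressure are illustrative assumptions, not plant data, and the heat-transfer part of the model is omitted.

```python
# Minimal sketch of the hearth liquid balance: hot metal accumulates at the
# production rate and drains through the tap hole at a rate driven by liquid
# level and furnace gas pressure. All parameters are illustrative assumptions.
import math

production = 7000.0 / 86400      # hot metal production, t/s (assumed 7000 t/day)
rho = 7.0                        # liquid iron density, t/m^3
area = 80.0                      # hearth void cross-section, m^2 (assumed)
C_d, d_tap = 0.6, 0.07           # discharge coefficient and tap-hole diameter, m
p_gas = 3.5e5                    # gas pressure above the liquid, Pa (assumed)
g = 9.81

level, dt, tapping = 0.5, 10.0, True   # initial liquid level (m), time step (s)
for t in range(0, 7200, int(dt)):      # simulate two hours
    a_tap = math.pi * d_tap**2 / 4
    head = p_gas / (rho * 1000 * g) + level           # pressure + gravity head, m
    out = C_d * a_tap * math.sqrt(2 * g * head) * rho if tapping else 0.0  # t/s
    level += (production - out) * dt / (rho * area)   # level change in the voids
    level = max(level, 0.0)
    if t % 600 == 0:
        print(f"t={t:5d} s  level={level:5.2f} m  drainage={out*3600:6.1f} t/h")
```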
Procedia PDF Downloads 184
310 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances
Authors: P. Mounnarath, U. Schmitz, Ch. Zhang
Abstract:
Fragility analysis is an effective tool for the seismic vulnerability assessment of civil structures in the last several years. The design of the expansion joints according to various bridge design codes is almost inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (ranging from 60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely, Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference. This model uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and four different gap values. The nonlinear time history analysis is performed. The artificial ground motion sets, which have the peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g with an increment of 0.05 g, are taken as input. The soil-structure interaction and the P-Δ effects are also included in the analysis. The component fragility curves in terms of the curvature ductility demand to the capacity ratio of the piers and the displacement demand to the capacity ratio of the abutment sliding bearings are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared to that of other sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves illustrate a smaller damage probability in the earlier PGA ranges for the first three damage states, they then show a higher fragility compared to other curves in the larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found that the bridge models with smaller clearances exhibit a smaller fragility compared to that with larger openings. However, the bridge model with a maximum clearance still induces a minimum pounding force effect.Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis
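A component fragility curve of the kind described above is commonly expressed as a lognormal CDF of PGA; the short sketch below shows that form with assumed median and dispersion values, not the study's fitted parameters.

```python
# Hedged sketch: probability of reaching a damage state as a lognormal CDF
# of PGA, evaluated over the PGA range used as input in the study.
import numpy as np
from scipy.stats import norm

pga = np.arange(0.1, 1.05, 0.05)      # PGA levels, in g
theta = 0.45                          # assumed median capacity (g) for one damage state
beta = 0.5                            # assumed lognormal dispersion

p_exceed = norm.cdf(np.log(pga / theta) / beta)   # P(damage state | PGA)
for a, p in zip(pga, p_exceed):
    print(f"PGA = {a:.2f} g -> P(exceedance) = {p:.2f}")
# A simple system curve for two components in series would follow as
# p_sys = 1 - (1 - p_component_1) * (1 - p_component_2).
```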
Procedia PDF Downloads 435
309 Demographic Shrinkage and Reshaping Regional Policy of Lithuania in Economic Geographic Context
Authors: Eduardas Spiriajevas
Abstract:
Since the end of the 20th century, when Lithuania regained its independence, a process of demographic shrinkage started. Recently, it affects the efficiency of implementation of actions related to regional development policy and geographic scopes of created value added in the regions. The demographic structures of human resources reflect onto the regions and their economic geographic environment. Due to reshaping economies and state reforms on restructuration of economic branches such as agriculture and industry, it affects the economic significance of services’ sector. These processes influence the competitiveness of labor market and its demographic characteristics. Such vivid consequences are appropriate for the structures of human migrations, which affected the processes of demographic ageing of human resources in the regions, especially in peripheral ones. These phenomena of modern times induce the demographic shrinkage of society and its economic geographic characteristics in the actions of regional development and in regional policy. The internal and external migrations of population captured numerous regional economic disparities, and influenced on territorial density and concentration of population of the country and created the economies of spatial unevenness in such small geographically compact country as Lithuania. The processes of territorial reshaping of distribution of population create new regions and their economic environment, which is not corresponding to the main principles of regional policy and its power to create the well-being and to promote the attractiveness for economic development. These are the new challenges of national regional policy and it should be researched in a systematic way of taking into consideration the analytical approaches of regional economy in the context of economic geographic research methods. A comparative territorial analysis according to administrative division of Lithuania in relation to retrospective approach and introduction of method of location quotients, both give the results of economic geographic character with cartographic representations using the tools of spatial analysis provided by technologies of Geographic Information Systems. A set of these research methods provide the new spatially evidenced based results, which must be taken into consideration in reshaping of national regional policy in economic geographic context. Due to demographic shrinkage and increasing differentiation of economic developments within the regions, an input of economic geographic dimension is inevitable. In order to sustain territorial balanced economic development, there is a need to strengthen the roles of regional centers (towns) and to empower them with new economic functionalities for revitalization of peripheral regions, and to increase their economic competitiveness and social capacities on national scale.Keywords: demographic shrinkage, economic geography, Lithuania, regions
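The location quotient method introduced above reduces to a simple share-of-shares ratio; the sketch below computes it for a few sectors with placeholder employment figures rather than Lithuanian statistics.

```python
# Location quotient (LQ): the region's employment share in a sector divided
# by the national share. Figures are placeholders for illustration.
regional = {"agriculture": 12_000, "industry": 18_000, "services": 45_000}
national = {"agriculture": 80_000, "industry": 250_000, "services": 900_000}

reg_total = sum(regional.values())
nat_total = sum(national.values())
for sector in regional:
    lq = (regional[sector] / reg_total) / (national[sector] / nat_total)
    print(f"{sector:12s} LQ = {lq:.2f}")   # LQ > 1 indicates regional specialization
```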
Procedia PDF Downloads 160
308 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, the existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
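A simplified, single-node sketch of the classification step is given below: an SVM with a hyperbolic tangent (sigmoid) kernel, with fuzzy-rough memberships approximated as per-sample weights. The distributed Hadoop/MapReduce layer and the exact PFRSVM membership function are not reproduced; the distance-based membership heuristic shown is an assumption.

```python
# Single-node sketch: tanh-kernel SVM with membership-like sample weights
# standing in for the fuzzy rough membership of PFRSVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Assumed membership: samples far from their class centre get lower weight,
# which damps the influence of noisy points, as the fuzzy rough model intends.
centres = {c: X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)}
dist = np.array([np.linalg.norm(x - centres[c]) for x, c in zip(X_tr, y_tr)])
membership = 1.0 - dist / (dist.max() + 1e-9)

clf = SVC(kernel="sigmoid", C=10.0, gamma="scale")   # hyperbolic tangent kernel
clf.fit(X_tr, y_tr, sample_weight=membership)
print("accuracy:", clf.score(X_te, y_te))
print("support vectors:", clf.n_support_.sum())
```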
Procedia PDF Downloads 490
307 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies, and predictive model types or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where protein value has been determined by the FOSS Infratec NOVA which is the golden industry standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression analysis is the problem to solve while variety classification analysis is the problem to solve in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data reducing the need for advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
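Two of the preprocessing steps listed above, SNV and Savitzky-Golay filtering, can be applied to row-wise spectra as in the sketch below; the window length and polynomial order are typical choices, not necessarily those used in the paper.

```python
# Sketch: standard normal variate (SNV) and Savitzky-Golay (SG) filtering
# applied to NIR spectra stored row-wise in a NumPy array.
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra: np.ndarray) -> np.ndarray:
    """Centre and scale each spectrum by its own mean and standard deviation."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

spectra = np.random.rand(10, 224)          # placeholder: 10 spectra, 224 bands
smoothed = savgol_filter(snv(spectra), window_length=11, polyorder=2,
                         deriv=1, axis=1)  # SG smoothing plus first derivative
print(smoothed.shape)
```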
Procedia PDF Downloads 99
306 Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells
Authors: Victorita Radulescu
Abstract:
Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level, both in periods of intense irrigation and in the absence of irrigation, in times of drought. Currently in Romania, in the southern part of the country, the Baragan area, many agricultural lands are confronted with the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic Methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was used, applied to the porous medium, for the water mass balance equation. With a proper structure of the initial and boundary conditions, the flow in drainage or injection well systems may be modeled, according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the reality of the pollutant emissaries, following the double-step method. Major Findings/Results: The drainage condition is equivalent to operating regimes on the two or three rows of wells, with negative rates so as to assure the pollutant transport, modeled with variable flow in groups of two adjacent nodes. In order to obtain the level of the water table in accordance with the real constraints, it is necessary, for example, to restrict its top level below an imposed value required at each node. The objective function consists of a sum of the absolute values of the differences of the infiltration flow rates, increased by a large penalty factor when there are positive values of pollutant. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The parameters modified during the optimization process are the spatial coordinates and the drainage flows through the wells. Conclusions: The presented calculation scheme was applied to an area having a cross-section of 50 km between two emissaries with various altitudes and different values of pollution. The input data were correlated with the measurements made in situ, such as the level of the bedrock, the grain size of the field, the slope, etc. This method of calculation can also be extended to determine the variation of the groundwater in the aquifer following flood wave propagation in the emissaries.
Keywords: environmental protection, infiltrations, numerical modeling, pollutant transport through soils
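For illustration, the Boussinesq equation mentioned above can be marched in one dimension with explicit finite differences, as sketched below; the paper itself uses the finite element method, and all parameter values here are placeholders.

```python
# Illustrative 1-D sketch of the Boussinesq equation for an unconfined aquifer,
# S*dh/dt = d/dx(K*h*dh/dx), solved with explicit finite differences.
import numpy as np

K, S = 8.0, 0.15          # hydraulic conductivity (m/day), storativity (assumed)
L, nx = 5000.0, 101       # domain length (m) and number of nodes
dx = L / (nx - 1)
dt = 0.05                 # days; small enough for explicit stability here
h = np.full(nx, 20.0)     # initial water-table elevation above bedrock, m
h_left, h_right = 22.0, 18.0   # boundary levels at the two emissaries (assumed)

for _ in range(int(365 / dt)):            # one year
    h[0], h[-1] = h_left, h_right
    flux = K * 0.5 * (h[1:] + h[:-1]) * (h[1:] - h[:-1]) / dx   # K*h*dh/dx at faces
    h[1:-1] += dt / S * (flux[1:] - flux[:-1]) / dx
print("water-table profile (m):", np.round(h[::20], 2))
```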
Procedia PDF Downloads 155
305 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks
Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba
Abstract:
Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project aims to address this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete’s movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture to further refine pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data analysis can also be used to develop personalized training plans that target specific weaknesses identified in an athlete’s movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations or depth sensing techniques. These approaches can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset is collected from the labs of Martin Luther King at San Jose State University. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system’s ability to operate in varied environmental conditions and further expanding the dataset for training were identified and discussed. Future work will refine the model’s adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN
Procedia PDF Downloads 54
304 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms
Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic
Abstract:
Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general are often lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption which accounts for large volume of gas reserves. Gas production from tight shale deposits are made possible by extensive and deep well fracturing which contacts large fractions of the formation. The conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecast shale gas production by detailed modeling of gas desorption, diffusion and non-linear flow mechanisms in combination with statistical representation of these processes. The representation of the model involves a cube as a porous media where free gas is present and a sphere (SiC: Sphere in Cube model) inside it where gas is adsorbed on to the kerogen or organic matter. Further, the sphere is considered consisting of many layers of adsorbed gas in an onion-like structure. With pressure decline, the gas desorbs first from the outer most layer of sphere causing decrease in its molecular concentration. The new available surface area and change in concentration triggers the diffusion of gas from kerogen. The process continues until all the gas present internally diffuses out of the kerogen, gets adsorbs onto available surface area and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by sphere diameter and length of the cube. The diameter allows to model gas storage, diffusion and desorption; the cube length takes into account the pathway for flow in nanopores and micro-fractures. Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes as well as clarifies the geological conditions under which a successful shale gas production could be expected. A numerical model has been derived which is then compiled on FORTRAN to develop a simulator for the production of shale gas by considering the spheres as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly access gas production rates from shale formations. We also examine the effect of model input properties on gas production.Keywords: adsorption, diffusion, non-linear flow, shale gas production
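The adsorbed-gas part of the sphere-in-cube idea rests on a Langmuir-type isotherm; the sketch below shows how the gas released by a pressure decline follows from the difference of two isotherm values. The Langmuir constants are generic illustrative numbers, not parameters from the paper.

```python
# Sketch: Langmuir isotherm for adsorbed gas on kerogen; gas released as the
# pressure declines is the difference between two isotherm values.
V_L = 3.0e-3      # Langmuir volume, sm^3 of gas per kg of rock (assumed)
p_L = 4.0e6       # Langmuir pressure, Pa (assumed)

def adsorbed(p):
    """Langmuir isotherm: adsorbed gas per kg of rock at pressure p."""
    return V_L * p / (p_L + p)

p_init, p_now = 25e6, 10e6                 # reservoir pressure decline, Pa
released = adsorbed(p_init) - adsorbed(p_now)
print(f"desorbed gas: {released*1e3:.3f} sm^3 per tonne of rock")
```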
Procedia PDF Downloads 165
303 Predictors of Pericardial Effusion Requiring Drainage Following Coronary Artery Bypass Graft Surgery: A Retrospective Analysis
Authors: Nicholas McNamara, John Brookes, Michael Williams, Manish Mathew, Elizabeth Brookes, Tristan Yan, Paul Bannon
Abstract:
Objective: Pericardial effusions are an uncommon but potentially fatal complication after cardiac surgery. The goal of this study was to describe the incidence and risk factors associated with the development of pericardial effusion requiring drainage after coronary artery bypass graft surgery (CABG). Methods: A retrospective analysis was undertaken using prospectively collected data. All adult patients who underwent CABG at our institution between 1st January 2017 and 31st December 2018 were included. Pericardial effusion was diagnosed using transthoracic echocardiography (TTE) performed for clinical suspicion of pre-tamponade or tamponade. Drainage was undertaken if considered clinically necessary and performed via a sub-xiphoid incision, pericardiocentesis, or via re-sternotomy at the discretion of the treating surgeon. Patient demographics, operative characteristics, anticoagulant exposure, and postoperative outcomes were examined to identify those variables associated with the development of pericardial effusion requiring drainage. Tests of association were performed using the Fischer exact test for dichotomous variables and the Student t-test for continuous variables. Logistic regression models were used to determine univariate predictors of pericardial effusion requiring drainage. Results: Between January 1st, 2017, and December 31st, 2018, a total of 408 patients underwent CABG at our institution, and eight (1.9%) required drainage of pericardial effusion. There was no difference in age, gender, or the proportion of patients on preoperative therapeutic heparin between the study and control groups. Univariate analysis identified preoperative atrial arrhythmia (37.5% vs 8.8%, p = 0.03), reduced left ventricular ejection fraction (47% vs 56%, p = 0.04), longer cardiopulmonary bypass (130 vs 84 min, p < 0.01) and cross-clamp (107 vs 62 min, p < 0.01) times, higher drain output in the first four postoperative hours (420 vs 213 mL, p <0.01), postoperative atrial fibrillation (100% vs 32%, p < 0.01), and pleural effusion requiring drainage (87.5% vs 12.5%, p < 0.01) to be associated with development of pericardial effusion requiring drainage. Conclusion: In this study, the incidence of pericardial effusion requiring drainage was 1.9%. Several factors, mainly related to preoperative or postoperative arrhythmia, length of surgery, and pleural effusion requiring drainage, were identified to be associated with developing clinically significant pericardial effusions. High clinical suspicion and low threshold for transthoracic echo are pertinent to ensure this potentially lethal condition is not missed.Keywords: coronary artery bypass, pericardial effusion, pericardiocentesis, tamponade, sub-xiphoid drainage
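The univariate tests of association described above can be reproduced in outline as below: a Fisher exact test for a dichotomous predictor and a univariate logistic regression for a continuous one. The 2x2 counts are reconstructed approximately from the reported percentages, and the regression uses simulated data, purely for illustration.

```python
# Sketch of the univariate analysis: Fisher exact test for a dichotomous
# predictor and univariate logistic regression for a continuous predictor.
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact
import statsmodels.api as sm

np.random.seed(0)

# Postoperative AF (rows) vs pericardial effusion requiring drainage (cols);
# counts approximated from the reported 100% vs 32% AF rates, 8 of 408 cases.
table = np.array([[8, 128],     # AF:    effusion / no effusion
                  [0, 272]])    # no AF: effusion / no effusion
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher exact p = {p_value:.4f}")

# Univariate logistic regression: cardiopulmonary bypass time as predictor
# (simulated values standing in for the study data).
df = pd.DataFrame({"cpb_min": np.random.normal(90, 25, 408),
                   "effusion": np.random.binomial(1, 0.02, 408)})
logit = sm.Logit(df["effusion"], sm.add_constant(df["cpb_min"])).fit(disp=0)
print(logit.summary2().tables[1])          # coefficient, SE, p-value
```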
Procedia PDF Downloads 161
302 A Research Study of the Inclusiveness of VR Headsets for Higher Education
Authors: Fredrick Forster, Gareth Ward, Matthew Tubby, Pamela Lithgow, Anne Nortcliffe
Abstract:
This paper presents the results from a research study of random adult participants accessing one of four different commercially available Virtual Reality (VR) Head Mounted Displays (HMDs) and completing a post user experience reflection questionnaire. The research sort to understand how inclusive commercially available VR HMDs are and identify any associated barriers that could impact the widespread adoption of the devices, specifically in Higher Education (HE). In the UK, education providers are legally required under the Equality Act 2010 to ensure all education facilities are inclusive and reasonable adjustments can be applied appropriately. The research specifically aimed to identify the considerations that academics and learning technologists need to make when adopting the use of commercial VR HMDs in HE classrooms, namely cybersickness, user comfort, Interpupillary Distance, inclusiveness, and user perceptions of VR. The research approach was designed to build upon previously published research on user reflections on presence, usability, and overall HMD comfort, using quantitative and qualitative research methods by way of a questionnaire. The quantitative data included the recording of physical characteristics such as the distance between eye pupils, known as Interpupillary Distance (IPD). VR HMDs require each user’s IPD measurement to enable the focusing of the VR HMDs virtual camera output to the right position in front of the eyes of the user. In addition, the questionnaire captured users’ qualitative reflections and evaluations of the broader accessibility characteristics of the VR HMDs. The initial research activity was accomplished by enabling a random sample of visitors, staff, and students at Canterbury Christ Church University, Kent to use a VR HMD for a set period of time and asking them to complete the post user experience questionnaire. The study identified that there is little correlation between users who experience cyber sickness and car sickness. Also, users with a smaller IPD than average (typically associated with females) were able to use the VR HMDs successfully; however, users with a larger than average IPD reported an impeded experience. This indicates that there is reduced inclusiveness for the tested VR HMDs for users with a higher-than-average IPD which is typically associated with males of certain ethnicities. As action education research, these initial findings will be used to refine the research method and conduct further investigations with the aim to provide verification and validation of the accessibility of current commercial VR HMDs. The conference presentation will report on the research results of the initial study and subsequent follow up studies with a larger variety of adult volunteers.Keywords: virtual reality, education technology, inclusive technology, higher education
Procedia PDF Downloads 68
301 Preparation of β-Polyvinylidene Fluoride Film for Self-Charging Lithium-Ion Battery
Authors: Nursultan Turdakyn, Alisher Medeubayev, Didar Meiramov, Zhibek Bekezhankyzy, Desmond Adair, Gulnur Kalimuldina
Abstract:
In recent years, the development of sustainable energy sources has attracted extensive research interest due to the ever-growing demand for energy. As an alternative energy source to power small electronic devices, ambient energy harvesting from vibration or human body motion is considered a potential candidate. Despite the enormous progress in battery research in terms of safety, life cycle, and energy density over roughly three decades, batteries have not reached the level needed to conveniently power wearable electronic devices such as smartwatches, bands, hearing aids, etc. For this reason, the development of self-charging power units with excellent flexibility and integrated energy harvesting and storage is crucial. Self-powering is a key idea that makes it possible for a system to operate sustainably, and it is now gaining acceptance in many fields, including sensor networks, the Internet of Things (IoT), and implantable in-vivo medical devices. To solve this energy harvesting issue, self-powering nanogenerators (NGs) were proposed and have proved highly effective. Usually, sustainable power is delivered through energy harvesting and storage devices connected to a power management circuit; for energy storage, the Li-ion battery (LIB) is one of the most effective technologies. Driven by an externally applied voltage, Li ions move between the electrodes, and the electrochemical reactions at the anode and cathode store electrical energy as chemical energy. In this paper, we present a process that converts mechanical energy into chemical energy directly, with the NG and LIB combined into an all-in-one power system. The electrospinning method was used as an initial step for the development of such a system with a β-PVDF separator. The obtained film showed promising voltage output at different stress frequencies. X-ray diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FT-IR) analysis showed a high percentage of the β phase in the PVDF material. Moreover, it was found that the addition of 1 wt.% of BTO (barium titanate) results in higher-quality fibers. When a pure 20 wt.% PVDF solution was compared with the BTO-added solution, the latter was more viscous; hence, that sample was electrospun uniformly without any beads. Lastly, to test the sensor application of such a film, a dedicated testing device was developed. With this device, the force of a finger tap can be applied at different frequencies so that electrical signal generation can be validated. Keywords: electrospinning, nanogenerators, piezoelectric PVDF, self-charging li-ion batteries
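The abstract reports a high β-phase percentage from FT-IR without showing the calculation; a relation commonly used in the PVDF literature estimates the β-phase fraction from the absorbances of the α- and β-characteristic bands. The sketch below applies that relation to hypothetical absorbance readings; the band positions, coefficient ratio, and sample values are assumptions, not data from this study.

```python
# Hedged sketch: estimating the beta-phase fraction of an electrospun PVDF film
# from FT-IR absorbances, using a relation commonly applied in the PVDF literature
# (characteristic bands near 763 cm^-1 for alpha and 840 cm^-1 for beta).
# The absorbance values are hypothetical; the abstract only reports a
# "high percentage of beta phase" without raw numbers.

K_RATIO = 1.26  # commonly quoted ratio of absorption coefficients K_beta / K_alpha

def beta_phase_fraction(a_alpha: float, a_beta: float) -> float:
    """Relative beta-phase fraction F(beta) from FT-IR band absorbances."""
    return a_beta / (K_RATIO * a_alpha + a_beta)

# Illustrative readings for a neat PVDF film and a film with 1 wt.% BTO added
samples = {"PVDF 20 wt.%": (0.18, 0.62), "PVDF + 1 wt.% BTO": (0.12, 0.71)}
for name, (a_alpha, a_beta) in samples.items():
    print(f"{name}: F(beta) = {beta_phase_fraction(a_alpha, a_beta):.2f}")
```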
Procedia PDF Downloads 162
300 Challenging Weak Central Coherence: An Exploration of Neurological Evidence from Visual Processing and Linguistic Studies in Autism Spectrum Disorder
Authors: Jessica Scher Lisa, Eric Shyman
Abstract:
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by persistent deficits in social communication and social interaction (i.e., deficits in social-emotional reciprocity, nonverbal communicative behaviors, and establishing/maintaining social relationships), as well as by the presence of repetitive behaviors and perseverative areas of interest (i.e., stereotyped or repetitive motor movements, use of objects, or speech; rigidity; restricted interests; and hypo- or hyper-reactivity to sensory input or unusual interest in sensory aspects of the environment). Additionally, a diagnosis of ASD requires the presentation of symptoms in the early developmental period, marked impairments in adaptive functioning, and a lack of explanation by general intellectual impairment or global developmental delay (although these conditions may be co-occurring). Over the past several decades, many theories have been developed in an effort to explain the root cause of ASD in terms of atypical central cognitive processes. The field of neuroscience is increasingly finding structural and functional differences between autistic and neurotypical individuals using neuroimaging technology. One main area this research has focused upon is visuospatial processing, with specific attention to the notion of 'weak central coherence' (WCC). This paper analyzes findings from selected studies to explore research that challenges the 'deficit' characterization of weak central coherence in favor of a 'superiority' characterization of strong local coherence. The weak central coherence theory has long been both supported and refuted in the ASD literature and has most recently been increasingly challenged by advances in neuroscience. The selected studies lend evidence to the notion of amplified localized perception rather than deficient global perception. In other words, WCC may represent superiority in 'local processing' rather than a deficit in global processing. Additionally, the right hemisphere, and specifically the extrastriate area, appears to be key in both visual and lexicosemantic processing. Overactivity in the striate region seems to suggest inaccuracy in semantic language, which supports the link between the striate region and the atypical organization of the lexicosemantic system in ASD. Keywords: autism spectrum disorder, neurology, visual processing, weak coherence
Procedia PDF Downloads 127
299 Fibrin Glue Reinforcement of Choledochotomy Closure Suture Line for Prevention of Bile Leak in Patients Undergoing Laparoscopic Common Bile Duct Exploration with Primary Closure: A Pilot Study
Authors: Rahul Jain, Jagdish Chander, Anish Gupta
Abstract:
Introduction: Laparoscopic common bile duct exploration (LCBDE) allows cholecystectomy and the removal of common bile duct (CBD) stones to be performed during the same sitting, thereby decreasing hospital stay. A choledochotomy for CBD exploration can be closed primarily with an absorbable suture material, but this can lead to postoperative biliary leakage. In this study, we tried to lower the incidence of bile leakage further by using fibrin glue to reinforce the choledochotomy suture line. Fibrin glue has a haemostatic and sealing action, strengthening the last step of physiological coagulation, and a biostimulatory effect that favours the formation of new tissue matrix. Methodology: This study was conducted at a tertiary care teaching hospital in New Delhi, India, from 2011 to 2013. 20 patients with CBD stones documented on MRCP and a CBD diameter of 9 mm or more were included in this study. Patients were randomized into two groups: Group A, in which the choledochotomy was closed with polyglactin 4-0 suture and the suture line reinforced with fibrin glue, and Group B, in which the choledochotomy was closed with polyglactin 4-0 suture alone. Both groups were evaluated and compared on clinical parameters such as operative time, drain content, drain output, number of days the drain was required, blood loss and transfusion requirements, length of postoperative hospital stay, and conversion to open surgery. Results: The operative time for Group A ranged from 60 to 210 min (mean 131.50 min) and for Group B from 65 to 300 min (mean 140 min). The blood loss in Group A ranged from 10 to 120 ml (mean 51.50 ml); in Group B it ranged from 10 to 200 ml (mean 53.50 ml). In Group A there was no case of bile leak, whereas there were 2 cases of bile leak in Group B (minimum 0 and maximum 900 ml, mean 97 ml), with a p value of 0.147, indicating no statistically significant difference in bile leak between the test and control groups. The minimum and maximum serous drainage in Group A were nil and 80 ml (mean 11 ml), and in Group B nil and 270 ml (mean 72.50 ml). The p value was 0.028, which is statistically significant; thus, serous leakage in Group A was significantly less than in Group B. The drains in Group A were removed after 2 to 4 days (mean 3 days), while in Group B after 2 to 9 days (mean 3.9 days). Patients in Group A stayed in hospital postoperatively from 3 to 8 days (mean 5.30), while in Group B the stay ranged from 3 to 10 days with a mean of 5 days. Conclusion: Fibrin glue application on the CBD decreases bile leakage, but not to a statistically significant degree. Fibrin glue application on the CBD can significantly decrease postoperative serous drainage after LCBDE. Fibrin glue application on the CBD is a safe and easy technique without any significant adverse effects and can help less experienced surgeons performing LCBDE. Keywords: bile leak, fibrin glue, LCBDE, serous leak
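A hedged sketch of the kind of between-group comparison reported above; the abstract gives p values but not the exact tests for drain volumes, so a Welch t-test and a Mann-Whitney U test are shown as plausible choices, and all per-patient values are illustrative rather than study data.

```python
# Hedged sketch of the between-group comparison of serous drainage volumes.
# The per-patient values (mL) are hypothetical; only the group sizes (10 vs 10)
# and the general analysis idea come from the abstract.
from scipy import stats

serous_glue   = [0, 5, 10, 0, 15, 20, 80, 0, 0, 0]       # Group A (fibrin glue), illustrative
serous_suture = [0, 40, 120, 30, 0, 270, 90, 60, 75, 40]  # Group B (suture only), illustrative

t_stat, p_t = stats.ttest_ind(serous_glue, serous_suture, equal_var=False)
u_stat, p_u = stats.mannwhitneyu(serous_glue, serous_suture, alternative="two-sided")
print(f"Welch t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
```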
Procedia PDF Downloads 215
298 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn
Abstract:
The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human errors. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, different formulae were input into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in the titrimetric evaluation of ZnSO4 tablets. Human errors in calculations were minimized when procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models. Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets
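A hedged sketch of the two automated calculation steps described above, titrant standardization and the complexometric zinc assay, together with a simple replicate-consistency check of the kind the spreadsheets build in. The acceptance limit, sample masses, and titre volumes are illustrative assumptions, not the USP values or the study's data; only the 1:1 Zn²⁺:EDTA stoichiometry is standard chemistry.

```python
# Hedged sketch of the spreadsheet calculations: (1) standardization of the EDTA
# titrant against a zinc primary standard, and (2) the complexometric assay of a
# composite tablet sample, with a replicate-RSD gate mirroring the built-in
# validity checks. Masses, volumes, and the RSD limit are illustrative only.
import statistics

ZN_ATOMIC_MASS = 65.38  # g/mol; EDTA binds Zn2+ in a 1:1 complex

def edta_molarity(zn_mass_mg: float, titre_ml: float) -> float:
    """Molarity of EDTA from a zinc primary-standard titration (1:1 stoichiometry)."""
    moles_zn = (zn_mass_mg / 1000.0) / ZN_ATOMIC_MASS
    return moles_zn / (titre_ml / 1000.0)

def zinc_per_tablet_mg(titre_ml: float, molarity: float,
                       sample_mass_g: float, avg_tablet_mass_g: float) -> float:
    """mg of elemental zinc per tablet from the assay titration."""
    mg_zn_in_sample = titre_ml * molarity * ZN_ATOMIC_MASS  # mL * mol/L * g/mol = mg
    return mg_zn_in_sample * avg_tablet_mass_g / sample_mass_g

# Replicate standardizations (illustrative data) gated by an RSD check
replicates = [edta_molarity(m, v) for m, v in [(130.5, 20.1), (131.0, 20.2), (129.8, 19.9)]]
rsd = 100 * statistics.stdev(replicates) / statistics.mean(replicates)
if rsd > 1.0:  # illustrative acceptance limit
    raise ValueError(f"Replicate RSD {rsd:.2f}% exceeds limit; repeat standardization")

molarity = statistics.mean(replicates)
assay_mg = zinc_per_tablet_mg(titre_ml=9.8, molarity=molarity,
                              sample_mass_g=0.65, avg_tablet_mass_g=0.65)
print(f"EDTA molarity: {molarity:.4f} M, zinc per tablet: {assay_mg:.1f} mg")
```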
Procedia PDF Downloads 169
297 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process described as Automated Fibre Placement (AFP). AFP is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding, and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding activates, through the application of heat, the binding agent already impregnated in the tape. The stitching method, the traditional technique currently in use, is used as a baseline against which to compare the new splicing methods. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment, and splicing parameters; these were then tested in tension using a tensile testing machine. The initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, and examined the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm, with no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the stitched-bond baseline. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N load achieved by a stitching splice and 94 N by the binding method. Keywords: analysis, automated fibre placement, high speed, splicing
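A small hedged sketch of how the tensile results for the three splicing methods could be summarized against the 50 N load requirement stated above; the individual sample loads are invented for illustration and only the group-level maxima quoted in the abstract guided their magnitudes.

```python
# Hedged sketch: summarizing tensile results per splicing method and checking
# them against the 50 N requirement. Per-sample loads are illustrative; only
# the group maxima quoted in the abstract (adhesive > 500 N, binding 94 N,
# stitching 26 N) guided them.
REQUIRED_LOAD_N = 50.0

results_n = {
    "stitching": [22.0, 26.0, 24.5],
    "binding":   [88.0, 94.0, 90.5],
    "adhesive":  [480.0, 505.0, 512.0],
}

for method, loads in results_n.items():
    peak = max(loads)
    mean = sum(loads) / len(loads)
    verdict = "PASS" if min(loads) >= REQUIRED_LOAD_N else "FAIL"
    print(f"{method:>9}: peak {peak:.0f} N, mean {mean:.0f} N -> {verdict}")
```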
Procedia PDF Downloads 155
296 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia
Authors: Sridhar A. Malkaram, Tamer E. Fandy
Abstract:
Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase SIRT6. Accordingly, we compared the genome-wide H3K9 acetylation changes induced by both drugs in leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified treatment-naive acute myeloid leukemia (AML) patients were cultured with either 500 nM of DC or 5AC for 72 h, followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. ChIP-Seq libraries were prepared from treated and untreated cells using the SMARTer ThruPLEX DNA-seq kit (Takara Bio, USA) according to the manufacturer's instructions. Libraries were purified and size-selected with AMPure XP beads at a 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at a 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20 or length < 35 bp, PCR duplicates, and reads aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac-enriched (peak) regions were identified with diffReps v1.55.4, using input samples for background correction. The statistical significance of differential peak counts was assessed with a negative binomial test, using all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj < 0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions. Ten genes showed H3K9 acetylation changes with both drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease with 5AC versus only 54% with DC. Figures 1 and 2 show the heatmaps for the top 100 genes and the 99 genes showing an H3K9 acetylation decrease after 5AC and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effects of the two drugs on H3K9 acetylation were significantly different, with more H3K9 acetylation changes observed after 5AC treatment than after DC. The impact of these changes on gene expression and the clinical efficacy of these drugs requires further investigation. Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics
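A hedged sketch of the differential-count idea behind the negative binomial test mentioned above: read counts for one candidate region are compared between treated and untreated samples with a negative binomial GLM, using library size as an offset. diffReps implements its own version of this test; the counts, library sizes, and dispersion value below are hypothetical.

```python
# Hedged sketch of a negative binomial test on H3K9ac read counts within one
# candidate region, treated vs untreated samples. Counts, library sizes, and
# the dispersion (alpha) are illustrative; diffReps performs its own version
# of this analysis internally.
import numpy as np
import statsmodels.api as sm

counts    = np.array([152, 141, 160, 95, 88, 102])   # reads in the region: three 5AC-treated, three untreated (illustrative)
treated   = np.array([1, 1, 1, 0, 0, 0])             # 1 = treated, 0 = untreated
lib_sizes = np.array([2.1e7, 1.9e7, 2.3e7, 2.0e7, 1.8e7, 2.2e7])

design = sm.add_constant(treated)                    # intercept + treatment indicator
model = sm.GLM(counts, design,
               family=sm.families.NegativeBinomial(alpha=0.5),
               offset=np.log(lib_sizes))
fit = model.fit()
print("treatment effect p-value:", fit.pvalues[1])
```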
Procedia PDF Downloads 146
295 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting
Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas
Abstract:
The paper presents the results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, in which the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for the estimation of the individual setup periods. The first group collects product information such as the product name and the number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time, the deviation of the prognosis was improved by 50%. Furthermore, the mentioned parameter groups were also investigated with respect to the manufacturing order. The paper also highlights the experiences of the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur, and the related data are collected. Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation
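A hedged sketch of the estimation setup described above: categorical and numeric descriptors from the four parameter groups are encoded and fed to a small feed-forward neural network that predicts setup period length. The feature names, network architecture, and data file are assumptions for illustration; the paper does not disclose its exact model configuration.

```python
# Hedged sketch of a setup-period regressor built from the four parameter groups
# (product, material, surface/tolerance, setup). Feature names, data file, and
# network size are illustrative, not the authors' configuration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("setup_events.csv")          # hypothetical MES/CAD export
categorical = ["product_name", "material_type", "colour", "machine_type", "work_shift"]
numeric = ["item_count", "finest_surface_um", "tightest_tolerance_mm", "tool_count"]

X = df[categorical + numeric]
y = df["setup_minutes"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])
model = Pipeline([
    ("prep", preprocess),
    ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out setup events:", model.score(X_test, y_test))
```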
Procedia PDF Downloads 245
294 The Flooding Management Strategy in Urban Areas: Reusing Public Facilities Land as Flood-Detention Space for Multi-Purpose
Authors: Hsiao-Ting Huang, Chang Hsueh-Sheng
Abstract:
Taiwan is an island country deeply affected by the monsoon. Under climate change, extreme typhoon rainstorms have become more and more frequent since 2000. When an extreme rainstorm arrives, it causes serious damage in Taiwan, especially in urban areas, which suffer from flooding; the government regards this as an urgent issue. In the past, urban land use planning did not take flood detention into consideration. With the development of cities, impermeable surfaces have increased and most people now live in urban areas. This means that urban areas are highly vulnerable yet cannot cope with the surface runoff and flooding. However, building detention ponds as a purely hydraulic engineering solution is not feasible in urban areas: land expropriation is the most expensive part of constructing a detention pond there, and the government cannot afford it. Therefore, the flooding management strategy in urban areas should use an existing resource, public facilities land. Flood-detention performance can be achieved by providing public facilities land with a detention function. As multi-use public facilities land, it also demonstrates the combination of land use planning and the work of the water agency. To this end, this research generalizes, through a literature review, the factors for the multi-use of public facilities land as flood-detention space. The factors can be divided into two categories: environmental factors and conditions of the public facilities. Environmental factors include three items: terrain elevation, inundation potential, and distance from the drainage system. On the other hand, there are six factors for the conditions of public facilities, including area, building rate, maximum available ratio, etc. Each factor is given a weight according to its characteristics for the land use suitability analysis. This research selects the rules of combination from logical combinations; after this process, the results can be classified into three suitability levels. The three suitability levels are then input into a physiographic inundation model to simulate and evaluate flood detention for each level. This study responds to this urgent urban issue and establishes a model of multi-use of public facilities land as flood-detention space through a systematic research process. The results of this study can indicate which combination of suitability levels is more efficacious. Moreover, the model not only stands on the side of urban planners but also incorporates the point of view of the water agency. These findings may serve as a basis for land use indicators and decision-making references for the government agencies concerned. Keywords: flooding management strategy, land use suitability analysis, multi-use for public facilities land, physiographic inundation model
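A hedged sketch of the weighted land-use suitability scoring outlined above: each candidate public-facility parcel is scored on normalized environmental and facility factors, and the weighted total is binned into three suitability levels. The weights, factor values, and thresholds are illustrative assumptions; the study derives its own weights and combination rules.

```python
# Hedged sketch: weighted suitability scoring of a public-facility parcel,
# binned into three suitability levels. Weights, scores, and thresholds are
# illustrative; the study's own weighting and logical combination rules differ.
WEIGHTS = {
    "terrain_elevation": 0.15,      # lower elevation assumed more suitable for detention
    "inundation_potential": 0.20,
    "distance_to_drainage": 0.15,
    "area": 0.20,
    "building_rate": 0.15,
    "max_available_ratio": 0.15,
}

def suitability_level(factors: dict) -> str:
    """Weighted sum of factor scores (each normalized to 0-1), binned into 3 levels."""
    score = sum(WEIGHTS[name] * value for name, value in factors.items())
    if score >= 0.7:
        return "high suitability"
    if score >= 0.4:
        return "medium suitability"
    return "low suitability"

parcel = {  # illustrative normalized scores for one public-facility parcel
    "terrain_elevation": 0.8, "inundation_potential": 0.9, "distance_to_drainage": 0.6,
    "area": 0.7, "building_rate": 0.5, "max_available_ratio": 0.6,
}
print(suitability_level(parcel))  # suitable parcels would then feed the inundation model
```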
Procedia PDF Downloads 357
293 Automated Building Internal Layout Design Incorporating Post-Earthquake Evacuation Considerations
Authors: Sajjad Hassanpour, Vicente A. González, Yang Zou, Jiamou Liu
Abstract:
Earthquakes pose a significant threat to both structural and non-structural elements in buildings, putting human lives at risk. Effective post-earthquake evacuation is critical for ensuring the safety of building occupants. However, current design practices often neglect the integration of post-earthquake evacuation considerations into the early-stage architectural design process. To address this gap, this paper presents a novel automated internal architectural layout generation tool that optimizes post-earthquake evacuation performance. The tool takes an initial plain floor plan as input, along with specific requirements from the user/architect, such as minimum room dimensions, corridor width, and exit lengths. Based on these inputs, the tool first randomly generates different architectural layouts. Second, human post-earthquake evacuation behaviour is thoroughly assessed for each generated layout using the advanced Agent-Based Building Earthquake Evacuation Simulation (AB2E2S) model. The AB2E2S prototype is a post-earthquake evacuation simulation tool that incorporates variables related to earthquake intensity, architectural layout, and human factors. It leverages a hierarchical agent-based simulation approach, incorporating reinforcement learning to mimic human behaviour during evacuation. The model evaluates the different layout options and provides feedback on evacuation flow, evacuation time, and possible casualties due to non-structural earthquake damage. By integrating the AB2E2S model into the automated layout generation tool, architects and designers can obtain optimized architectural layouts that prioritize post-earthquake evacuation performance. Through the use of the tool, architects and designers can explore various design alternatives, considering different minimum room requirements, corridor widths, and exit lengths. This approach ensures that evacuation considerations are embedded in the early stages of the design process. In conclusion, this research presents an innovative automated internal architectural layout generation tool that integrates post-earthquake evacuation simulation. By incorporating evacuation considerations into the early-stage design process, architects and designers can optimize building layouts for improved post-earthquake evacuation performance. This tool empowers professionals to create resilient designs that prioritize the safety of building occupants in the face of seismic events. Keywords: agent-based simulation, automation in design, architectural layout, post-earthquake evacuation behavior
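A hedged sketch of the generate-and-evaluate loop described above: candidate layouts satisfying the user constraints are produced at random and ranked by evacuation metrics returned from the simulation. The function names, constraint fields, and ranking rule are placeholders; the actual tool and the AB2E2S model are not public APIs, so both are stubbed.

```python
# Hedged sketch of the layout generate-and-evaluate loop. generate_layout and
# simulate_evacuation are placeholders standing in for the actual tool and the
# AB2E2S model; constraint fields and the ranking rule are assumptions.
import random
from dataclasses import dataclass

@dataclass
class Constraints:
    min_room_area_m2: float
    corridor_width_m: float
    max_exit_distance_m: float

def generate_layout(floor_plan, constraints, rng):
    """Placeholder: partition the plain floor plan into rooms respecting constraints."""
    return {"rooms": rng.randint(4, 12), "corridor_width_m": constraints.corridor_width_m}

def simulate_evacuation(layout, intensity, rng):
    """Placeholder for the agent-based evacuation simulation; returns evacuation metrics."""
    return {"evac_time_s": rng.uniform(60, 300), "casualties": rng.randint(0, 3)}

def best_layout(floor_plan, constraints, n_candidates=50, intensity="MMI VIII", seed=0):
    rng = random.Random(seed)
    candidates = [generate_layout(floor_plan, constraints, rng) for _ in range(n_candidates)]
    scored = [(simulate_evacuation(c, intensity, rng), c) for c in candidates]
    # Rank primarily by expected casualties, then by total evacuation time
    scored.sort(key=lambda sc: (sc[0]["casualties"], sc[0]["evac_time_s"]))
    return scored[0]

metrics, layout = best_layout(floor_plan=None, constraints=Constraints(9.0, 1.8, 30.0))
print(metrics, layout)
```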
Procedia PDF Downloads 104