Search results for: AR linear estimation
3723 FRP Bars Spacing Effect on Numerical Thermal Deformations in Concrete Beams under High Temperatures
Authors: A. Zaidi, F. Khelifi, R. Masmoudi, M. Bouhicha
Abstract:
In order to eliminate the degradation of reinforced concrete structures due to steel corrosion, construction professionals suggest using fiber reinforced polymers (FRP) for their excellent properties. Nevertheless, high temperatures may affect the bond between the FRP bar and the concrete, and consequently the serviceability of FRP-reinforced concrete structures. This paper presents a nonlinear numerical investigation, using ADINA software, of the effect of the spacing between glass FRP (GFRP) bars embedded in concrete on circumferential thermal deformations and on the distribution of radial thermal cracks in reinforced concrete beams subjected to high temperature variations of up to 60 °C, treated as asymmetric problems. The thermal deformations predicted by the nonlinear finite element model, at the FRP bar/concrete interface and at the external surface of the concrete cover, were established as functions of the ratio of concrete cover thickness to FRP bar diameter (c/db) and of the ratio of the spacing between FRP bars to FRP bar diameter (e/db). Numerical results show that the circumferential thermal deformations at the external surface of the concrete cover are linear up to a cracking thermal load that varies from 32 to 55 °C as the ratio e/db varies from 1.3 to 2.3, respectively. However, for ratios e/db > 2.3 and c/db > 1.6, the thermal deformations at the external surface of the concrete cover exhibit linear behavior without any cracks observed on that surface. The numerical results are compared with those obtained from analytical models validated by experimental tests.
Keywords: concrete beam, FRP bars, spacing effect, thermal deformation
Procedia PDF Downloads 202
3722 Inclined Convective Instability in a Porous Layer Saturated with Non-Newtonian Fluid
Authors: Rashmi Dubey
Abstract:
The study aims at investigating the onset of thermal convection in an inclined porous layer saturated with a non-Newtonian fluid. The layer is infinitely extended and of finite width, confined between two boundaries held at constant pressure, with the lower boundary maintained at a higher temperature. Over the years, this area of research has attracted many scientists and researchers, for it has a plethora of applications in science and engineering, such as civil engineering, geothermal sites, and the petroleum industry. Considering the possibilities in a practical scenario, an inclined porous layer is considered, which can be used to develop a generalized model applicable to any inclination. Using the isobaric boundaries, the hydrodynamic boundary conditions are derived for the power-law model and are used to obtain the basic-state flow. The convection in the basic-state flow is driven by thermal buoyancy in the flow system and is carried further by the hydrodynamic boundaries. A linear stability analysis followed by a normal-mode analysis is performed to investigate the onset of convection in the buoyancy-driven flow. The analysis shows that, for the Newtonian fluid, the convective instability is always initiated by the non-traveling modes but prevails in the form of oscillatory modes up to a certain inclination of the porous layer. However, different behavior is observed for dilatant and pseudoplastic fluids.
Keywords: thermal convection, linear stability, porous media flow, inclined porous layer
Procedia PDF Downloads 122
3721 The Gasoil Hydrofining Kinetics Constants Identification
Authors: C. Patrascioiu, V. Matei, N. Nicolae
Abstract:
The paper describes the experiments and the calculation of the kinetic parameters of gasoil hydrofining. Experimental results are presented for gasoil hydrofining over a Mo catalyst promoted with Ni on an alumina support. The authors have adapted a kinetic model for gasoil hydrofining. Using this proposed kinetic model and the experimental data, they have calculated the parameters of the model. The numerical calculation is based on minimizing the difference between the experimental sulfur concentration and the kinetic model estimate.
Keywords: hydrofining, kinetic, modeling, optimization
Procedia PDF Downloads 433
3720 Patient Reported Outcome Measures Post Implant Based Reconstruction Basildon Hospital
Authors: Danny Fraser, James Zhang
Abstract:
Aim of the study: Our study aims to identify any statistically significant findings relating to PROMs for mastectomy and implant-based reconstruction, to guide future surgical management. Method: The demographic, pre- and post-operative treatment and implant characteristics were collected for all patients at Basildon Hospital who underwent breast reconstruction from 2017-2023. We used the Breast-Q psychosocial well-being, physical well-being, and satisfaction with breasts scales. An independent t-test was conducted for each group, and linear regression of age and implant size. Results: 69 patients were contacted, and 39 PROMs were returned. The mean age of patients was 57.6. 40% had smoked before, and 40.8% had BMI > 30. 29 had pre-pectoral placement, and 40 had subpectoral placement. 17 had smooth implants, and 52 textured. Subpectoral placement was associated with higher psychosocial scores than pre-pectoral placement (75.7 vs. 61.9, p=0.046), and textured implants were associated with a lower physical score than smooth-surfaced implants (34.7 vs. 50.2, p=0.046). On linear regression, age was positively associated (p=0.007) with psychosocial score. Conclusion: We present a large cohort of patients who underwent breast reconstruction. Understanding the PROMs of these procedures can guide clinicians, patients and policy makers to be better informed about the course of rehabilitation after these operations. Significance: We have found that, from a patient perspective, subpectoral implant placement was associated with a statistically significant improvement in psychosocial scores.
Keywords: breast surgery, mastectomy, breast implants, oncology
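The placement comparison and the age regression described above can be illustrated with a minimal sketch; the CSV layout, column names, and group labels are hypothetical stand-ins, not the study's data:

```python
# Hedged sketch: independent-samples t-test between implant placement groups
# and a simple linear regression of psychosocial score on age.
import pandas as pd
from scipy import stats

df = pd.read_csv("breast_q_proms.csv")              # hypothetical file and columns

sub = df.loc[df["placement"] == "subpectoral", "psychosocial"]
pre = df.loc[df["placement"] == "prepectoral", "psychosocial"]

t, p = stats.ttest_ind(sub, pre, equal_var=False)   # Welch variant of the t-test
print(f"subpectoral vs. pre-pectoral: t={t:.2f}, p={p:.3f}")

slope, intercept, r, p_reg, se = stats.linregress(df["age"], df["psychosocial"])
print(f"psychosocial score ~ {intercept:.1f} + {slope:.2f} x age (p={p_reg:.3f})")
```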
Procedia PDF Downloads 59
3719 Estimation and Forecasting Debris Flow Phenomena on the Highway of the 'TRACECA' Corridor
Authors: Levan Tsulukidze
Abstract:
The paper considers debris flow phenomena and their forecasting in the 'TRACECA' corridor, using the example of the river Naokhrevistkali and the debris-flow-type channel passing between the villages of Vale-2 and Naokhrevi. As a result of expeditionary and reconnaissance investigations, as well as the use of empirical relationships, the debris flow discharge has been estimated for different debris flow provisions.
Keywords: debris flow, TRACECA corridor, forecasting, river Naokhrevistkali
Procedia PDF Downloads 351
3718 Deconstructing Local Area Networks Using MaatPeace
Authors: Gerald Todd
Abstract:
Recent advances in random epistemologies and ubiquitous theory have paved the way for web services. Given the current status of linear-time communication, cyberinformaticians compellingly desire the exploration of link-level acknowledgements. In order to realize this purpose, we concentrate our efforts on disconfirming that DHTs and model checking are mostly incompatible.
Keywords: LAN, cyberinformatics, model checking, communication
Procedia PDF Downloads 400
3717 Simplified Linear Regression Model to Quantify the Thermal Resilience of Office Buildings in Three Different Power Outage Day Times
Authors: Nagham Ismail, Djamel Ouahrani
Abstract:
Thermal resilience in the built environment reflects the building's capacity to adapt to extreme climate changes. In hot climates, power outages in office buildings pose risks to the health and productivity of workers. Therefore, it is of interest to quantify the thermal resilience of office buildings by developing a user-friendly simplified model. This simplified model begins with creating an assessment metric of thermal resilience that measures the duration between the power outage and the point at which the thermal habitability condition is compromised, considering different power interruption times (morning, noon, and afternoon). In this context, energy simulations of an office building are conducted for Qatar's summer weather by changing different parameters that are related to the (i) wall characteristics, (ii) glazing characteristics, (iii) load, (iv) orientation and (v) air leakage. The simulation results are processed using SPSS to derive linear regression equations, aiding stakeholders in evaluating the performance of commercial buildings during different power interruption times. The findings reveal the significant influence of glazing characteristics on thermal resilience, with the morning power outage scenario posing the most detrimental impact in terms of the shortest duration before compromising thermal resilience.Keywords: thermal resilience, thermal envelope, energy modeling, building simulation, thermal comfort, power disruption, extreme weather
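A minimal sketch of how such a simplified model could be derived from simulation outputs with ordinary least squares; the predictor columns and the target (hours of habitability remaining after an outage) are hypothetical stand-ins for the parameters varied in the study:

```python
# Hedged sketch: fit a multiple linear regression to building-simulation results.
import pandas as pd
import statsmodels.api as sm

runs = pd.read_csv("simulation_runs.csv")          # hypothetical results file
X = runs[["wall_u_value", "glazing_shgc", "internal_load", "air_leakage"]]
y = runs["hours_to_habitability_limit"]            # duration before comfort is lost

model = sm.OLS(y, sm.add_constant(X)).fit()        # coefficients form the simplified model
print(model.summary())
```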
Procedia PDF Downloads 72
3716 Neuron Efficiency in Fluid Dynamics and Prediction of Groundwater Reservoirs' Properties Using Pattern Recognition
Authors: J. K. Adedeji, S. T. Ijatuyi
Abstract:
This research applies a neural network with pattern recognition to study fluid dynamics and to predict the properties of groundwater reservoirs. Geophysical surveys using manual methods have essentially failed in basement environments, hence the need for intelligent computation such as neural network prediction. A non-linear neural network with an XOR (exclusive OR) output in an 8-bit configuration has been used in this research to predict the nature of groundwater reservoirs and the fluid dynamics of a typical basement crystalline rock. The control variables are the apparent resistivity of the weathered layer (ρ1), the fractured layer (ρ2), and the depth (h), while the dependent variable is the flow parameter (F = λ). The algorithm used in training the neural network is back-propagation, coded in C++ with 300 epoch runs. The neural network was able to map out the flow channels and to detect how they behave to form viable storage within the strata. The neural network model showed that an important variable, gr (gravitational resistance), can be deduced from the elevation and the apparent resistivity ρa. The model results from SPSS showed that the coefficients a, b and c are statistically significant, with reduced standard error at the 5% level.
Keywords: gravitational resistance, neural network, non-linear, pattern recognition
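A minimal sketch of a back-propagation training loop of the kind described above, with one 8-unit hidden layer and synthetic stand-in data; the original implementation was in C++, so this is only an illustration:

```python
# Hedged sketch: tiny feedforward network trained by back-propagation to map
# (rho1, rho2, depth) to a flow parameter. Data are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 3))            # hypothetical (rho1, rho2, h), scaled to [0, 1]
y = rng.random((50, 1))            # hypothetical flow parameter, scaled to [0, 1]

W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)     # hidden layer of 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(300):                           # 300 epochs, as in the abstract
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y
    d_out = err * out * (1 - out)                  # back-propagate through the sigmoids
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("final MSE:", float(np.mean(err ** 2)))
```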
Procedia PDF Downloads 211
3715 Smallholder’s Agricultural Water Management Technology Adoption, Adoption Intensity and Their Determinants: The Case of Meda Welabu Woreda, Oromia, Ethiopia
Authors: Naod Mekonnen Anega
Abstract:
The objective of this paper was to empirically identify technology-tailored determinants of the adoption and adoption intensity (extent of use) of agricultural water management technologies in Meda Welabu Woreda, Oromia regional state, Ethiopia. Meda Welabu Woreda, one of the administrative Woredas of the Oromia regional state, was selected purposively because it is one of the Woredas in the region where small-scale irrigation practices and the use of agricultural water management technologies can be found among smallholders. Using existing water management practices (use of water management technologies) and land use pattern as criteria, Genale Mekchira Kebele was selected for the study. A total of 200 smallholders were selected from the Kebele using the technique developed by Krejcie and Morgan. The study employed Logit and Tobit models to estimate and identify the economic, social, geographical, household, institutional, psychological and technological factors that determine adoption and adoption intensity of water management technologies; a sketch of the Logit step is given after this abstract. The study revealed that 55 of the sampled households are adopters of agricultural water management technology, while 140 were non-adopters. Among the adopters included in the sample, 97% use river diversion technology (traditional) with a traditional canal, while 7% use a pond with treadle pump technology. The Logit estimation revealed that while adoption of river diversion is positively and significantly affected by membership in local institutions, active labor force, income, access to credit and land ownership, adoption of treadle pump technology is positively and significantly affected by family size, education level, access to credit, extension contact, income, access to market, and slope. The Logit estimation also revealed that whereas group action requirement, distance to farm, and size of active labor force negatively and significantly influenced adoption of river diversion, age and perception negatively and significantly influenced the adoption decision for treadle pump technology. On the other hand, the Tobit estimation revealed that adoption intensity (extent of use) of agricultural water management is positively and significantly affected by education, access to credit, extension contact, access to market, and income. This study shows that technology-tailored studies on the adoption of agricultural water management technologies (AWMTs) should be considered in order to identify and scale up best agricultural water management practices. In fact, in countries like Ethiopia, where social, economic, cultural, environmental and agro-ecological conditions differ even within the same Kebele, technology-tailored studies that fit the conditions of each Kebele would help to identify and scale up best practices in agricultural water management.
Keywords: water management technology, adoption, adoption intensity, smallholders, technology tailored approach
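A minimal sketch of the Logit estimation step, with hypothetical variable names standing in for the determinants listed above; the Tobit model for adoption intensity would require a censored-regression likelihood and is only noted in a comment:

```python
# Hedged sketch: logit model of adoption of an agricultural water management technology.
import pandas as pd
import statsmodels.api as sm

hh = pd.read_csv("household_survey.csv")     # hypothetical survey file and columns
X = hh[["income", "credit_access", "extension_contact", "active_labor",
        "education_years", "distance_to_farm"]]
y = hh["adopted"]                            # 1 = adopter, 0 = non-adopter

logit = sm.Logit(y, sm.add_constant(X)).fit()
print(logit.summary())                       # signs and significance of determinants

# Adoption intensity (a censored outcome such as irrigated area) would call for a
# Tobit model, i.e. a censored-regression likelihood not shipped with statsmodels.
```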
Procedia PDF Downloads 452
3714 Platform Virtual for Joint Amplitude Measurement Based in MEMS
Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez
Abstract:
Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. Such systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment and medicine, to high-level competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform intended for joint amplitude monitoring and telerehabilitation processes, with an efficient compromise between cost and technical performance. The particularities of our platform offer high social impact possibilities by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is not available or is non-existent. The platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with adequate interfaces to make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the virtual platform system presented a good fit to the respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either sense the estimation error is negligible; in fact, error appears only during sense inversion, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay which acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of sense in motion. A damped response of the virtual platform can be seen in other images, where error analysis shows that at maximum amplitude an underestimation of amplitude is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility relations optimized. These characteristics, achieved by efficiently using state-of-the-art accessible generic technology in sensors and hardware, and adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large and more or less marginal populations where technologies and specialists are not available but which can be reached with basic communication networks.
Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation
Procedia PDF Downloads 259
3713 A Theoretical Study on Pain Assessment through Human Facial Expression
Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee
Abstract:
Facial expression is an unmistakable part of human behavior. It is a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expression that are readily observable to others. A core set of facial actions is likely to occur, or to increase in intensity, when people are in pain. To describe such changes in facial appearance, the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a small set of such actions carries the bulk of the information about pain, and on this basis the Prkachin and Solomon pain intensity (PSPI) metric is defined. It is therefore important to note that facial expressions, as a behavioral source in communication, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, i.e., facial expression. This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of pain that a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological point of view and a pain intensity score using the PSPI metric in pain estimation. The paper provides an in-depth analysis of the different approaches used in pain estimation and presents the observations found for each technique. It also offers a brief study of the distinguishing features of real and fake pain. The necessity of the study lies in the emerging field of painful face assessment in clinical settings.
Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)
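As a concrete illustration, the PSPI score is commonly reported in the pain literature as a sum of FACS action-unit intensities, PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43; a minimal sketch with illustrative values:

```python
# Hedged sketch of the Prkachin and Solomon pain intensity (PSPI) score.
def pspi(au4, au6, au7, au9, au10, au43):
    """AU4, AU6, AU7, AU9, AU10 on 0-5 intensity scales; AU43 (eyes closed) is 0/1."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

# Example: moderate brow lowering, some cheek raising, no eye closure -> score 6
print(pspi(au4=3, au6=2, au7=1, au9=0, au10=1, au43=0))
```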
Procedia PDF Downloads 345
3712 Estimating the Impact of Appliance Energy Efficiency Improvement on Residential Energy Demand in Tema City, Ghana
Authors: Marriette Sakah, Samuel Gyamfi, Morkporkpor Delight Sedzro, Christoph Kuhn
Abstract:
Ghana is experiencing rapid economic development and its cities command an increasingly dominant role as centers of both production and consumption. Cities run on energy and are extremely vulnerable to energy scarcity, energy price escalations and health impacts of very poor air quality. The overriding concern in Ghana and other West African states is bridging the gap between energy demand and supply. Energy efficiency presents a cost-effective solution for supply challenges by enabling more coverage with current power supply levels and reducing the need for investment in additional generation capacity and grid infrastructure. In Ghana, major issues for energy policy formulation in residential applications include lack of disaggregated electrical energy consumption data and lack of thorough understanding with regards to socio-economic influences on energy efficiency investment. This study uses a bottom up approach to estimate baseline electricity end-use as well as the energy consumption of best available technologies to enable estimation of energy-efficiency resource in terms of relative reduction in total energy use for Tema city, Ghana. A ground survey was conducted to assess the probable consumer behavior in response to energy efficiency initiatives to enable estimation of the amount of savings that would occur in response to specific policy interventions with regards to funding and incentives provision targeted at households. Results show that 16% - 54% reduction in annual electricity consumption is reasonably achievable depending on the level of incentives provision. The saved energy could supply 10000 - 34000 additional households if the added households use only best available technology. Political support and consumer awareness are necessary to translate energy efficiency resources into real energy savings.Keywords: achievable energy savings, energy efficiency, Ghana, household appliances
Procedia PDF Downloads 212
3711 Production Planning for Animal Food Industry under Demand Uncertainty
Authors: Pirom Thangchitpianpol, Suttipong Jumroonrut
Abstract:
This research investigates the distribution of demand for animal food and the optimum amount of food production at minimum cost. The data consist of customer purchase orders for laying-hen food, the price of laying-hen food, the cost per unit of food inventory, and the costs incurred when the food is out of stock, such as fines, overtime, and urgent purchases of material. They were collected from January 1990 to December 2013 from a factory in Nakhon Ratchasima province. The collected data are analyzed in order to explore the distribution of the monthly food demand for laying hens and to determine the inventory rate per unit. The results are used in a stochastic linear programming model for aggregate planning from which the optimum production, or minimum cost, can be obtained. Programming algorithms in MATLAB and the linprog tool are used to obtain the solution. The distribution of the food demand for laying hens and the random numbers are used in the model. The study shows that the monthly food demand for laying hens follows a normal distribution, and the monthly average production amount (unit: 30 kg) is estimated for January through December. The minimum average total cost for 12 months is Baht 62,329,181.77. Therefore, the production planning can reduce the cost by 14.64% compared with the actual cost.
Keywords: animal food, stochastic linear programming, aggregate planning, production planning, demand uncertainty
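A minimal, deterministic sketch of the aggregate-planning linear program; the study's model is stochastic and was solved with MATLAB's linprog, whereas here SciPy's linprog is used and all demands, costs and capacities are hypothetical:

```python
# Hedged sketch: three-month production/inventory plan at minimum cost.
import numpy as np
from scipy.optimize import linprog

demand = np.array([900, 1100, 1000])       # units (30 kg bags) per month, illustrative
c_prod, c_hold = 250.0, 12.0               # cost per unit produced / held, illustrative

# Decision vector: [p1, p2, p3, i1, i2, i3] = monthly production and ending inventory
c = [c_prod] * 3 + [c_hold] * 3
# Inventory balance i_t = i_{t-1} + p_t - d_t, with zero initial inventory
A_eq = np.array([[1, 0, 0, -1,  0,  0],
                 [0, 1, 0,  1, -1,  0],
                 [0, 0, 1,  0,  1, -1]])
res = linprog(c, A_eq=A_eq, b_eq=demand,
              bounds=[(0, 1500)] * 3 + [(0, None)] * 3)
print(res.x, res.fun)   # optimal plan and its total cost

# A stochastic version would sample demand from the fitted normal distribution
# and summarize the resulting plans and costs over many draws.
```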
Procedia PDF Downloads 377
3710 The Dynamics of a 3D Vibrating and Rotating Disc Gyroscope
Authors: Getachew T. Sedebo, Stephan V. Joubert, Michael Y. Shatalov
Abstract:
The conventional configuration of the vibratory disc gyroscope is based on in-plane non-axisymmetric vibrations of the disc with a prescribed circumferential wave number. Due to Bryan's effect, the vibrating pattern of the disc becomes sensitive to the axial component of the inertial rotation of the disc. Rotation of the vibrating pattern relative to the disc is proportional to the inertial angular rate and is measured by sensors. In the present paper, the authors investigate the possibility of making a 3D sensor on the basis of both in-plane and bending vibrations of the disc resonator. We derive equations of motion for the disc vibratory gyroscope in which both in-plane and bending vibrations are considered. Hamilton's variational principle is used in setting up the equations of motion and the corresponding boundary conditions. The theory of thin shells together with linear elasticity principles is used in formulating the problem, and the disc is assumed to be isotropic and to obey Hooke's law. The governing equation for a specific mode is converted to an ODE to determine the eigenfunction. The resulting ODE has an exact solution as a linear combination of Bessel and Neumann functions. We demonstrate how to obtain an explicit solution, and hence the eigenvalues and corresponding eigenfunctions, for an annular disc with a fixed inner boundary and a free outer boundary. Finally, the characteristic equations are obtained and the corresponding eigenvalues are calculated. The eigenvalues are used for the calculation of the tuning conditions of the 3D disc vibratory gyroscope.
Keywords: Bryan’s effect, bending vibrations, disc gyroscope, eigenfunctions, eigenvalues, tuning conditions
Procedia PDF Downloads 320
3709 An Assessment of Health Hazards in Urban Communities: A Study of Spatial-Temporal Variations of Dengue Epidemic in Colombo, Sri Lanka
Authors: U. Thisara G. Perera, C. M. Kanchana N. K. Chandrasekara
Abstract:
Dengue is an epidemic spread by Aedes aegypti and Aedes albopictus mosquitoes. Dengue cases show a dramatic growth of the epidemic in urban and semi-urban areas, especially in tropical and sub-tropical regions of the world. The incidence of dengue has become a prominent reason for hospitalization and deaths in Asian countries, including Sri Lanka. During the last decade the dengue epidemic began to spread from urban to semi-urban and then to rural settings of the country. The highest number of dengue-infected patients in Sri Lanka was recorded in 2016, and the highest number of patients was identified in Colombo district. Together with the commercial, industrial, and other supporting services, the district suffers from rapid urbanization and high population density. Thus, the drainage and waste disposal patterns of the people in this area exert additional pressure on the environment. The district is situated in the wet zone, and low-lying lands therefore constitute the largest portion of the district; this situation further facilitates mosquito breeding sites. The purpose of the present study was to assess the spatial and temporal distribution patterns of the dengue epidemic in the Kolonnawa MOH (Medical Officer of Health) area in the district of Colombo. The study was carried out using 615 recorded dengue cases in the Kolonnawa MOH area during the southeast monsoon season from May to September 2016. Moran's I and kernel density estimation were used as analytical methods. The analysis of the data was accomplished through the integrated use of the ArcGIS 10.1 software package along with Microsoft Excel analytical tools. Field observation was also carried out for verification purposes during the study period. Results of the Moran's I index indicate that the spatial distribution of dengue cases follows a clustered pattern across the area. Kernel density estimation shows that dengue cases are high where the population is concentrated, especially in areas comprising housing schemes. The kernel density estimation further discloses that hot spots of the dengue epidemic are located in the western half of the Kolonnawa MOH area, close to the Colombo municipal boundary, and that there is a significant relationship with high population density and unplanned urban land use practices. Results of the field observation confirm that the drainage systems in these areas function poorly and that careless waste disposal methods further encourage mosquito breeding sites. This situation has evolved from a public health issue into a social problem, which ultimately impacts the economy and social life of the country.
Keywords: dengue epidemic, health hazards, kernel density, Moran’s I, Sri Lanka
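A minimal sketch of a global Moran's I calculation of the kind reported above, using an inverse-distance weight matrix; the coordinates and case counts are purely illustrative (the study itself used ArcGIS 10.1):

```python
# Hedged sketch: global Moran's I for case counts at area centroids.
import numpy as np

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
cases = np.array([12.0, 15.0, 11.0, 14.0, 2.0])

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = np.where(d > 0, 1.0 / np.maximum(d, 1e-9), 0.0)   # inverse-distance weights

z = cases - cases.mean()
n, S0 = len(cases), W.sum()
I = (n / S0) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {I:.3f} (expectation under no autocorrelation: {-1 / (n - 1):.3f})")
```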
Procedia PDF Downloads 300
3708 The Use of Geographic Information System and Spatial Statistic for Analyzing Leukemia in Kuwait for the Period of 2006-2012
Authors: Muhammad G. Almatar, Mohammad A. Alnasrallah
Abstract:
This research focuses on three main issues: 1) the temporal analysis of leukemia over a period of six years (2006-2012); 2) spatial analysis, by investigating this phenomenon in Kuwaiti society, particularly in the residential areas within the six governorates; and 3) the use of Geographic Information System technology to investigate the research hypothesis and its variables using linear regression, to show the pattern of the linear relationship. The study relies on maps to understand the distribution of blood cancer in Kuwait. Several geodatabases were created for the number of patients and for air pollution, and spatial interpolation models were used to generate layers of air pollution in the study area. These geodatabases were examined over the six-year period to answer the question: is there a statistically significant relationship between the two main variables of the study, blood cancer and air pollution? To the best of our knowledge, this study is the first of its kind: the distribution of this disease has not previously been studied geographically at the level of regions in Kuwait over six years and in the specific areas described above. The study found that there is no significant relationship between the two variables studied, which may be due to the nature of the disease, which is often hereditary. On the other hand, the study offers a number of suggestions and recommendations that may be useful to decision-makers and to those interested in the study of leukemia in Kuwait, by focusing on the study of genetic diseases, which may be a cause of leukemia rather than air pollution.
Keywords: Kuwait, GIS, cancer, geography
Procedia PDF Downloads 113
3707 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation
Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber
Abstract:
Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as the AFCI (arc fault circuit interrupter) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices do not allow series arc faults to be located on the line while it is in operation. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V-50 Hz home network. The method is validated through simulations using the MATLAB software. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In the first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created by using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the discrete fast Fourier transform of the currents and voltages and also the fault distance value. These parameters are then substituted into the Kirchhoff equations. In the third step, the same procedure described previously for calculating signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step the fault distance is unknown. The iterative calculation from the Kirchhoff equations, considering stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing the current obtained from simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load. In addition, 11 different arc fault positions are considered for the map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of the work are presented.
Keywords: indoor power line, fault location, fault map trace, series arc fault
Procedia PDF Downloads 137
3706 Understanding Hydrodynamic in Lake Victoria Basin in a Catchment Scale: A Literature Review
Authors: Seema Paul, John Mango Magero, Prosun Bhattacharya, Zahra Kalantari, Steve W. Lyon
Abstract:
The purpose of this review paper is to develop an understanding of lake hydrodynamics and the potential climate impact at the Lake Victoria (LV) catchment scale. The paper briefly discusses the main problems of lake hydrodynamics and their solutions as they relate to quality assessment and climate effects. An empirical methodology in modeling and mapping is considered for understanding lake hydrodynamics and for visualizing long-term observational daily, monthly, and yearly mean datasets using geographical information system (GIS) and COMSOL techniques. Data were obtained for the whole lake and for five different meteorological stations, and several geoprocessing tools with spatial analysis were used to produce the results. Linear regression analyses were developed to build climate scenarios and to derive a linear trend in long-term lake rainfall data. The potential evapotranspiration rate has been described using MODIS data and the Thornthwaite method. The rainfall effect on lake water level is described by partial differential equations (PDE), and water quality is characterized by a few nutrient parameters. The study revealed that monthly and yearly rainfall varies with monthly and yearly maximum and minimum temperatures; rainfall is high during cool years, while high temperatures are associated with below-average and average rainfall patterns. Rising temperatures are likely to accelerate evapotranspiration rates, and more evapotranspiration is likely to lead to more rainfall; drought is more strongly correlated with temperature, and cloud cover is more strongly correlated with rainfall. There is a trend in lake rainfall, and long-term rainfall on the lake water surface has affected the lake level. Onshore and offshore nutrient concentrations have been characterized from initial literature data. The study recommends that further work should consider full development of the lake bathymetry with flow analysis and its water balance, hydro-meteorological processes, solute transport, wind hydrodynamics, pollution and eutrophication, as these are crucial for lake water quality, climate impact assessment, and water sustainability.
Keywords: climograph, climate scenarios, evapotranspiration, linear trend flow, rainfall event on LV, concentration
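A minimal sketch of the Thornthwaite potential evapotranspiration calculation mentioned above, omitting the day-length/month-length correction factor; the monthly temperatures are illustrative, not Lake Victoria observations:

```python
# Hedged sketch: uncorrected monthly PET (mm) by the Thornthwaite method.
def thornthwaite_pet(monthly_temp_c):
    I = sum((t / 5.0) ** 1.514 for t in monthly_temp_c if t > 0)   # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (10.0 * max(t, 0.0) / I) ** a for t in monthly_temp_c]

temps = [22.5, 23.0, 23.2, 22.8, 22.0, 21.5, 21.3, 21.9, 22.4, 22.7, 22.6, 22.4]
print([round(p, 1) for p in thornthwaite_pet(temps)])
```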
Procedia PDF Downloads 97
3705 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
Spectral Clustering (SC) is one of the currently popular clustering techniques because of its advantages over conventional approaches such as hierarchical clustering, k-means, and other techniques. However, one of the disadvantages of SC is that it is time consuming, because it requires computing eigenvectors. To overcome this disadvantage, a number of approaches have been proposed, such as the Power Iteration Clustering (PIC) technique, which is a variant of SC. Some of PIC's advantages are: 1) its scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing the eigenvectors, and 3) obtaining a linear combination of the eigenvectors in linear time. However, its worst disadvantage is an inter-class collision problem, because it uses only one pseudo-eigenvector, which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome the inter-class collision problem of the PIC technique while retaining the efficiency of PIC. In this paper, we developed Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG, and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and better running time than the other compared algorithms.
Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache Spark, large graph
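A minimal sketch of basic power iteration clustering, the single pseudo-eigenvector method that DPIC and PDPIC extend, on a small illustrative affinity matrix (not a Spark implementation):

```python
# Hedged sketch: PIC = repeated application of the row-normalized affinity matrix,
# early stopping on the "acceleration" of the iterate, then k-means on the 1-D embedding.
import numpy as np
from sklearn.cluster import KMeans

A = np.array([[0, 5, 4, 0, 0],        # illustrative affinities with two obvious
              [5, 0, 5, 0, 0],        # groups {0, 1, 2} and {3, 4}
              [4, 5, 0, 1, 0],
              [0, 0, 1, 0, 6],
              [0, 0, 0, 6, 0]], dtype=float)

W = A / A.sum(axis=1, keepdims=True)            # D^-1 A
v = A.sum(axis=1) / A.sum()                      # initial vector
prev_delta = np.zeros_like(v)
for _ in range(100):
    v_new = W @ v
    v_new /= np.abs(v_new).sum()                 # L1 normalization
    delta = np.abs(v_new - v)
    if np.abs(delta - prev_delta).max() < 1e-7:  # stop once convergence is locally flat
        v = v_new
        break
    v, prev_delta = v_new, delta

labels = KMeans(n_clusters=2, n_init=10).fit_predict(v.reshape(-1, 1))
print(labels)    # the pseudo-eigenvector entries should separate the two groups
```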
Procedia PDF Downloads 188
3704 Variations of Total Electron Content over High Latitude Region during the 24th Solar Cycle
Authors: Arun Kumar Singh, Rupesh M. Das, Shailendra Saini
Abstract:
The effect of the solar cycle and of the seasons on the total electron content has been investigated over a high-latitude region during the 24th solar cycle (2010-2014). The total electron content data were obtained with the Global Ionospheric Scintillation and TEC Monitoring (GISTM) system installed at the Indian permanent scientific 'Maitri station' [70˚46’00”S 11˚43’56”E]. The dependence of TEC on the solar cycle has been examined by performing a linear regression analysis between the vertical total electron content (VTEC) and the daily total sunspot number (SSN). It has been found that the season and the level of geomagnetic activity have a considerable effect on the VTEC. The VTEC and SSN show better agreement during the summer seasons than during the winter and equinox seasons, and exceptionally good agreement during the minimum phase of the solar cycle (the year 2010). The correlation between VTEC and SSN is stronger during quiet days than over all days of the years 2010-2014. Further, a saturation effect has been observed during the maximum phase of the 24th solar cycle (the year 2014). It is also found that the Ap index and SSN have a linear correlation (R=0.37) and that most of the geomagnetic activity occurs during the declining phase of the solar cycle.
Keywords: high latitude ionosphere, sunspot number, correlation, vertical total electron content
Procedia PDF Downloads 192
3703 Compact Optical Sensors for Harsh Environments
Authors: Branislav Timotijevic, Yves Petremand, Markus Luetzelschwab, Dara Bayat, Laurent Aebi
Abstract:
Optical miniaturized sensors with remote readout are required devices for the monitoring in harsh electromagnetic environments. As an example, in turbo and hydro generators, excessively high vibrations of the end-windings can lead to dramatic damages, imposing very high, additional service costs. A significant change of the generator temperature can also be an indicator of the system failure. Continuous monitoring of vibrations, temperature, humidity, and gases is therefore mandatory. The high electromagnetic fields in the generators impose the use of non-conductive devices in order to prevent electromagnetic interferences and to electrically isolate the sensing element to the electronic readout. Metal-free sensors are good candidates for such systems since they are immune to very strong electromagnetic fields and given the fact that they are non-conductive. We have realized miniature optical accelerometer and temperature sensors for a remote sensing of the harsh environments using the common, inexpensive silicon Micro Electro-Mechanical System (MEMS) platform. Both devices show highly linear response. The accelerometer has a deviation within 1% from the linear fit when tested in a range 0 – 40 g. The temperature sensor can provide the measurement accuracy better than 1 °C in a range 20 – 150 °C. The design of other type of sensors for the environments with high electromagnetic interferences has also been discussed.Keywords: optical MEMS, temperature sensor, accelerometer, remote sensing, harsh environment
Procedia PDF Downloads 364
3702 Correlations between Wear Rate and Energy Dissipation Mechanisms in a Ti6Al4V–WC/Co Sliding Pair
Authors: J. S. Rudas, J. M. Gutiérrez Cabeza, A. Corz Rodríguez, L. M. Gómez, A. O. Toro
Abstract:
The prediction of the wear rate of rubbing pairs has attracted the interest of many researchers for years. It has recently been proposed that the sliding wear rate can be inferred from the calculation of the energy rate dissipated by the tribological pair. In this paper some of the dissipative mechanisms present in a pin-on-disc configuration are discussed, and both analytical and numerical calculations are carried out. Three dissipative mechanisms were studied: first, the energy release due to temperature gradients within the solid; second, the heat flow from the solid to the environment; and third, the energy loss due to abrasive damage of the surface. The finite element method was used to calculate the dynamics of heat transfer within the solid, with the aid of commercial software. Validation of the FEM model was assisted by virtual and laboratory experimentation using different operating points (sliding velocity and contact geometry). The materials for the experiments were Ti6Al4V alloy and tungsten carbide (WC-Co). The results showed that the sliding wear rate has a linear relationship with the energy dissipation flow. It was also found that energy loss due to micro-cutting is relevant for the system; this mechanism changes if the sliding velocity and pin geometry are modified, although the degradation coefficient continues to exhibit linear behavior. We found that the least relevant dissipation mechanism for all the cases studied is the energy release by temperature gradients in the solid.
Keywords: degradation, dissipative mechanism, dry sliding, entropy, friction, wear
Procedia PDF Downloads 502
3701 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.Keywords: cost prediction, machine learning, project management, random forest, neural networks
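A minimal sketch of the Random Forest part of such a workflow, with hypothetical activity-level features and file names rather than the study's dataset:

```python
# Hedged sketch: random forest regression of activity cost overruns, with feature
# importances used to flag likely cost drivers.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

data = pd.read_csv("activity_records.csv")               # hypothetical dataset
X = data[["planned_cost", "planned_duration", "scope_changes",
          "material_delay_days", "crew_size"]]
y = data["cost_overrun"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_tr, y_tr)

print("MAE:", mean_absolute_error(y_te, rf.predict(X_te)))
print(dict(zip(X.columns, rf.feature_importances_.round(3))))
```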
Procedia PDF Downloads 51
3700 Mapping Man-Induced Soil Degradation in Armenia's High Mountain Pastures through Remote Sensing Methods: A Case Study
Authors: A. Saghatelyan, Sh. Asmaryan, G. Tepanosyan, V. Muradyan
Abstract:
A major concern for Armenia has been soil degradation arising from the unsustainable management and use of grasslands, which in turn largely impacts the environment, agriculture and, ultimately, human health. Hence, the assessment of soil degradation is an essential and urgent objective, undertaken to measure its possible consequences and to develop a potential management strategy. In recent years, remote sensing (RS) technologies have become an essential tool for assessing pasture degradation. This research was carried out with the intention of measuring the precision of linear spectral unmixing (LSU) and NDVI-SMA methods for estimating soil surface components related to degradation (fractional vegetation cover - FVC, bare soil fractions, surface rock cover) and of determining the appropriateness of these methods for mapping man-induced soil degradation in high mountain pastures. Taking into consideration the spatially complex and heterogeneous biogeophysical structure of the studied site, we used high-resolution multispectral QuickBird imagery of a pasture site in one of Armenia's rural communities, Nerkin Sasoonashen. The accuracy assessment was done by comparing the land cover abundance data derived through RS methods with the ground-truth land cover abundance data. A significant regression was established between the ground-truth FVC estimate and both the NDVI-LSU and the LSU-produced vegetation abundance data (R2=0.636 and R2=0.625, respectively). For bare soil fractions, linear regression produced an overall coefficient of determination of R2=0.708. Because of the poor spectral resolution of the QuickBird imagery, LSU failed in the assessment of surface rock abundance (R2=0.015). This research documents that a reduction in vegetation cover runs in parallel with an increase in man-induced soil degradation, whereas in the absence of man-induced soil degradation the bare soil fraction does not exceed a certain level. The outcomes show that the proposed method of assessing man-induced soil degradation through FVC, bare soil fractions and field data adequately reflects the current status of soil degradation throughout the studied pasture site and may be employed as an alternative to more complicated models for soil degradation assessment.
Keywords: Armenia, linear spectral unmixing, remote sensing, soil degradation
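A minimal sketch of the per-pixel linear spectral unmixing step, solved here with non-negative least squares; the endmember spectra and pixel reflectances are illustrative, not QuickBird values:

```python
# Hedged sketch: estimate vegetation / bare-soil / rock fractions for one pixel.
import numpy as np
from scipy.optimize import nnls

endmembers = np.array([[0.04, 0.07, 0.05, 0.45],     # vegetation (4 bands)
                       [0.12, 0.16, 0.22, 0.30],     # bare soil
                       [0.20, 0.22, 0.24, 0.26]]).T  # rock -> shape (bands, endmembers)
pixel = np.array([0.09, 0.12, 0.14, 0.36])

fractions, residual = nnls(endmembers, pixel)
fractions /= fractions.sum()                          # approximate sum-to-one constraint
print(dict(zip(["vegetation", "bare_soil", "rock"], fractions.round(2))))
```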
Procedia PDF Downloads 326
3699 X-Ray Dosimetry by a Low-Cost Current Mode Ion Chamber
Authors: Ava Zarif Sanayei, Mustafa Farjad-Fard, Mohammad-Reza Mohammadian-Behbahani, Leyli Ebrahimi, Sedigheh Sina
Abstract:
The fabrication and testing of a low-cost air-filled ion chamber for X-ray dosimetry is studied. The chamber is made of a metal cylinder, a central wire, a BC517 Darlington transistor, a 9V DC battery, and a voltmeter in order to have a cost-effective means to measure the dose. The output current of the dosimeter is amplified by the transistor and then fed to the large internal resistance of the voltmeter, producing a readable voltage signal. The dose-response linearity of the ion chamber is evaluated for different exposure scenarios by the X-ray tube. kVp values 70, 90, and 120, and mAs up to 20 are considered. In all experiments, a solid-state dosimeter (Solidose 400, Elimpex Medizintechnik) is used as a reference device for chamber calibration. Each case of exposure is repeated three times, the voltmeter and Solidose readings are recorded, and the mean and standard deviation values are calculated. Then, the calibration curve, derived by plotting voltmeter readings against Solidose readings, provided a linear fit result for all tube kVps of 70, 90, and 120. A 99, 98, and 100% linear relationship, respectively, for kVp values 70, 90, and 120 are demonstrated. The study shows the feasibility of achieving acceptable dose measurements with a simplified setup. Further enhancements to the proposed setup include solutions for limiting the leakage current, optimizing chamber dimensions, utilizing electronic microcontrollers for dedicated data readout, and minimizing the impact of stray electromagnetic fields on the system.Keywords: dosimetry, ion chamber, radiation detection, X-ray
Procedia PDF Downloads 76
3698 Nonlinear Finite Element Analysis of Concrete Filled Steel I-Girder Bridge
Authors: Waheed Ahmad Safi, Shunichi Nakamura
Abstract:
A concrete-filled steel I-girder (CFIG) bridge was proposed, and its bending and shear strength were confirmed by experiments. In the CFIG, the area surrounded by the upper and lower flanges and the web is filled with concrete, and the section is used at the intermediate support of a continuous girder. Three-dimensional finite element models were established to simulate the bending and shear behaviors of the CFIG and to clarify the load transfer mechanism. The steel plates and the filled concrete were modeled as three-dimensional 8-node solid elements and the steel reinforcement bars as three-dimensional 2-node truss elements. The elements were mostly divided into a 50 x 50 mm mesh. A non-linear stress-strain relation is assumed for concrete in compression, including the softening effect after the peak; for concrete in tension the stress increases linearly until cracking and then decreases due to the tension stiffening effect. The stress-strain relation for the steel plates was tri-linear and that for the reinforcement was bi-linear. The concrete and the steel plates were rigidly connected. The developed FEM model was applied to simulate and analyze the bending behavior of the CFIG specimens. The vertical displacements and the strains of the steel plates and the filled concrete obtained by FEM agreed very well with the test results up to the yield load. The specimens collapsed when the upper flange buckled or the concrete spalled off; these phenomena cannot be properly analyzed by FEM, which produces a small discrepancy at the ultimate states. The FEM model was also applied to simulate and analyze the shear tests of the CFIG specimens. The vertical displacements and strains of steel and concrete calculated by the FEM model agreed well with the test results. A truss action was confirmed by the FEM and the experiment, clarifying that shear forces were mainly resisted by the tension strut of the steel plate and the compression strut of the filled concrete acting in the diagonal direction. A trial design with the CFIG was carried out for a four-span continuous highway bridge and the design method was established. The construction cost was estimated to be about 12% lower than that of a conventional steel I-section girder.
Keywords: concrete filled steel I-girder, bending strength, FEM, limit states design, steel I-girder, shear strength
Procedia PDF Downloads 217
3697 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the acidity quantification in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation having an essential role for the control of the nuclear fuel recycling process. The main objective behind the technical optimization of the actual ‘beaker’ method was to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease the instrumentation adjustability within a glove-box environment and to allow a high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of the Taylor-Aris dispersion in order to create inside of a 200 μm x 5cm circular cylindrical micro-channel a linear concentration gradient in less than a second. The proposed analytical methodology relies on the actinide complexation using pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500nm- 600nm thanks to the addition of a pH sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro channel. This feature simplifies the fabrication and ease of use of the micro device, as it does not need a complex micro channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the liquid reagents input pressure, its generation can be fully achieved in faster intervals than one second, being a more timely-efficient gradient generation process compared to other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform for the first time, a volumetric titration on a chip where the amount of reagents used is fixed to the total volume of the micro channel, avoiding an important waste generation like in other flow-based titration techniques. The associated analytical method is automated and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5M of actinide ion and nitric acid in a concentration range of 0.5M to 3M. In addition to automation, the developed analytical methodology and technique greatly improves the standard off-line oxalate complexation and alkalimetric titration method by reducing a thousand fold the required sample volume, forty times the nuclear waste per analysis as well as the analysis time by eight-fold. The developed device represents, therefore, a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
Procedia PDF Downloads 385
3696 Fault Analysis of Ship Power System Comprising of Parallel Generators and Variable Frequency Drive
Authors: Umair Ashraf, Kjetil Uhlen, Sverre Eriksen, Nadeem Jelani
Abstract:
Although advances in technology have increased the reliability and ease of operation of ship power systems, these advances are also adding complexity. Ever-increasing non-linear loads, such as power electronics (PE) devices, affect the stability of the system. Frequent load variations and complex load dynamics arise from the frequency converters and motor drives, and these problems are more prominent when the system is connected to a weak grid. In the ship power system, the major consumers are the thruster motors for propulsion. Variable frequency drives (VFDs) are used for the control of these motors, and the VFDs mostly operate at the nominal voltage of the system. Some of the consumers on board operate at a voltage lower than nominal; these consumers are supplied through step-down transformers. In this paper, a vector control scheme is used for the control of both the rectifier and the inverter, and the parallel operation of the synchronous generators is also demonstrated. The simulations have been performed with an induction motor as the load on the VFD and a parallel RLC load. Fault analysis has been performed first for the system without the VFD and then for the system with the VFD. Three-phase-to-ground and single-phase-to-ground faults were implemented, and the behavior of the system in both cases was observed.
Keywords: non-linear load, power electronics, parallel operating generators, pulse width modulation, variable frequency drives, voltage source converters, weak grid
Procedia PDF Downloads 567
3695 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
Procedia PDF Downloads 36
3694 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval
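A minimal sketch of a beta-binomial posterior predictive calculation with HPD-style limits, using illustrative numbers rather than actual clinical indicator data or the authors' exact chart construction:

```python
# Hedged sketch: posterior predictive for the number of events in the next n cases,
# given y events in m past cases and a Beta(a0, b0) prior on the CI rate.
from scipy.stats import betabinom

a0, b0 = 1.0, 1.0           # prior (illustrative)
y, m = 8, 200               # historical events / cases (illustrative)
n = 50                      # cases in the next monitoring period

dist = betabinom(n, a0 + y, b0 + m - y)              # beta-binomial posterior predictive
pmf = sorted(((k, dist.pmf(k)) for k in range(n + 1)),
             key=lambda kp: kp[1], reverse=True)

covered, hpd = 0.0, []
for k, p in pmf:                                     # keep the most probable counts
    hpd.append(k)
    covered += p
    if covered >= 0.99:
        break
print("HPD-style control region:", min(hpd), "to", max(hpd), f"(coverage {covered:.3f})")
```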
Procedia PDF Downloads 200