Search results for: least-squares estimation
503 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation
Authors: H. Khanfari, M. Johari Fard
Abstract:
Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause properties to vary throughout the reservoir. A good estimation of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, can help in modeling the reservoir and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties based on the similarities of the environments in which different beds were deposited. To illustrate the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum engineering from that point of view. In this paper, we correlated the statistical-based Lorenz method to a petroleum concept, i.e., the Kozeny-Carman equation, and derived the straight-line Lorenz plot for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in South Iran and discussed each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L)
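The Lorenz coefficient described above can be computed directly from core data by ranking intervals by permeability and measuring the departure of the flow-capacity curve from the homogeneous straight line. A minimal sketch (the function name, the trapezoidal integration, and the simple k-ordering are our assumptions, not the paper's code):

```python
def lorenz_coefficient(perm, thickness):
    """Lorenz coefficient Lc from core permeability and interval thickness.

    Intervals are ranked by decreasing permeability; the Lorenz curve plots
    cumulative flow capacity (k*h) against cumulative storage capacity (h).
    Lc = 2 * (area under the curve - 0.5): 0 for a homogeneous system,
    approaching 1 for extreme heterogeneity.
    """
    pairs = sorted(zip(perm, thickness), key=lambda p: p[0], reverse=True)
    total_kh = sum(k * h for k, h in pairs)
    total_h = sum(h for _, h in pairs)
    xs, ys = [0.0], [0.0]
    cum_kh = cum_h = 0.0
    for k, h in pairs:
        cum_kh += k * h
        cum_h += h
        xs.append(cum_h / total_h)
        ys.append(cum_kh / total_kh)
    # trapezoidal area under the piecewise-linear Lorenz curve
    area = sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2.0
               for i in range(len(xs) - 1))
    return 2.0 * (area - 0.5)
```

A uniform permeability profile returns 0, matching the straight-line plot the paper derives for a homogeneous system; ranking by flow capacity over storage capacity is a common variant.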
Procedia PDF Downloads 222
502 Bacteriological and Mineral Analyses of Leachate Samples from Erifun Dumpsite, Ado-Ekiti, Ekiti State, Nigeria
Authors: Adebowale T. Odeyemi, Oluwafemi A. Ajenifuja
Abstract:
The leachate samples collected from Erifun dumpsite along Federal Polytechnic road, Ado-Ekiti, Ekiti State, were subjected to bacteriological and mineral analyses. The bacteriological estimation and isolation were done using serial dilution and pour-plating techniques. The antibiotic susceptibility test was done using the agar disc diffusion technique. Atomic Absorption Spectrophotometry was used to analyze the heavy metal contents of the leachate samples. The bacterial and coliform counts ranged from 4.2 × 10^5 CFU/ml to 2.97 × 10^6 CFU/ml and 5.0 × 10^4 CFU/ml to 2.45 × 10^6 CFU/ml, respectively. The isolated bacteria and their percentages of occurrence were Bacillus cereus (22%), Enterobacter aerogenes (18%), Staphylococcus aureus (16%), Proteus vulgaris (14%), Escherichia coli (14%), Bacillus licheniformis (12%) and Klebsiella aerogenes (4%). The mineral values ranged as follows: iron (21.30 mg/L - 25.60 mg/L), zinc (1.80 mg/L - 5.60 mg/L), copper (1.00 mg/L - 2.60 mg/L), chromium (0.50 mg/L - 1.30 mg/L), cadmium (0.20 mg/L - 1.30 mg/L), nickel (0.20 mg/L - 0.80 mg/L), lead (0.05 mg/L - 0.30 mg/L), cobalt (0.03 mg/L - 0.30 mg/L); manganese was not detected in any sample. All the isolated organisms exhibited a high level of resistance to most of the antibiotics used. There is an urgent need to create awareness about the present situation of the leachate in Erifun and the need to treat the nearby stream and other water sources before they are used for drinking and other domestic purposes. In conclusion, a good method of waste disposal is required in those communities to prevent leachate formation, percolation, and runoff into water bodies during the rainy season.
Keywords: antibiotic susceptibility, dumpsite, bacteriological analysis, heavy metal
Procedia PDF Downloads 142
501 Analysis of Tourism Development Level and Research on Improvement Strategies - Take Chongqing as an Example
Abstract:
As part of the tertiary industry, tourism is an important driving factor for urban economic development. Chongqing is a well-known tourist city in China; according to statistics, the added value of tourism and related industries reached 106.326 billion yuan in 2022, a year-on-year increase of 1.2%, accounting for 3.7% of the city's GDP. However, the overall tourism development level of Chongqing is seriously unbalanced: the tourism strength of the main urban area is much higher than that of southeast Chongqing, northeast Chongqing and the surrounding city tourism area, while the overall tourism strength of those three regions is relatively balanced among themselves. Based on the estimation of tourism development level and the geographic detector method, this paper finds that the important factor affecting the tourism development level of non-main urban areas in Chongqing is A-level tourist attractions. Through GIS geospatial analysis technology and the SPSS data correlation research method, the spatial distribution characteristics and influencing factors of A-level tourist attractions in Chongqing were quantitatively analyzed using data such as the geospatial data cloud, relevant documents of the Chongqing Municipal Commission of Culture and Tourism Development, planning cloud, and relevant statistical yearbooks. The results show that: (1) The spatial distribution of tourist attractions in non-main urban areas of Chongqing is agglomerated and uneven. (2) The spatial distribution of A-level tourist attractions in non-main urban areas of Chongqing is affected by ecological factors, in the order water factors > topographic factors > green space factors.
Keywords: tourist attractions, geographic detectors, quantitative research, ecological factors, GIS technology, SPSS analysis
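The geographic detector method cited above quantifies how much a stratifying factor explains spatial variance through the standard q-statistic, q = 1 - SSW/SST (within-strata sum of squares over total sum of squares). A minimal sketch under that textbook definition (function and variable names are illustrative, not the paper's code):

```python
def factor_detector_q(values, strata):
    """q-statistic of the geographical detector: q = 1 - SSW / SST.

    q = 1 means the strata fully explain the spatial variance of `values`;
    q = 0 means they explain none of it.
    """
    n = len(values)
    mean = sum(values) / n
    sst = sum((v - mean) ** 2 for v in values)  # total sum of squares
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    ssw = 0.0  # within-strata sum of squares
    for g in groups.values():
        gm = sum(g) / len(g)
        ssw += sum((v - gm) ** 2 for v in g)
    return 1.0 - ssw / sst
```

For example, attraction densities that are constant within each ecological stratum give q = 1, while strata that ignore the spatial pattern give q near 0.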
Procedia PDF Downloads 18
500 Estimation of Carbon Uptake of Seoul City Street Trees and Plans to Increase Carbon Uptake by Improving Species
Authors: Min Woo Park, Jin Do Chung, Kyu Yeol Kim, Byoung Uk Im, Jang Woo Kim, Hae Yeul Ryu
Abstract:
Nine representative species among all the street trees were selected to estimate the absorption of carbon dioxide by street trees in Seoul, calculating the biomass, amount of carbon stored, and annual absorption of carbon dioxide for each species. As of 2013, the planting distance of street trees in Seoul was 1,851,180 m, the number of planting lines was 1,287, the number of planted trees was 284,498, and 46 species of trees were planted. Applying the absorption of each species to the numbers of street trees in Seoul yields 120,097 tons of biomass, 60,049.8 tons of carbon stored, and an annual carbon dioxide absorption of 11,294 t CO2/year. The street ratio given in the 2022 road statistics for Seoul is 23.13%. If street trees are assumed to increase at the same rate, the number of street trees in Seoul was calculated to be 294,823, the planting distance was estimated to be 1,918,360 m, and the annual absorption of carbon dioxide was estimated at 11,704 t CO2/year. Plans for improving the annual absorption of carbon dioxide by street trees were established based on this expected absorption. The first is to increase the number of planted street trees by adjusting the planting distance: if the planting distance is adjusted to 6 m, 12,692.7 t CO2/year would be absorbed annually. The second is to change the species to tulip trees, which have a high absorption rate: if the proportion of tulip trees is increased to 30% by 2022, the annual absorption of carbon dioxide was calculated to be 17,804.4 t CO2/year.
Keywords: absorption of carbon dioxide, source of absorbing carbon dioxide, trees in city, improving species
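The projected figure of 11,704 t CO2/year is consistent with scaling the 2013 uptake in proportion to the projected number of trees, assuming the species mix stays fixed. A hedged check (the proportionality assumption and function name are ours):

```python
def projected_uptake(current_trees, projected_trees, current_uptake):
    """Scale annual CO2 uptake in proportion to the projected tree count,
    assuming the species composition stays unchanged."""
    return current_uptake * projected_trees / current_trees
```

Plugging in the paper's figures (284,498 trees absorbing 11,294 t CO2/year, growing to 294,823 trees) reproduces roughly 11,704 t CO2/year.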
Procedia PDF Downloads 363
499 Analysis of Earthquake Potential and Shock Level Scenarios in South Sulawesi
Authors: Takhul Bakhtiar
Abstract:
In South Sulawesi Province, the active Walanae Fault causes this area to frequently experience earthquakes. This study aims to determine the seismicity level of the region in order to estimate the potential for future earthquakes. A scenario model is then built from the estimated earthquake potential to determine the expected shock levels, as an effort to mitigate earthquake disasters in the region. The method used in this study is the Gutenberg-Richter method through the statistical likelihood approach, using earthquake data for the South Sulawesi region from 1972 to 2022. The research area is located at 3.5° – 5.5° South Latitude and 119.5° – 120.5° East Longitude and is divided into two segments: the northern segment at 3.5° – 4.5° South Latitude and 119.5° – 120.5° East Longitude, and the southern segment at 4.5° – 5.5° South Latitude and 119.5° – 120.5° East Longitude. This study uses earthquake parameters with magnitude > 1 and depth < 50 km. The results of the analysis show that the probability of an earthquake of magnitude M = 7 in the next ten years is estimated at 98.81% in the northern segment, with an estimated shock level of VI-VII MMI around the cities of Pare-Pare, Barru, Pinrang and Soppeng and IV-V MMI in the cities of Bulukumba, Selayar, Makassar and Gowa. In the southern segment, the probability of an M = 7 earthquake in the next ten years is estimated at 32.89%, with an estimated shock level of VI-VII MMI in the cities of Bulukumba, Selayar, Makassar and Gowa, and III-IV MMI around the cities of Pare-Pare, Barru, Pinrang and Soppeng.
Keywords: Gutenberg-Richter, likelihood method, seismicity, shakemap and MMI scale
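Gutenberg-Richter seismicity analysis via the likelihood approach typically rests on the Aki-Utsu maximum-likelihood estimator for the b-value. A minimal sketch (completeness handling is simplified; for binned catalogs half the magnitude bin width is usually subtracted from the minimum magnitude):

```python
import math

def b_value_mle(magnitudes, m_min):
    """Aki-Utsu maximum-likelihood b-value of the Gutenberg-Richter law
    log10 N = a - b*M, estimated as b = log10(e) / (mean(M) - m_min)."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)
```

For instance, a catalog with mean magnitude 0.5 units above the completeness magnitude yields b of about 0.87.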
Procedia PDF Downloads 120
498 Aerodynamic Modeling Using Flight Data at High Angle of Attack
Authors: Rakesh Kumar, A. K. Ghosh
Abstract:
The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff's quasi-steady stall model. The NGN method is an algorithm that utilizes a Feed Forward Neural Network (FFNN) and Gauss-Newton optimization to estimate the parameters; it does not require any a priori postulation of a mathematical model or solving of the equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before application to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and maximum likelihood estimates. Validation was also carried out by comparing the response of measured motion variables with the response generated using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained in terms of stall characteristics and aerodynamic parameters were encouraging and reasonably accurate, establishing NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling
Procedia PDF Downloads 447
497 Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways
Authors: Anwaar Ahmed, Muhammad Bilal Khurshid, Samuel Labi
Abstract:
The effect of trucks on level of service is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location; therefore, a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks to replace the single value provided in the HCM on different freeways. Site-specific PCE values were developed using the concept of spatial lagging headways (the distance from the rear bumper of a leading vehicle to the rear bumper of the following vehicle) measured from field traffic data. The study used data from four locations on a single urban freeway and three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predicted lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks for basic urban freeways (level terrain) were 1.35 and 1.60, respectively. For rural freeways the estimated PCE values for single-unit and combination trucks were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rates and speed have significant impacts on vehicle headways. Study results revealed that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
Keywords: level of service, capacity analysis, lagging headway, trucks
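A simplified illustration of the headway-based PCE idea: once mean spatial lagging headways are available for each vehicle class (here taken as given, rather than predicted by the study's 3SLS models), a PCE can be formed as the ratio of a truck class's headway to the passenger-car headway. The function and the numbers below are illustrative assumptions, not the study's calibrated values:

```python
def pce_from_headways(mean_truck_headway, mean_car_headway):
    """Headway-ratio passenger-car equivalent (illustrative):
    PCE = mean spatial lagging headway of the truck class
          / mean spatial lagging headway of passenger cars."""
    return mean_truck_headway / mean_car_headway
```

With a hypothetical car headway of 2.0 length units and a single-unit-truck headway of 2.7, the ratio recovers a PCE of 1.35, the order of magnitude reported for urban freeways.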
Procedia PDF Downloads 357
496 Artificial Intelligence and Law
Authors: Mehrnoosh Abouzari, Shahrokh Shahraei
Abstract:
With the development of artificial intelligence in the present age, intelligent machines and systems have proven their actual and potential capabilities and are increasing their presence in various fields of human life: industry, financial transactions, marketing, manufacturing, services, politics, economics and various branches of the humanities. Therefore, despite the conservatism and prudence of the legal profession, traces of artificial intelligence can be seen in various areas of law. Examples include estimating judicial robotics capability, intelligent judicial decision-making systems, adjusting defender and attorney strategies, and consolidating the different and scattered laws applicable to each case to achieve judicial coherence, reduce divergence of opinion, and reduce the prolonged hearings and discontent associated with the current legal system; designing rule-based, case-based and knowledge-based systems are all efforts to apply AI in law. In this article, we identify the ways in which AI is applied in laws and regulations, identify the dominant concerns in this area, and outline the relationship between these two fields, in order to answer the question of how artificial intelligence can be used in different areas of law and what the implications of this application will be. The authors believe that the use of artificial intelligence in the three branches of legislative, judicial and executive power can be very effective in government decisions and smart governance, helping to reach smart communities across human and geographical boundaries and toward humanity's long-held dream: a global village free of violence, partiality and human error.
Therefore, in this article, we analyze how artificial intelligence can be used in the three legislative, judicial and executive branches of government in order to realize its application.
Keywords: artificial intelligence, law, intelligent system, judge
Procedia PDF Downloads 119
495 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form
Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry
Abstract:
Two simple, selective, economical, safe, accurate, precise and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine), in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate and the developed color is measured at 490 nm, over a concentration range of 0.4-2 mg/mL with percentage recovery 100.05 ± 0.58 and correlation coefficient 0.9999. Method B is a reversed-phase ultra-performance liquid chromatography (UPLC) technique, which is considered superior to high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. Elution time for the separation was less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL with mean percentage recovery 100.11 ± 0.55 and correlation coefficient 0.9999. The proposed methods were fully validated as per International Conference on Harmonization specifications and effectively applied for the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods regarding accuracy and precision.
Keywords: hydroxamic acid, related substances, UPLC, valaciclovir
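Calibration figures such as the reported correlation coefficient of 0.9999 come from an ordinary least-squares fit of detector response against concentration. A minimal sketch of that fit (names and the toy numbers are illustrative):

```python
def calibration_line(conc, response):
    """Ordinary least-squares calibration line: returns (slope, intercept, r).

    slope = Sxy / Sxx, intercept = ybar - slope * xbar,
    r = Sxy / sqrt(Sxx * Syy) (Pearson correlation coefficient).
    """
    n = len(conc)
    mx = sum(conc) / n
    my = sum(response) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in response)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, response))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    return slope, intercept, r
```

Unknown concentrations are then read back as (response - intercept) / slope.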
Procedia PDF Downloads 247
494 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution is its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal, bathtub or reversed-bathtub shape) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the three-parameter lognormal distribution, the three-parameter gamma distribution, the three-parameter Weibull distribution, and the three-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
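Maximum-likelihood estimation for this family starts from the log-likelihood. The sketch below evaluates it for the classical two-parameter log-logistic density f(x) = (β/α)(x/α)^(β-1) / (1 + (x/α)^β)², not the generalized three-parameter form proposed in the paper; parameter names are ours:

```python
import math

def loglogistic_loglik(data, alpha, beta):
    """Log-likelihood of the two-parameter log-logistic distribution
    (alpha = scale, beta = shape) for uncensored observations."""
    ll = 0.0
    for x in data:
        z = (x / alpha) ** beta
        ll += (math.log(beta / alpha)
               + (beta - 1.0) * math.log(x / alpha)
               - 2.0 * math.log(1.0 + z))
    return ll
```

Maximizing this function (by Newton-type iterations or a generic optimizer) yields the MLEs; for a sample whose quantiles match log-logistic(α=2, β=3), the likelihood at the true parameters exceeds that at clearly wrong ones.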
Procedia PDF Downloads 202
493 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data
Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu
Abstract:
Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which accounts for the anomalous fluctuation scaling known as Taylor's law. The method is extended to handle sales data made incomplete by stock-outs, by introducing maximum likelihood estimation for censored data. A way to determine the optimal stock that prices the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss realizes half disposal at =0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction keeping a high profit, especially with large sales numbers.
Keywords: food waste reduction, particle filter, point-of-sales, sustainable development goals, Taylor's law, time series analysis
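Under Taylor's law, the fluctuation of sales numbers scales as σ² = a·μ^b, so an optimal stock for mean demand μ can be set as μ plus a safety margin proportional to σ. A minimal sketch of this idea only (the paper's particle-filter estimation of μ and its loss-function pricing are omitted; a, b and z are illustrative assumptions):

```python
def optimal_stock(mean_demand, a, b, z=1.64):
    """Stock level covering demand fluctuations under Taylor's law.

    Taylor's law: variance = a * mean**b, so sigma = sqrt(a) * mean**(b / 2).
    Stock = mean + z * sigma, where z sets the tolerated stock-out probability
    (z about 1.64 corresponds to roughly 5% under a Gaussian approximation).
    """
    sigma = (a * mean_demand ** b) ** 0.5
    return mean_demand + z * sigma
```

Note that for b near 2 the relative safety margin stays constant as sales grow, whereas for b near 1 (Poisson-like) it shrinks as 1/sqrt(μ).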
Procedia PDF Downloads 131
492 Political Deprivations, Political Risk and the Extent of Skilled Labor Migration from Pakistan: Findings of a Time-Series Analysis
Authors: Syed Toqueer Akhter, Hussain Hamid
Abstract:
Over the last few decades, an upward trend has been observed in labor migration from Pakistan. The emigrants are not just economically motivated; they are also in search of a safe living environment in the more developed countries of Europe, North America and the Middle East. The opportunity cost of migration comes in the form of brain drain, that is, the loss of qualified and skilled human capital. Throughout the history of Pakistan, situations of political instability have emerged, ranging from violations of political rights and political disappearances to political assassinations. Providing security to citizens is a major issue in Pakistan due to the increase in crime and terrorist activities. The aim of the study is to test the impact of political instability, appearing in the form of political terror, violation of political rights and loss of civil liberty, on the skilled migration of labor. Three proxies are used to measure political instability: the political terror scale (a scale of 1-5 describing the political terror and violence that a country encounters in a particular year), political rights (a rating of 1-7 describing the ability of people to participate without restraint in the political process) and civil liberty (a rating of 1-7, defined as freedom of expression and rights without government intervention). Using time series data from 1980-2011, distributed lag models were used for estimation, because migration is not a one-time process: previous events and migration can lead to more migration. Our research clearly shows that political instability, appearing in the form of political terror, political rights and civil liberty, is significant in explaining the extent of skilled migration from Pakistan.
Keywords: skilled labor migration, political terror, political rights, civil liberty, distributed lag model
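A finite distributed lag model regresses current migration on current and lagged values of an explanatory variable, y_t = β₀ + β₁·x_t + β₂·x_{t-1} + …, estimated by ordinary least squares. A self-contained sketch with a tiny normal-equations solver (the single-regressor setup and all names are illustrative, not the study's specification):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations X'X b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    # augmented normal-equations matrix [X'X | X'y]
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)]
         + [sum(row[i] * yi for row, yi in zip(X, y))]
         for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    beta = [0.0] * k
    for c in reversed(range(k)):
        beta[c] = (A[c][k] - sum(A[c][j] * beta[j]
                                 for j in range(c + 1, k))) / A[c][c]
    return beta

def distributed_lag(x, y, lags):
    """Fit y_t = b0 + sum_i b_i * x_(t-i) for i = 0..lags by OLS."""
    X = [[1.0] + [x[t - i] for i in range(lags + 1)]
         for t in range(lags, len(x))]
    return ols(X, y[lags:])
```

On exact synthetic data generated as y_t = 1 + 2·x_t + 0.5·x_{t-1}, the fit recovers the coefficients.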
Procedia PDF Downloads 1030
491 Estimation of Source Parameters and Moment Tensor Solution through Waveform Modeling of 2013 Kishtwar Earthquake
Authors: Shveta Puri, Shiv Jyoti Pandey, G. M. Bhat, Neha Raina
Abstract:
The Jammu and Kashmir region of the Northwest Himalaya has witnessed many devastating earthquakes in the recent past, yet it has remained unexplored by seismic investigations apart from scanty records of earthquakes that occurred in this region. In this study, we used local seismic data of the year 2013 recorded by the network of broadband seismographs in J&K. During this period, our seismic stations recorded about 207 earthquakes, including two moderate events of Mw 5.7 on 1st May 2013 and Mw 5.1 on 2nd August 2013. We analyzed the events of Mw 3-4.6 and the main events only (to minimize error) for source parameters, b-value and sense of movement through waveform modeling, in order to understand the seismotectonics and seismic hazard of the region. It was observed that most of the events are bounded between 32.9° N – 33.3° N latitude and 75.4° E – 76.1° E longitude; moment magnitude (Mw) ranges from 3 to 5.7, source radius (r) from 0.21 to 3.5 km, stress drop from 1.90 bars to 71.1 bars, and corner frequency from 0.39 – 6.06 Hz. The b-value for this region was found to be 0.83±0 from these events, which is lower than the normal value (b=1), indicating that the area is under high stress. The travel time inversion and waveform inversion methods suggest a focal depth of up to 10 km, probably above the detachment depth of the Himalayan region. The moment tensor solution of the Mw 5.1 (02:32:47 UTC) main event of 2nd August suggested that the source fault strikes at 295° with a dip of 33° and a rake value of 85°. These events form an intense cluster of small to moderate events within a narrow zone between the Panjal Thrust and the Kishtwar Window. Moment tensor solutions of the main events and their aftershocks indicate that thrust-type movement is occurring in this region.
Keywords: b-value, moment tensor, seismotectonics, source parameters
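Source radius and stress drop of the kind reported above are commonly obtained from the Brune source model using the corner frequency, together with the Hanks-Kanamori moment-magnitude relation. A sketch under an assumed shear-wave velocity of 3.5 km/s (the abstract does not state its velocity model); with that choice, the corner-frequency extremes 0.39 Hz and 6.06 Hz reproduce radii close to the reported 3.5 km and 0.21 km:

```python
import math

def brune_source(mw, fc, beta=3500.0):
    """Brune-model source parameters from moment magnitude and corner frequency.

    M0 (N*m) from Hanks-Kanamori: log10(M0) = 1.5 * Mw + 9.1.
    Source radius (m): r = 2.34 * beta / (2 * pi * fc), beta in m/s.
    Stress drop (Pa): delta_sigma = 7 * M0 / (16 * r**3).
    """
    m0 = 10.0 ** (1.5 * mw + 9.1)
    r = 2.34 * beta / (2.0 * math.pi * fc)
    stress_drop = 7.0 * m0 / (16.0 * r ** 3)
    return m0, r, stress_drop
```

Note that the inferred radius depends only on fc and the assumed velocity, while the stress drop also scales with seismic moment.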
Procedia PDF Downloads 314
490 Estimation of Hysteretic Damping in Steel Dual Systems with Buckling Restrained Brace and Moment Resisting Frame
Authors: Seyed Saeid Tabaee, Omid Bahar
Abstract:
Nowadays, energy dissipation devices are commonly used in structures. A high rate of energy absorption during earthquakes is the benefit of using such devices, which results in reduced damage to structural elements, specifically columns. The hysteretic damping capacity of energy dissipation devices is the key point, and it may considerably complicate the analysis and design of such structures. This effect is generally represented by equivalent viscous damping, which may be obtained from the expected hysteretic behavior under the design or maximum considered displacement of a structure. In this paper, the hysteretic damping coefficient of a steel moment resisting frame (MRF) whose performance is enhanced by a buckling restrained brace (BRB) system has been evaluated. Knowing the damping fraction between the BRB and MRF in advance is indispensable for seismic design procedures such as the Direct Displacement-Based Design (DDBD) method. This paper presents an approach to calculate the damping fraction for such systems by carrying out nonlinear time history analysis (NTHA) under harmonic loading tuned to the natural frequency of the system. Two steel moment frame structures, one equipped with a BRB and the other without, are studied simultaneously. The extensive analysis shows that the damping fraction of each system may be calculated from its share of the story shear. In this way, the contribution of each BRB in the floors and their overall contribution to the structural performance may be clearly recognized in advance.
Keywords: buckling restrained brace, direct displacement based design, dual systems, hysteretic damping, moment resisting frames
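The equivalent viscous damping mentioned above is conventionally defined as ξ_eq = E_D / (4π·E_S0), the energy dissipated per hysteresis loop over 4π times the elastic strain energy at peak displacement; for an elastic-perfectly-plastic loop this reduces to (2/π)(1 - 1/μ) with ductility μ. A minimal sketch of both forms (illustrative, not the paper's NTHA procedure):

```python
import math

def equivalent_damping(loop_energy, k_eff, d_max):
    """General definition: xi_eq = E_D / (4 * pi * E_S0),
    with E_S0 = 0.5 * k_eff * d_max**2 (k_eff = secant stiffness at d_max)."""
    es0 = 0.5 * k_eff * d_max ** 2
    return loop_energy / (4.0 * math.pi * es0)

def equivalent_damping_epp(mu):
    """Closed form for an elastic-perfectly-plastic loop with ductility
    mu = d_max / d_yield: xi_eq = (2 / pi) * (1 - 1 / mu)."""
    return (2.0 / math.pi) * (1.0 - 1.0 / mu)
```

For example, yield force 100, yield displacement 1 and peak displacement 2 give loop energy E_D = 4·f_y·(d_m - d_y) = 400 and secant stiffness 50, and both expressions return 1/π, about 0.318.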
Procedia PDF Downloads 434
489 Permeability Prediction Based on Hydraulic Flow Unit Identification and Artificial Neural Networks
Authors: Emad A. Mohammed
Abstract:
The concept of hydraulic flow units (HFU) has been used for decades in the petroleum industry to improve the prediction of permeability. This concept is strongly related to the flow zone indicator (FZI), which is a function of the reservoir rock quality index (RQI). Both indices are based on reservoir porosity and the permeability of core samples. It is assumed that core samples with similar FZI values belong to the same HFU. Thus, after dividing the porosity-permeability data by HFU, transformations can be applied to estimate permeability from porosity. The conventional practice is the power-law transformation using conventional HFU, where the percentage of error is considerably high. In this paper, a neural network is employed as a soft-computing transformation method to predict permeability instead of the power-law method, to avoid the high percentage of error. The technique is based on HFU identification, for which the method of Amaefule et al. (1993) is utilized. In this regard, the Kozeny-Carman (K-C) model and the modified K-C model of Hasan and Hossain (2011) are employed. A comparison is made between the two transformation techniques for the two porosity-permeability models. Results show that the modified K-C model helps in obtaining better results, with a lower percentage of error in predicting permeability. The results also show that the use of artificial intelligence techniques gives more accurate predictions than the power-law method. This study was conducted on a heterogeneous, complex carbonate reservoir in Oman. Data were collected from seven wells to obtain the permeability correlations for the whole field. The findings of this study will help in obtaining better estimates of the permeability of a complex reservoir.
Keywords: permeability, hydraulic flow units, artificial intelligence, correlation
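The RQI/FZI quantities underlying the HFU classification follow the standard Amaefule et al. (1993) definitions: RQI = 0.0314·√(k/φ) with k in mD and φ a fraction, φ_z = φ/(1-φ), and FZI = RQI/φ_z. A minimal sketch (function names are ours):

```python
import math

def flow_zone_indicator(k_md, phi):
    """FZI (microns) per Amaefule et al. (1993):
    RQI = 0.0314 * sqrt(k / phi), phi_z = phi / (1 - phi), FZI = RQI / phi_z."""
    rqi = 0.0314 * math.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z
```

Samples are then binned by FZI (e.g., on a log scale) and a porosity-permeability transform is fitted per bin, the step the paper replaces with a neural network.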
Procedia PDF Downloads 138
488 Thermal Effects on Wellbore Stability and Fluid Loss in High-Temperature Geothermal Drilling
Authors: Mubarek Alpkiray, Tan Nguyen, Arild Saasen
Abstract:
Geothermal drilling operations involve numerous challenges that increase well cost and nonproductive time. Fluid loss is one of the most undesirable problems and can lead to well abandonment in geothermal drilling. Lost circulation can occur due to natural fractures, high mud weight, and extremely high formation temperatures. This challenge may cause wellbore stability problems and lead to expensive drilling operations. Wellbore stability is the main domain that should be considered to mitigate or prevent fluid loss into the formation. This paper describes the causes of fluid loss in the Pamukoren geothermal field in Turkey. An integrated geomechanics approach and assessment is applied to help understand the fluid loss problems. In geothermal drilling, geomechanics is primarily based on rock properties, in-situ stress characterization, rock temperature, determination of stresses around the wellbore, and rock failure criteria. Since a high temperature difference between the wellbore wall and the drilling fluid is present, the temperature distribution along the wellbore is estimated and incorporated into the wellbore stability approach. This study reviewed geothermal drilling data to analyze temperature estimation along the wellbore, the causes of fluid loss, and the stored electric capacity of the reservoir. Our observations demonstrate the significant role of the geomechanical approach in understanding safe drilling operations in high-temperature wells. Fluid loss is encountered due to thermal stress effects around the borehole. This paper provides a wellbore stability analysis for a geothermal drilling operation to discuss the causes of lost circulation resulting in nonproductive time and cost.
Keywords: geothermal wells, drilling, wellbore stresses, drilling fluid loss, thermal stress
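The thermal stress effect described above can be illustrated with the standard thermoelastic hoop-stress term at the wellbore wall, σ_θT = E·α·ΔT/(1-ν): cooling by the drilling fluid (ΔT < 0) reduces the hoop stress and can open fractures, promoting losses. A hedged sketch (the input values below are illustrative, not Pamukoren field data):

```python
def thermal_hoop_stress(E, alpha, nu, dT):
    """Thermally induced hoop stress at the wellbore wall (plane strain):
    sigma_T = E * alpha * dT / (1 - nu).

    E: Young's modulus (Pa), alpha: linear thermal expansion (1/K),
    nu: Poisson's ratio, dT: wall temperature change (K); dT < 0 for
    cooling by the drilling fluid, giving a tensile (negative) contribution.
    """
    return E * alpha * dT / (1.0 - nu)
```

With, say, E = 20 GPa, α = 1e-5 1/K, ν = 0.25 and 50 K of cooling, the hoop stress drops by roughly 13 MPa, enough to matter in a fracture-opening assessment.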
Procedia PDF Downloads 197
487 Radiation Annealing of Radiation Embrittlement of the Reactor Pressure Vessel
Authors: E. A. Krasikov
Abstract:
The influence of neutron irradiation on RPV steel degradation is examined with reference to the possible reasons for the substantial experimental data scatter and, furthermore, the nonstandard (non-monotonic) and oscillatory embrittlement behavior. In our view, this phenomenon may be explained by the presence of a wavelike component in the embrittlement kinetics. We suppose that the main factor affecting anomalous steel embrittlement is the fast neutron intensity (dose rate or flux), and that the manifestation of the flux effect depends on the fluence level: at low fluences, radiation degradation exceeds the normative value, then approaches the normative level, and finally becomes subnormative. Data on radiation damage evolution, including data from ex-service RPVs, were obtained and analyzed taking into account the chemical factor, fast neutron fluence, and neutron flux. In our opinion, the controversy in estimates of the impact of neutron flux on radiation degradation may likewise be explained by this wavelike component. Moreover, as a hypothesis, we suppose that at some stages of irradiation the damaged metal is partially restored by the irradiation itself, i.e., by neutron bombardment. The structure that forms during irradiation undergoes one-time or periodic transformations in the direction of both degradation and recovery of the initial properties. According to our hypothesis, at some stage(s) of metal structure degradation, neutron bombardment becomes a recovering factor. As a result, an oscillation arises that in turn leads to enhanced data scatter.
Keywords: annealing, embrittlement, radiation, RPV steel
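To make the hypothesised "wavelike component in the embrittlement kinetics" concrete, the following sketch fits synthetic transition-temperature-shift data with a power-law trend plus an oscillatory term. Both the functional form and every parameter value here are illustrative assumptions for demonstration, not the authors' model or measured RPV data.

```python
# Hedged sketch: embrittlement shift = power-law trend + wavelike component,
#   shift(phi) = a * phi**n + b * sin(2*pi*phi / lam),
# fitted by non-linear least squares to noisy synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def shift(phi, a, n, b, lam):
    # phi: fast-neutron fluence (arbitrary units)
    return a * phi**n + b * np.sin(2 * np.pi * phi / lam)

rng = np.random.default_rng(0)
phi = np.linspace(0.1, 5.0, 60)
# synthetic "measurements": truth (40, 0.5, 8, 1.5) plus scatter
data = shift(phi, 40.0, 0.5, 8.0, 1.5) + rng.normal(0.0, 0.5, phi.size)

popt, _ = curve_fit(shift, phi, data, p0=[30.0, 0.6, 5.0, 1.5])
print("fitted (a, n, b, lam):", popt)
```

A statistically significant oscillatory amplitude b in such a fit would be one way to quantify the claimed non-monotonic behavior against the smooth trend.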
Procedia PDF Downloads 342
486 Determination and Distribution of Formation Thickness Using Seismic and Well Data in Baga/Lake Sub-basin, Chad Basin Nigeria
Authors: Gabriel Efomeh Omolaiye, Olatunji Seminu, Jimoh Ajadi, Yusuf Ayoola Jimoh
Abstract:
The Nigerian part of the Chad Basin has to date remained one of the least studied basins, with few published scholarly works, compared to other basins such as the Niger Delta, Dahomey, etc. This work was undertaken through the integration of 3D seismic interpretation and the analysis of data from eight wells fairly distributed across block A of the Baga/Lake sub-basin in the Borno Basin, with the aim of determining the thicknesses of the Chad, Kerri-Kerri, Fika, and Gongila Formations in the sub-basin. The Da-1 well (type well) used in this study was subdivided into stratigraphic units based on the regional stratigraphic subdivision of the Chad Basin and was later correlated with the other wells using the similarity of observed log responses. Combined density and sonic logs were used to generate synthetic seismograms for seismic-to-well ties. Five horizons were mapped, representing the tops of the formations on the 3D seismic data covering the block; an average velocity function with a maximum error/residual of 0.48% was adopted for the time-to-depth conversion of all the generated maps. There is a general thickening of sediments from west to east, and the estimated thicknesses of the formations in the Baga/Lake sub-basin are: Chad Formation (400-750 m), Kerri-Kerri Formation (300-1200 m), Fika Formation (300-2200 m), and Gongila Formation (100-1300 m). The thickness of the Bima Formation could not be established because the deepest well (Da-1) terminates within that formation. This is a modification to previous and widely referenced studies, spanning over four decades, that based the estimation of formation thickness within the study area on outcrops observed at different locations and on the use of a few well data.
Keywords: Baga/Lake sub-basin, Chad basin, formation thickness, seismic, velocity
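The time-to-depth conversion step described above can be sketched as follows. An average-velocity function maps two-way travel time on the seismic section to depth; the linear coefficients below are illustrative assumptions, not the field's regressed velocity function.

```python
# Hedged sketch of time-to-depth conversion with an assumed average-velocity
# function V_avg(t) = v0 + k*t; depth = V_avg * TWT / 2 (two-way time).
def depth_from_twt(twt_s, v0=1800.0, k=300.0):
    """Convert two-way travel time (s) to depth (m)."""
    v_avg = v0 + k * twt_s      # average velocity down to this reflector, m/s
    return v_avg * twt_s / 2.0  # halve: the wave travels down and back up

for t in (0.5, 1.0, 2.0):
    print(f"TWT {t:.1f} s -> depth {depth_from_twt(t):.0f} m")
```

Applying such a function to each mapped horizon's time grid produces the depth maps from which the formation thicknesses (top-to-top differences) are read.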
Procedia PDF Downloads 190
485 Downtime Modelling for the Post-Earthquake Building Assessment Phase
Authors: S. Khakurel, R. P. Dhakal, T. Z. Yeow
Abstract:
Downtime is one of the major sources (alongside damage and injury/death) of financial loss incurred by a structure in an earthquake. The length of downtime associated with a building after an earthquake varies depending on the time taken for the reaction (to the earthquake), decision (on the future course of action), and execution (of the decided course of action) phases. Post-earthquake assessment of buildings is a key step in the decision-making process, both to assign the appropriate safety placard and to decide whether a damaged building is to be repaired or demolished. The aim of the present study is to develop a model to quantify downtime associated with the post-earthquake building-assessment phase in terms of two parameters: i) the durations of the different assessment phases; and ii) the probabilities of the different colour tags. Post-earthquake assessment of buildings includes three stages: Level 1 Rapid Assessment, comprising a fast external inspection shortly after the earthquake; Level 2 Rapid Assessment, including a visit inside the building; and a Detailed Engineering Evaluation (if needed). In this study, the durations of all three assessment phases are first estimated from the total number of damaged buildings, the total number of available engineers, and the average time needed to assess each building. Then, the probability of the different tag colours is computed from the 2010-11 Canterbury Earthquake Sequence database. Finally, a downtime model for the post-earthquake building inspection phase is proposed based on the estimated phase lengths and the tag colour probabilities. This model is expected to be used for rapid estimation of seismic downtime within the Loss Optimisation Seismic Design (LOSD) framework.
Keywords: assessment, downtime, LOSD, Loss Optimisation Seismic Design, phase length, tag colour
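The two ingredients of the model described above can be sketched directly. All numbers below (building counts, engineer counts, inspection times, tag split) are illustrative assumptions, not the Canterbury-derived values.

```python
# Hedged sketch of the two-parameter downtime model:
# (i) assessment-phase duration from buildings, engineers, time/building;
# (ii) probabilities of the different tag colours.
def phase_duration_days(n_buildings, n_engineers, hours_per_building,
                        hours_per_day=8.0):
    """Days to inspect all buildings with all engineers working in parallel."""
    return n_buildings * hours_per_building / (n_engineers * hours_per_day)

tag_prob = {"green": 0.70, "yellow": 0.20, "red": 0.10}  # assumed split

d_l1 = phase_duration_days(10000, 200, 0.5)  # Level 1: fast external check
d_l2 = phase_duration_days(3000, 100, 2.0)   # Level 2: internal inspection
print(f"Level 1: {d_l1:.2f} days, Level 2: {d_l2:.2f} days")
```

The expected inspection downtime for a random building is then the phase durations weighted by the probability of its tag requiring each subsequent stage.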
Procedia PDF Downloads 185
484 Effect of Drag Coefficient Models concerning Global Air-Sea Momentum Flux in Broad Wind Range including Extreme Wind Speeds
Authors: Takeshi Takemoto, Naoya Suzuki, Naohisa Takagaki, Satoru Komori, Masako Terui, George Truscott
Abstract:
The drag coefficient is an important parameter for correctly estimating the air-sea momentum flux. However, its parameterization has not been established, due to the variation in field data. Instead, a number of drag coefficient model formulae have been proposed, even though almost none of these models address the extreme wind speed range. For such models, it is unclear how the drag coefficient changes in the extreme wind speed range as the wind speed increases. In this study, we investigated the effect of drag coefficient models on the air-sea momentum flux in the extreme wind range on a global scale by comparing two different drag coefficient models: one does not consider the extreme wind speed range, while the other does. We found that the difference between the models in the annual global air-sea momentum flux was small, because the occurrence frequency of strong winds (wind speeds of 20 m/s or more) was approximately 1%. However, we also found that the models differed in the middle latitudes, where the annual mean air-sea momentum flux is large and the occurrence frequency of strong winds is high. In addition, the estimated data showed that the difference between the models in the drag coefficient was large in the extreme wind speed range, the largest difference reaching 23% at wind speeds of 35 m/s or more. These results clearly show that the difference between the two drag coefficient models has a significant impact on the estimation of regional air-sea momentum flux in an extreme wind speed range such as that seen in a tropical cyclone environment. Furthermore, we estimated the air-sea momentum flux using several kinds of drag coefficient models.
We will also provide data from an observation tower and results from CFD (Computational Fluid Dynamics) concerning the influence of wind flow at and around the observation site.
Keywords: air-sea interaction, drag coefficient, air-sea momentum flux, CFD (Computational Fluid Dynamics)
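The comparison above can be sketched numerically: the momentum flux is tau = rho_air * C_d * U10^2, evaluated with two illustrative drag-coefficient formulations, one growing with wind speed (Large-and-Pond style) and one saturating at extreme winds. The coefficients are assumptions for illustration, not the two models compared in the paper.

```python
# Hedged sketch: effect of the drag-coefficient model on the momentum flux.
RHO_AIR = 1.2  # air density, kg/m^3

def cd_linear(u10):
    # 1e-3 * (0.49 + 0.065*U10) for U10 >= 10 m/s, held constant below
    return 1e-3 * (0.49 + 0.065 * max(u10, 10.0))

def cd_capped(u10):
    # same trend, but capped beyond 30 m/s to mimic high-wind saturation
    return min(cd_linear(u10), cd_linear(30.0))

def momentum_flux(u10, cd):
    """Wind stress tau in N/m^2 for 10 m wind speed u10 (m/s)."""
    return RHO_AIR * cd(u10) * u10**2

for u in (10.0, 20.0, 35.0):
    print(u, momentum_flux(u, cd_linear), momentum_flux(u, cd_capped))
```

At moderate winds the two formulations agree exactly, so annual global totals differ little; the divergence appears only above the cap, i.e., in tropical-cyclone-strength winds, mirroring the paper's finding.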
Procedia PDF Downloads 372
483 A Tool for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation of an institutional risk profile for endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to set up risk factors with just those values that matter most to a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level of file formats. Our goal is to make use of a domain expert knowledge base, aggregated from a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation and analysis of risk factors for a required dimension. The proposed methods improve the visibility of risk factor information and the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives, using domain expert knowledge and file format metadata automatically aggregated from linked open data sources. To facilitate decision making, the aggregated information about the risk factors is presented as a multidimensional vector, and the goal is to visualise particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: digital information management, file format, endangerment analysis, fuzzy models
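The "multidimensional vector" idea can be sketched as a weighted aggregation of per-format risk factors into a single endangerment score, with the institution's profile expressed as the weights. Factor names, values, weights, and thresholds below are illustrative assumptions, not the tool's actual configuration or fuzzy model.

```python
# Hedged sketch: institutional risk profile as a weight vector over risk
# factors, aggregated into an endangerment score per file format.
risk_factors = {              # 0 (safe) .. 1 (endangered), for one format
    "software_support": 0.8,
    "open_specification": 0.2,
    "community_adoption": 0.5,
}
weights = {                   # institution-specific emphasis, sums to 1
    "software_support": 0.5,
    "open_specification": 0.3,
    "community_adoption": 0.2,
}

def endangerment(factors, weights):
    """Weighted endangerment score in [0, 1]."""
    return sum(factors[k] * weights[k] for k in factors)

score = endangerment(risk_factors, weights)
level = "high" if score > 0.6 else "medium" if score > 0.3 else "low"
print(f"endangerment score {score:.2f} -> {level}")
```

A fuzzy-model version would replace the crisp thresholds with membership functions, but the profile-as-weight-vector structure is the same.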
Procedia PDF Downloads 406
482 Comparison of the Existing Damage Indices in Steel Moment-Resisting Frame Structures
Authors: Hamid Kazemi, Abbasali Sadeghi
Abstract:
Assessment of the seismic behavior of frame structures is carried out to evaluate loss of life and financial losses. New structural seismic behavior assessment methods have been proposed, so it is necessary to define a formulation, a damage index, by which the damage amount can be quantified and qualified. In this paper, four new steel moment-resisting frames with intermediate ductility and different heights (2, 5, 8, and 12 stories), with regular geometry and a simple rectangular plan, were assumed and designed. Three existing groups of damage indices were studied: local indices (Drift, Maximum Roof Displacement, Banon Failure, Kinematic, Banon Normalized Cumulative Rotation, Cumulative Plastic Rotation, and Ductility), global indices (Roufaiel and Meyer, Papadopoulos, Sozen, Rosenblueth, Ductility, and Base Shear), and story indices (Banon Failure and Inter-story Rotation). The parameters needed for these damage indices were calculated under the effect of far-fault ground motion records by non-linear dynamic time history analysis. Finally, the damage indices are prioritized according to which yields the more conservative (higher) damage estimate. The results show that the selected damage index has an important effect on the estimation of the damage state. Also, the Failure, Drift, and Rosenblueth damage indices are the more conservative indices for the local, story, and global groups, respectively.
Keywords: damage index, far-fault ground motion records, non-linear time history analysis, SeismoStruct software, steel moment-resisting frame
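To fix ideas, a drift-type local damage index of the kind compared above can be sketched as the demand normalised between yield and ultimate capacity. The threshold values below are illustrative assumptions, not the paper's frame capacities.

```python
# Hedged sketch of a drift-based local damage index:
# 0 = no damage (below yield), 1 = failure (at or beyond ultimate).
def drift_damage_index(theta_max, theta_yield, theta_ultimate):
    """Normalise peak inter-story drift between yield and ultimate drift."""
    if theta_max <= theta_yield:
        return 0.0
    return min((theta_max - theta_yield) / (theta_ultimate - theta_yield), 1.0)

# e.g. 1.8% peak drift against assumed 0.5% yield and 2.5% ultimate drift
di = drift_damage_index(0.018, 0.005, 0.025)
print(f"damage index: {di:.2f}")
```

Running such an index over the peak drifts from each non-linear time history analysis, story by story, is what allows the different index families to be ranked by conservatism.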
Procedia PDF Downloads 292
481 Functionalized Magnetic Iron Oxide Nanoparticles for Extraction of Protein and Metal Nanoparticles from Complex Fluids
Authors: Meenakshi Verma, Mandeep Singh Bakshi, Kultar Singh
Abstract:
Magnetic nanoparticles have received considerable attention in view of their diverse applications, which arise primarily from their response to an external magnetic field. The magnetic behaviour of magnetic nanoparticles (NPs) helps them in numerous ways, the most important being the ease with which they can be purified and separated from the medium in which they are present, merely by applying an external magnetic field. This exceptional ease of separation of magnetic NPs from an aqueous medium enables their use for extracting and removing metal pollutants from complex aqueous media. Functionalized magnetic NPs can be used for the extraction of metallic impurities if these are favourably adsorbed on the NP surfaces. We have successfully used magnetic NPs as vehicles for the removal of gold and silver NPs from complex fluids. The NPs loaded with gold and silver NP pollutant fractions were easily removed from the aqueous media by applying an external magnetic field. Similarly, we have used magnetic NPs for the extraction of protein from complex media, followed by repeated washing with pure water to eliminate unwanted surface-adsorbed components for quantitative estimation. The purified, protein-loaded magnetic NPs are best analyzed with SDS-PAGE, not only for characterization but also for separating the protein fractions. A collective review of the results indicates that we have synthesized surfactant-coated iron oxide NPs and then functionalized them with selected materials. These surface-active magnetic NPs work very well for the extraction of metallic NPs from the aqueous bulk and make the whole process environmentally sustainable. Also, magnetic NP-Au/Ag/Pd hybrids have excellent protein-extracting properties. They are much easier to use for extracting magnetic impurities as well as protein fractions under the effect of an external magnetic field, without any complex conventional purification methods.
Keywords: magnetic nanoparticles, protein, functionalized, extraction
Procedia PDF Downloads 102
480 Evaluation of the Total Antioxidant Capacity and Total Phenol Content of the Wild and Cultivated Variety of Aegle Marmelos (L) Correa Leaves Used in the Treatment of Diabetes
Authors: V. Nigam, V. Nambiar
Abstract:
Aegle marmelos leaf has been used as a remedy for various gastrointestinal infections and for lowering blood sugar levels in the traditional system of medicine in India, owing to the presence of various constituents such as flavonoids, tannins, and alkaloids (e.g., aegelin, marmelosin, luvangetin). The objective of the present study was to evaluate the total antioxidant activity and the total and individual phenol content of the wild and cultivated varieties of Aegle marmelos leaves, in order to assess the role of this plant in ethnomedicine in India. The methanolic extracts of the leaves were screened for total antioxidant capacity through the Ferric Reducing Antioxidant Potential (FRAP) and 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging assays, and for Total Phenol Content (TPC) through a spectrophotometric technique based on the Folin-Ciocalteu assay; for qualitative estimation of phenols, High Performance Liquid Chromatography (HPLC) was used. The TPC of the wild and cultivated varieties was 7.6% and 6.5%, respectively, whereas HPLC analysis for quantification of individual polyphenols revealed the presence of gallic acid, chlorogenic acid, and ferulic acid in the wild variety, and gallic acid, ferulic acid, and pyrocatechol in the cultivated variety. The FRAP values and IC50 values (DPPH) for the wild and cultivated varieties were 14.65 μmol/l and 11.80 μmol/l, and 437 μg/ml and 620 μg/ml, respectively; thus the leaf can be used as a potential inhibitor of free radicals. Since the wild variety had a higher antioxidant capacity than the cultivated one, it can be exploited further for its therapeutic applications. As Aegle marmelos is rich in antioxidants, it can be used as a food additive to delay the oxidative deterioration of foods and as a nutraceutical in medicinal formulations against degenerative diseases like diabetes.
Keywords: antioxidant activity, aegle marmelos, antidiabetic, nutraceutical
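The IC50 values quoted above come from standard DPPH-assay arithmetic: percent inhibition from absorbances, then the concentration giving 50% inhibition by interpolation. The sketch below uses invented absorbance readings chosen only to land in the reported concentration range; they are not the study's data.

```python
# Hedged sketch of DPPH IC50 estimation from absorbance readings.
def inhibition(abs_control, abs_sample):
    """Percent DPPH radical scavenging relative to the control."""
    return 100.0 * (abs_control - abs_sample) / abs_control

def ic50(concs, inhibitions):
    """Linear interpolation between the points bracketing 50% inhibition."""
    pairs = list(zip(concs, inhibitions))
    for (c0, i0), (c1, i1) in zip(pairs, pairs[1:]):
        if i0 < 50.0 <= i1:
            return c0 + (50.0 - i0) * (c1 - c0) / (i1 - i0)
    raise ValueError("50% inhibition not bracketed by the data")

concs = [100, 250, 500, 750]                           # ug/ml (assumed)
inh = [inhibition(0.80, a) for a in (0.70, 0.55, 0.38, 0.28)]
print(f"IC50 ~ {ic50(concs, inh):.0f} ug/ml")
```

A lower IC50 means less extract is needed to quench half the radicals, which is why the wild variety's 437 μg/ml indicates stronger antioxidant activity than the cultivated variety's 620 μg/ml.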
Procedia PDF Downloads 374
479 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid
Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong
Abstract:
Frequency control has become more of a concern for the reliable operation of interconnected power systems, due to the integration of low-inertia renewable energy sources into the grid and their volatility. Also, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric Vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. There is not adequate time to communicate with each individual EV in emergency cases, and thus an aggregate model is necessary for a quick response, to limit the frequency deviation and prevent the occurrence of any blackout. In this work, an aggregate of EVs is modelled as a big virtual battery in each area, considering various aspects of uncertainty, such as the number of connected EVs and their initial State of Charge (SOC), as stochastic variables. A control law was proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on the charging/discharging control of available EVs as shunt capacity in the distribution system. Three different cases were studied, considering the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs can help the generator shed its kinetic energy in a short time after a contingency. Earlier estimation of the possible contributions of EVs can help the supervisory control level transmit a prompt control signal to subsystems such as the aggregator agents and the grid. Thus, the percentage of EV contribution to EFR will be characterized in the future as the goal of this study.
Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function
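The virtual-battery idea can be sketched as follows: many plugged-in EVs are collapsed into one aggregate whose charge/discharge power opposes the frequency deviation, limited by the available capacity. A simple proportional law is used here purely for illustration; the paper derives its control from a Lyapunov energy function, and all fleet numbers are assumptions.

```python
# Hedged sketch of an aggregate-EV (virtual battery) frequency response.
def aggregate_limits(n_evs, p_ev_kw, soc, soc_min=0.2, soc_max=0.9):
    """Discharge/charge power limits (kW) of the virtual battery,
    gated by the aggregate state of charge."""
    p_dis = n_evs * p_ev_kw if soc > soc_min else 0.0
    p_chg = n_evs * p_ev_kw if soc < soc_max else 0.0
    return p_dis, p_chg

def ev_response_kw(delta_f_hz, gain_kw_per_hz, p_dis, p_chg):
    """Power injected by the fleet: under-frequency -> discharge (positive),
    over-frequency -> charge (negative), saturated at the fleet limits."""
    p = -gain_kw_per_hz * delta_f_hz
    return max(-p_chg, min(p, p_dis))

p_dis, p_chg = aggregate_limits(n_evs=5000, p_ev_kw=7.0, soc=0.6)
print(ev_response_kw(-0.3, 60000.0, p_dis, p_chg), "kW injected")
```

In the paper's setting the gain would be replaced by the Lyapunov-derived nonlinear law, but the saturation by fleet size and SOC, the stochastic quantities noted above, is the same bottleneck.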
Procedia PDF Downloads 100
478 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram
Authors: Mona Hejazi, Ali Motie Nasrabadi
Abstract:
Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide. Hence, there is a need for an efficient prediction model for making a correct diagnosis of epileptic seizures and an accurate prediction of their type. In this study, we consider how Effective Connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, enabling patients (and caregivers) to take appropriate precautions. We use this definition because we believe that effective connectivity begins to change near seizures, so seizures can be predicted from this feature. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six EEG channels from each patient are considered, and effective connectivity is estimated using the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained by the proposed scheme in predicting seizures is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method is more suitable for predicting epileptic seizures, and in general the best results are observed in the gamma and beta sub-bands. The research in this paper is significantly helpful for clinical applications, especially for the exploitation of online portable devices.
Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG
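Of the two connectivity measures, the Granger causality computation can be sketched compactly: fit a "reduced" autoregressive model (a channel's own past only) and a "full" model (adding the other channel's past), and compare residual variances, GC = ln(var_reduced / var_full). The synthetic two-channel signals and model order below are illustrative assumptions, not the Freiburg data or the paper's settings.

```python
# Hedged sketch of pairwise Granger causality between two channels.
import numpy as np

def ar_residual_var(y, regressors):
    """Least-squares AR fit; return the residual variance."""
    X = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger(x, y, p=2):
    """GC from x to y: does x's past improve prediction of y?"""
    n = len(y)
    tgt = y[p:]
    own = [y[p - k: n - k] for k in range(1, p + 1)]      # y's own lags
    other = [x[p - k: n - k] for k in range(1, p + 1)]    # x's lags
    return np.log(ar_residual_var(tgt, own) /
                  ar_residual_var(tgt, own + other))

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):        # y is driven by x's past, not vice versa
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(f"GC x->y: {granger(x, y):.3f}, GC y->x: {granger(y, x):.3f}")
```

The strong asymmetry (large GC from the driving channel, near-zero in reverse) is the directional information that, tracked over time and per sub-band, serves as the seizure-prediction feature; DTF generalises this to the multichannel, frequency-resolved case.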
Procedia PDF Downloads 469
477 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements
Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono
Abstract:
The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and the study of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the motion of the femur relative to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in the estimation of hip joint centre location using functional methods is soft tissue artefact, due to the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, as the accuracy of functional methods depends upon it. Various studies have described the movement of soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method for assessing the displacements of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the artefact skin marker displacements are non-linear and larger in areas closer to the hip joint. Also, marker displacements depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can serve as a basis for a correction procedure for hip joint kinematics.
Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement
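The core geometric step of such a non-invasive estimate can be sketched as follows: each skin marker is projected inward along the skin normal by the ultrasound-measured tissue thickness to a bone-anchored point, and the artefact is the change of that point between postures (both expressed in the bone frame, a transformation omitted here for brevity). All coordinates and thicknesses below are illustrative assumptions, not the study's measurements.

```python
# Hedged sketch: marker displacement relative to bone from mocap + ultrasound.
import numpy as np

def bone_point(marker_xyz, skin_normal, tissue_thickness_mm):
    """Push a skin marker inward along the skin normal by the measured
    tissue thickness to estimate the underlying bone-surface point."""
    n = np.asarray(skin_normal, float)
    n = n / np.linalg.norm(n)
    return np.asarray(marker_xyz, float) - tissue_thickness_mm * n

# same marker in a neutral posture and in flexion (assumed numbers, mm)
p_neutral = bone_point([120.0, 45.0, 80.0], [0.0, 0.0, 1.0], 18.0)
p_flexion = bone_point([123.5, 44.0, 79.0], [0.1, 0.0, 1.0], 21.0)
artefact = float(np.linalg.norm(p_flexion - p_neutral))
print(f"soft tissue artefact: {artefact:.1f} mm")
```

Repeating this over markers and movements yields the displacement maps that show larger, non-linear artefacts near the joint and in abduction.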
Procedia PDF Downloads 283
476 Salting Effect in Partially Miscible Systems of Water/Acetic Acid/1-Butanol at 298.15 K: Experimental Study and Estimation of New Solvent-Solvent and Salt-Solvent Binary Interaction Parameters for NRTL Model
Authors: N. Bourayou, A. -H. Meniai, A. Gouaoura
Abstract:
The presence of salt can either raise or lower the distribution coefficient of a solute, here acetic acid, in liquid-liquid equilibria. The distribution coefficient of the solute is defined as the ratio of the composition of the solute in the solvent-rich phase to its composition in the diluent (water)-rich phase. These phenomena are known as salting-out and salting-in, respectively. The effects of a monovalent salt, sodium chloride, and a bivalent salt, sodium sulfate, on the distribution of acetic acid between 1-butanol and water at 298.15 K were experimentally shown to modify the liquid-liquid equilibrium of the water/acetic acid/1-butanol system in favour of the solvent extraction of acetic acid from an aqueous solution with 1-butanol, particularly at high concentrations of either salt. Both salts studied were found to have a salting-out effect for acetic acid, in varying degrees. The experimentally measured data were well correlated by the Eisen-Joffe equation. The NRTL model for solvent mixtures containing salts was able to provide a good correlation of the present liquid-liquid equilibrium data, using the regressed salt concentration coefficients for the salt-solvent interaction parameters and the solvent-solvent interaction parameters obtained from the same system without salt. The calculated phase equilibrium was in quite good agreement with the experimental data, showing the ability of the NRTL model to correlate the salt effect on the liquid-liquid equilibrium.
Keywords: activity coefficient, Eisen-Joffe, NRTL model, sodium chloride
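For reference, the binary NRTL activity-coefficient equations that underlie such correlations can be sketched directly. The interaction parameters below are illustrative values, not the regressed water/acetic acid/1-butanol (or salt-solvent) parameters reported in the paper.

```python
# Hedged sketch of the binary NRTL activity-coefficient model:
#   ln g1 = x2^2 [ tau21 (G21/(x1 + x2 G21))^2 + tau12 G12/(x2 + x1 G12)^2 ]
# with Gij = exp(-alpha * tauij), and symmetrically for ln g2.
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Return (ln gamma1, ln gamma2) for a binary mixture."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return ln_g1, ln_g2

ln_g1, ln_g2 = nrtl_binary(0.3, tau12=1.5, tau21=0.8)
print(f"gamma1 = {math.exp(ln_g1):.3f}, gamma2 = {math.exp(ln_g2):.3f}")
```

Extending the tau parameters with salt-concentration-dependent coefficients, as the paper does, is what lets the same functional form capture the salting-out shift of the tie lines.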
Procedia PDF Downloads 284
475 Demographic Diversity in the Boardroom and Firm Performance: Empirical Evidence in the French Context
Authors: Elhem Zaatir, Taher Hamza
Abstract:
Several governments seek to implement gender parity on boards, but the results of doing so are not clear and could harm corporations and economies. The present paper aims to investigate the relationship between women's presence on boards and firm performance in the context of French listed firms during the quota period. A dynamic panel generalized method of moments estimation is applied to control for the endogeneity of board structure and the reverse causality of financial performance. Our results show that the impact of gender diversity manifests in conflicting directions: positively affecting accounting performance and negatively influencing market performance. These results suggest that female directors create economic value, but the market discounts their impact. Apparently, they are subject to a biased evaluation by the market, which undervalues their presence on boards. In addition, our results confirm the twofold nature of female representation in the French market: the effect of female directorship on firm performance varies with the affiliation of the directors. In other words, the positive impact of gender diversity on return on assets primarily originates from non-family-affiliated women directors rather than from family-affiliated women directors. Finally, according to our results, women's demographic attributes, namely education level and multiple directorships, strongly and positively impact firm performance as measured by return on assets (ROA). Evidently, women directors seem to be appointed for the business case rather than as token directors.
Keywords: corporate governance, board of directors, women, gender diversity, demographic attributes, firm performance
Procedia PDF Downloads 129
474 Estimation of Particle Size Distribution Using Magnetization Data
Authors: Navneet Kaur, S. D. Tiwari
Abstract:
Magnetic nanoparticles possess fascinating properties which make their behavior unique in comparison to the corresponding bulk materials. Superparamagnetism is one such interesting phenomenon, exhibited only by small particles of magnetic materials. In this state, the thermal energy of the particles becomes larger than their magnetic anisotropy energy, so the particle magnetic moment vectors fluctuate between states of minimum energy. This situation is similar to the paramagnetism of non-interacting ions and is termed superparamagnetism. The magnetization of such systems has been described by the Langevin function. However, the fit parameters estimated in this case are found to be unphysical, owing to the neglect of the particle size distribution. In this work, an analysis of magnetization data on NiO nanoparticles is presented that takes the particle size distribution into account. NiO nanoparticles of two different sizes were prepared by heating freshly synthesized Ni(OH)₂ at different temperatures. Room temperature X-ray diffraction patterns confirm the formation of a single phase of NiO. The diffraction lines are quite broad, indicating the nanocrystalline nature of the samples. The average crystallite sizes are estimated to be about 6 and 8 nm. The samples were also characterized by transmission electron microscopy. The magnetization of both samples was measured as a function of temperature and applied magnetic field. Zero-field-cooled and field-cooled magnetization were measured as a function of temperature to determine the bifurcation temperature. The magnetization was also measured at several temperatures in the superparamagnetic region. The data were fitted to an appropriate expression incorporating a distribution in particle size, following a least-squares fit procedure. The computer codes were written in Python. The presented analysis is found to be very useful for estimating the particle size distribution present in the samples. The estimated distributions are compared with those determined from transmission electron micrographs.
Keywords: anisotropy, magnetization, nanoparticles, superparamagnetism
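The fitting idea described above can be sketched in Python: a Langevin function averaged over a lognormal distribution of particle moments, fitted by least squares. The median moment, distribution width, and field range below are illustrative assumptions chosen for the demonstration, not the NiO results.

```python
# Hedged sketch: superparamagnetic magnetization as a Langevin function
# averaged over a lognormal moment distribution, fitted by least squares.
import numpy as np
from scipy.optimize import curve_fit

MU_B, K_B = 9.274e-24, 1.381e-23  # Bohr magneton (J/T), Boltzmann (J/K)

def magnetization(B, mu0, sigma, scale, T=300.0, n=64):
    """Langevin magnetization vs field B (T), averaged over a lognormal
    distribution of moments; mu0 is the median moment in Bohr magnetons."""
    m = np.exp(np.linspace(np.log(mu0) - 3 * sigma,
                           np.log(mu0) + 3 * sigma, n))   # moment grid
    w = np.exp(-np.log(m / mu0)**2 / (2 * sigma**2))      # lognormal weights
    w /= w.sum()
    x = np.outer(m * MU_B, B) / (K_B * T)
    L = 1.0 / np.tanh(x) - 1.0 / x                        # Langevin function
    return scale * (w @ L)

B = np.linspace(0.05, 4.0, 40)             # applied field (T)
data = magnetization(B, 900.0, 0.4, 1.0)   # synthetic "measured" curve
popt, _ = curve_fit(magnetization, B, data, p0=[600.0, 0.3, 1.2],
                    bounds=([100.0, 0.05, 0.1], [3000.0, 1.0, 5.0]))
print("fitted (mu0 [mu_B], sigma, scale):", popt)
```

With the distribution included, the fitted median moment and width stay physical; converting the moment distribution to a size distribution (via the particle volume) is then what allows the comparison with the TEM histograms.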
Procedia PDF Downloads 143