Search results for: bound estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2302

1762 A Data Driven Approach for the Degradation of a Lithium-Ion Battery Based on Accelerated Life Test

Authors: Alyaa M. Younes, Nermine Harraz, Mohammad H. Elwany

Abstract:

Lithium-ion batteries are currently used for many applications, including satellites, electric vehicles and mobile electronics. Their ability to store a relatively large amount of energy in a limited space makes them most appropriate for critical applications. Evaluation of the life of these batteries and their reliability therefore becomes crucial to the systems they support. The reliability of Li-ion batteries has mainly been considered in terms of their lifetime. However, another factor that can be considered critical in many applications, such as electric vehicles, is the cycle duration. The present work presents the results of an experimental investigation of the degradation behavior of a laptop Li-ion battery (type TKV2V) and the effect of the applied load on the battery cycle time. The reliability was evaluated using an accelerated life test. Least-squares linear regression with median rank estimation was used to estimate the Weibull distribution parameters needed for the reliability functions. The probability density function, failure rate and reliability function under each of the applied loads were evaluated and compared. An inverse power law model is introduced that can predict cycle time at any given stress level.
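The estimation procedure described above, median rank estimation followed by least-squares regression on the linearized Weibull CDF, can be sketched as follows; this is a minimal illustration using Bernard's median-rank approximation, and any failure-time data passed in would be synthetic, not the paper's measurements:

```python
import math

def weibull_median_rank_fit(failure_times):
    """Estimate Weibull shape (beta) and scale (eta) by least-squares
    regression on median ranks (Bernard's approximation)."""
    t = sorted(failure_times)
    n = len(t)
    # Bernard's median-rank approximation for the i-th ordered failure
    F = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
    # Linearized Weibull CDF: ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)
    x = [math.log(ti) for ti in t]
    y = [math.log(-math.log(1.0 - Fi)) for Fi in F]
    xbar, ybar = sum(x) / n, sum(y) / n
    beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))
    eta = math.exp(xbar - ybar / beta)  # from the fitted intercept
    return beta, eta

def reliability(t, beta, eta):
    """Weibull reliability function R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))
```

With the two parameters in hand, the probability density, failure rate and reliability functions compared in the paper all follow in closed form.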

Keywords: accelerated life test, inverse power law, lithium-ion battery, reliability evaluation, Weibull distribution

Procedia PDF Downloads 152
1761 Digital Metroliteracies: Space, Diversity and Identity

Authors: Sender Dovchin, Alastair Pennycook

Abstract:

This paper looks at the relationship between online space, urban space and digital literacies. The everyday digital literacy practices of Facebook users (with a particular focus on young urban Mongolians) can be understood as ‘metrolingual’ because of the varied ways in which linguistic and cultural resources, spatial repertoires, and online activities are bound together to make meaning. Whereas the initial development of the term metrolingualism was dependent on a notion of physical urban space, we argue here that the digital practices of these Facebook users perform a range of social and cultural identities (sexual, ethnic, and class-based) that are both part of, and adjacent to, the metrolingual fabric.

Keywords: metrolingualism, digital literacy, Mongolia, Facebook

Procedia PDF Downloads 212
1760 The Effect of Artesunate on Myeloperoxidase Activity of Human Polymorphonuclear Neutrophil

Authors: J. B. Minari, O. B. Oloyede, A. A. Odutuga

Abstract:

Myeloperoxidase is the most abundant enzyme found in the polymorphonuclear neutrophil and is known to play a central role in the host defense system of the leukocyte. The enzyme has been reported to interact with some drugs to generate free radicals that inhibit its activity. This study investigated the effects of artesunate on the activity of the enzyme and the subsequent effect on the host immune system. In investigating the effects of the drug on myeloperoxidase, the influence of concentration and pH, the estimation of the partition ratio, and the kinetics of inhibition were studied. This study showed that artesunate is a concentration-dependent inhibitor of myeloperoxidase with an IC50 of 0.078 mM. Partition ratio estimation showed that 60 enzymatic turnover cycles are required for complete inhibition of myeloperoxidase in the presence of artesunate. The influence of pH on the effect of artesunate on the enzyme showed the least activity of myeloperoxidase at physiological pH. The kinetic inhibition studies showed that artesunate caused competitive inhibition, with an increase in the Km value from 0.12 mM to 0.26 mM and no effect on the Vmax value. The Ki value was estimated to be 2.5 mM. The results obtained from this study show that artesunate is a potent inhibitor of myeloperoxidase and is capable of inactivating the enzyme. It is considered that the inhibition of myeloperoxidase in the presence of artesunate, as revealed in this study, may partly explain the impairment of the polymorphonuclear neutrophil and the consequent reduction in the strength of the host defense system against secondary infections.
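The kinetic pattern reported above (apparent Km rising from 0.12 mM to 0.26 mM while Vmax stays fixed) is the textbook signature of competitive inhibition, which can be illustrated with the standard rate law; any inhibitor concentration plugged in is illustrative, not a value measured in the study:

```python
def competitive_rate(s, vmax, km, inhibitor=0.0, ki=float("inf")):
    """Michaelis-Menten rate with competitive inhibition:
    v = Vmax*[S] / (Km*(1 + [I]/Ki) + [S]).
    Vmax is unchanged; only the apparent Km rises with inhibitor."""
    km_app = km * (1.0 + inhibitor / ki)
    return vmax * s / (km_app + s)
```

With [I] = Ki the apparent Km doubles, so half-maximal velocity is reached at twice the substrate concentration, while saturating substrate still drives the rate to Vmax.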

Keywords: myeloperoxidase, artesunate, inhibition, neutrophil

Procedia PDF Downloads 350
1759 Generalized Up-downlink Transmission using Black-White Hole Entanglement Generated by Two-level System Circuit

Authors: Muhammad Arif Jalil, Xaythavay Luangvilay, Montree Bunruangses, Somchat Sonasang, Preecha Yupapin

Abstract:

Black and white holes form the entangled pair ⟨BH│WH⟩, where a white hole occurs when the particle moves at the same speed as light. The entangled black-white hole pair is at the center, with a radian between the gap. When the particle moves slower than light, the black hole is gravitational (positive gravity) and the white hole is smaller than the black hole. On the downstream side, the black hole outside the gap grows until the white hole disappears, which is the emptiness paradox. On the upstream side, when the particle moves faster than light, the white hole forms a time tunnel and the black hole becomes smaller; as the particle moves faster still, the black hole disappears and becomes a wormhole (singularity) that is only a white hole in emptiness. This research studies the use of black and white holes generated by a two-level system circuit as carriers for communication transmission, from which a high capability and capacity of data transmission can be obtained. The black and white hole pair can be generated by the two-level system circuit when the speed of a particle in the circuit equals the speed of light. The black hole forms when the particle speed increases from slower than to equal to the light speed, while the white hole is established when the particle comes back down from faster than light. They are bound as the entangled pair of signal and idler, ⟨Signal│Idler⟩, with the virtual one for the white hole, which has an angular displacement of half of π radian. A two-level system is made from an electronic circuit to create black and white holes bound by entangled bits that are immune to cloning and thus safe from thieves. The scheme starts by creating wave-particle behavior; when the particle speed equals that of light, the black hole is in the middle of the entangled pair, which forms a two-bit gate. The required information can be input into the system and wrapped by the black hole carrier. A time tunnel occurs when the wave-particle speed is faster than light, at which point the entangled pair collapses; the transmitted information is then safely inside the time tunnel. The required time and space can be modulated via the input for the downlink operation. The downlink is established when the particle speed, given in frequency (energy) form, comes down and enters the entangled gap, where this time the white hole is established. The information, with the required destination, is wrapped by the white hole and retrieved by the clients at the destination. The black and white holes then disappear, and the information can be recovered and used.

Keywords: cloning free, time machine, teleportation, two-level system

Procedia PDF Downloads 57
1758 Study on Effect of Reverse Cyclic Loading on Fracture Resistance Curve of Equivalent Stress Gradient (ESG) Specimen

Authors: Jaegu Choi, Jae-Mean Koo, Chang-Sung Seok, Byungwoo Moon

Abstract:

Since massive earthquakes around the world have been reported recently, the safety of nuclear power plants under seismic loading has become a significant issue. Seismic loading is reverse cyclic loading, consisting of repeated tension and compression induced by longitudinal and transverse waves. Up to now, the study of fracture toughness characteristics under reverse cyclic loading has been unsatisfactory. Therefore, it is necessary to obtain the fracture toughness under reverse cyclic load for the integrity estimation of nuclear power plants under seismic load. Fracture resistance (J-R) curves, which are used for the determination of fracture toughness or for integrity estimation in terms of elastic-plastic fracture mechanics, can be derived by the fracture resistance test using the single-specimen technique. The objective of this paper is to study the effects of reverse cyclic loading on the fracture resistance curve of the equivalent stress gradient (ESG) specimen, which has a stress gradient similar to that at the crack surface of a real pipe. For this, we carried out fracture toughness tests under reverse cyclic loading while changing the incremental plastic displacement. Test results showed that the J-R curves decreased with a decrease of the incremental plastic displacement.

Keywords: reverse cyclic loading, j-r curve, ESG specimen, incremental plastic displacement

Procedia PDF Downloads 369
1757 Evaluation of Earthquake Induced Cost for Mid-Rise Buildings

Authors: Gulsah Olgun, Ozgur Bozdag, Yildirim Ertutar

Abstract:

This paper focuses mainly on the performance assessment of buildings by associating the damage level with the damage cost. For this purpose, a methodology is explained and applied to a representative mid-rise concrete building located in Izmir. In order to consider uncertainties in the occurrence of earthquakes, structural analyses are conducted for all possible earthquakes in the region through the hazard curve. By means of the analyses, the probabilities of the structural response being in different limit states are obtained and used to calculate the expected damage cost. The expected damage cost comprises diverse earthquake-related cost components, such as the cost of casualties and the replacement or repair cost of the building. In this study, inter-story drift is used as an effective response variable to associate the expected damage cost with different damage levels. The structural analysis methods performed to obtain inter-story drifts are the response spectrum method as a linear method, and pushover and time history methods to demonstrate the nonlinear effects on loss estimation. Comparison of the results indicates that each method provides similar values of expected damage cost. To sum up, this paper explains an approach that makes it possible to minimize the expected damage cost of buildings and to relate the performance level to the damage cost.
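The cost-aggregation step described above can be sketched minimally: map a drift ratio to a damage limit state, then take the probability-weighted sum of per-state costs. The drift thresholds and cost figures in any call are hypothetical; the paper's actual limit-state definitions and cost components are more detailed:

```python
def classify_limit_state(drift, thresholds):
    """Map an inter-story drift ratio to a damage limit state index
    using increasing drift thresholds (e.g. slight/moderate/severe)."""
    state = 0
    for t in thresholds:
        if drift > t:
            state += 1
    return state

def expected_damage_cost(state_probs, state_costs):
    """Expected earthquake-induced cost as the probability-weighted
    sum of the costs attached to each damage limit state."""
    if abs(sum(state_probs) - 1.0) > 1e-9:
        raise ValueError("limit-state probabilities must sum to 1")
    return sum(p * c for p, c in zip(state_probs, state_costs))
```

In the paper's setting, the state probabilities come from the hazard curve and the structural analyses, so the same weighted sum can be evaluated per analysis method and compared.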

Keywords: expected damage cost, limit states, loss estimation, performance based design

Procedia PDF Downloads 252
1756 Human Absorbed Dose Estimation of a New In-111 Imaging Agent Based on Rat Data

Authors: H. Yousefnia, S. Zolghadri

Abstract:

The measurement of organ radiation exposure dose is one of the most important initial steps in developing a new radiopharmaceutical. In this study, dosimetric studies of a novel agent for SPECT imaging of bone metastasis, the 111In-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraethylene phosphonic acid (111In-DOTMP) complex, were carried out to estimate the dose in human organs based on data derived from rats. The radiolabeled complex was prepared with high radiochemical purity under optimal conditions. Biodistribution studies of the complex were carried out in male Syrian rats at selected times after injection (2, 4, 24 and 48 h). The human absorbed dose estimation of the complex was made based on data derived from the rats using the radiation absorbed dose assessment resource (RADAR) method. The 111In-DOTMP complex was prepared with a high radiochemical purity of >99% (ITLC). The total body effective absorbed dose for 111In-DOTMP was 0.061 mSv/MBq. This value is comparable to those of other clinically used 111In complexes. The results show that the doses to the critical organs are satisfactorily within the acceptable range for diagnostic nuclear medicine procedures. Generally, 111In-DOTMP has interesting characteristics and can be considered a viable agent for SPECT imaging of bone metastasis in the near future.
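The RADAR schema used above reduces, per organ, to multiplying the number of disintegrations per unit administered activity (the residence time) by a tabulated dose factor, then tissue-weighting the organ doses into an effective dose. A minimal sketch; the residence times, dose factors and weights in any call are illustrative, not the paper's values:

```python
def organ_dose_msv_per_mbq(residence_time_h, dose_factor):
    """RADAR schema: organ dose = N x DF, where N is the number of
    disintegrations per unit administered activity (residence time,
    MBq-h/MBq) and DF is a tabulated dose factor (mSv/MBq-h)."""
    return residence_time_h * dose_factor

def effective_dose(organ_doses, tissue_weights):
    """Effective dose as the tissue-weighted sum of organ doses."""
    return sum(d * w for d, w in zip(organ_doses, tissue_weights))
```

The residence times themselves are what the rat biodistribution data (2, 4, 24 and 48 h time points) are extrapolated to provide.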

Keywords: In-111, DOTMP, Internal Dosimetry, RADAR

Procedia PDF Downloads 390
1755 CPPI Method with Conditional Floor: The Discrete Time Case

Authors: Hachmi Ben Ameur, Jean Luc Prigent

Abstract:

We propose an extension of the CPPI method based on conditional floors. In this framework, we examine in particular the TIPP and margin-based strategies. These methods allow the investor to keep part of past gains and to protect the portfolio value against future large drawdowns of the financial market, while, as with the standard CPPI method, still benefiting from potential market rises. To control the risk of such strategies, we introduce both the Value-at-Risk (VaR) and Expected Shortfall (ES) risk measures. For each of these criteria, we show that the conditional floor must be higher than a lower bound. We illustrate these results for a quite general ARCH-type model, including the EGARCH(1,1) as a special case.
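A discrete-time sketch of a TIPP-style conditional floor may help fix ideas: each period the risky exposure is a multiple of the cushion, and the floor is ratcheted up to a fraction of the portfolio value, locking in part of past gains. The multiplier and floor fraction below are illustrative, and the paper's VaR/ES-based lower bounds on the floor are not modeled:

```python
def tipp_path(returns, v0=100.0, multiplier=4.0, floor_frac=0.8, risk_free=0.0):
    """Discrete-time CPPI with a TIPP-style conditional (ratcheted) floor.
    Exposure = multiplier * cushion, capped at the portfolio value;
    the floor only ever moves up with the portfolio."""
    v, floor = v0, floor_frac * v0
    for r in returns:
        exposure = min(max(multiplier * (v - floor), 0.0), v)  # risky allocation
        v = exposure * (1.0 + r) + (v - exposure) * (1.0 + risk_free)
        floor = max(floor * (1.0 + risk_free), floor_frac * v)  # ratchet up
    return v, floor
```

In a sustained crash the exposure shrinks geometrically and the portfolio stays above the floor (as long as a single-period loss never exceeds 1/multiplier), while a rally lifts the floor and locks in gains.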

Keywords: CPPI, conditional floor, ARCH, VaR, expected shortfall

Procedia PDF Downloads 287
1754 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys would perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
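The template-comparison step can be illustrated with a minimal chi-square classifier over hypothetical two-color templates. This is only the skeleton of the idea: the actual method works with full probability density functions over the ~65,000-template library rather than a single best-fit template:

```python
def classify_by_templates(observed, errors, templates):
    """Pick the template (class label) minimizing chi-square between
    observed colors and template colors, given per-band errors.
    `templates` is a list of (label, model_colors) pairs."""
    best_label, best_chi2 = None, float("inf")
    for label, model in templates:
        chi2 = sum(((o - m) / e) ** 2
                   for o, m, e in zip(observed, model, errors))
        if chi2 < best_chi2:
            best_label, best_chi2 = label, chi2
    return best_label, best_chi2
```

Turning each chi-square into a likelihood, exp(-chi2/2), and normalizing over the library is what yields the probability density functions that the unified classification and redshift estimation share.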

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 56
1753 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves

Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira

Abstract:

Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help in monitoring large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of digital images of tomatoes, collected directly from the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for the identification of healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel in the image, and the pixel classes are used to repaint the original tomato images using a color representation that highlights the injuries on the plant. The new images have only green, red or black pixels, according to whether they came from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in the detection and estimation of the level of damage on tomato leaves caused by late blight.
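One plausible way to combine the two per-pixel network outputs into the green/red/black repainting described above is sketched below. The threshold and the tie-breaking rule are assumptions for illustration, not details taken from the paper:

```python
GREEN, RED, BLACK = (0, 255, 0), (255, 0, 0), (0, 0, 0)

def repaint_pixel(p_healthy, p_injured, threshold=0.5):
    """Combine the outputs of the 'healthy' and 'injured' networks for
    one pixel: green for healthy leaf, red for injured leaf, black for
    background (neither network fires above the threshold)."""
    if p_healthy >= threshold and p_healthy >= p_injured:
        return GREEN
    if p_injured >= threshold:
        return RED
    return BLACK
```

Applying this rule to every pixel yields the three-color diagnostic image, and the fraction of red pixels over leaf pixels gives a simple damage-level estimate.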

Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary

Procedia PDF Downloads 312
1752 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study

Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa

Abstract:

The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil, more quickly, than is possible by sampling and laboratory testing. It therefore has the potential both to save cost and to assess soil properties rapidly and continuously. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which parameters should be included as inputs to improve the prediction. Neural networks discover relationships in the input data sets through the iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN), one of the powerful neural network architectures, is utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions and calculated overburden pressures, was obtained from a large project in the United Arab Emirates and used for the training and validation of the neural network. A comparison was made between the results obtained from the ANN approach, some common traditional correlations that predict Φ and E from CPT results, and the actual measurements in the collected data. The results show that the ANN is a very powerful tool: very good agreement was obtained between the ANN estimates and the actual measured results, in comparison to the other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models. It is shown that the use of the friction ratio in the estimation of Φ and the use of the fines content in the estimation of E considerably improve the prediction models.
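A GRNN prediction is a Gaussian-kernel weighted average of the training targets (the Nadaraya-Watson form), with the kernel spread sigma as its only free parameter. A minimal sketch; treating CPT features such as cone resistance and friction ratio as the input vector is an assumption about the feature layout, not the paper's exact setup:

```python
import math

def grnn_predict(x_train, y_train, x, sigma=1.0):
    """General Regression Neural Network prediction: a Gaussian-kernel
    weighted average of the training targets, where each weight decays
    with squared distance from the query point x."""
    weights = []
    for xi in x_train:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        weights.append(math.exp(-d2 / (2.0 * sigma ** 2)))
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, y_train)) / total
```

Because the training data themselves form the "pattern layer", a GRNN needs no iterative weight training, which is part of why it suits moderate-size geotechnical data sets.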

Keywords: angle of internal friction, cone penetrating test, general regression neural network, soil modulus of elasticity

Procedia PDF Downloads 404
1751 The Maximum Throughput Analysis of UAV Datalink 802.11b Protocol

Authors: Inkyu Kim, SangMan Moon

Abstract:

The IEEE 802.11b protocol provides a data rate of up to 11 Mbps, whereas the aerospace industry seeks a higher-data-rate COTS data link system for the UAV. The total maximum throughput (TMT) and delay time have been studied by many researchers in past years. This paper provides the theoretical data throughput performance of a UAV formation flight data link using existing 802.11b performance theory. We operate the UAV formation flight with more than 30 quadcopters using the 802.11b protocol. We predict that the number of UAVs in formation flight is bounded by the performance limitations of the data link protocol.
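The theoretical maximum throughput of a single 802.11b sender can be sketched from the standard timing parameters (long PLCP preamble, average backoff of CWmin/2 slots): the payload bits are divided by the duration of the full DATA/ACK exchange. The MAC overhead figure below is an assumed round number, not a value from the paper:

```python
# IEEE 802.11b timing parameters (long PLCP preamble), in microseconds
SLOT, SIFS, DIFS, PLCP = 20.0, 10.0, 50.0, 192.0
CW_MIN = 31  # average backoff = CW_MIN/2 slots

def tmt_mbps(payload_bytes, rate_mbps=11.0, mac_overhead_bytes=34, ack_bytes=14):
    """Theoretical maximum throughput of one 802.11b sender: payload
    bits divided by the time of a full DIFS/backoff/DATA/SIFS/ACK cycle.
    Since all times are in microseconds, bits/us equals Mbps."""
    backoff = (CW_MIN / 2.0) * SLOT
    data_time = PLCP + (payload_bytes + mac_overhead_bytes) * 8 / rate_mbps
    ack_time = PLCP + ack_bytes * 8 / rate_mbps
    cycle = DIFS + backoff + data_time + SIFS + ack_time
    return payload_bytes * 8 / cycle
```

For a 1500-byte payload this lands near 6 Mbps, well under the 11 Mbps PHY rate, which is the overhead that bounds how many UAVs a shared channel can serve.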

Keywords: UAV datalink, UAV formation flight datalink, UAV WLAN datalink application, UAV IEEE 802.11b datalink application

Procedia PDF Downloads 370
1750 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on the electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly required for the compromise between the overhead introduced by fault-tolerance techniques and system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating the configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered depending on the use-case application. Given the proposed threshold margin in our model, a failure occurs only when the difference between the erroneous output value and the expected output value is larger than this margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Soft Error Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when a 1% to 10% deviation from the correct output is considered acceptable, the number of counted failures is reduced by 41% to 59% compared with the number of failures counted by the conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case that a 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
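The ACMVF computation described above reduces to counting, over all injections, the outputs whose deviation from the golden value exceeds the user-chosen relative margin. A minimal sketch of that accounting (the FPGA fault-injection machinery itself is of course not modeled here):

```python
def acmvf(golden_outputs, seu_outputs, margin_frac=0.05):
    """Approximate-based Configuration Memory Vulnerability Factor:
    an SEU injection counts as a failure only when the observed output
    deviates from the golden value by more than the relative margin.
    margin_frac=0.0 reproduces the conventional (exact-match) factor."""
    failures = 0
    for golden, observed in zip(golden_outputs, seu_outputs):
        tolerance = abs(golden) * margin_frac
        if abs(observed - golden) > tolerance:
            failures += 1
    return failures / len(golden_outputs)
```

Comparing the margin-based factor against the margin-zero factor on the same injection log is exactly the kind of reduction (41% to 59% fewer counted failures) the experiments report.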

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 175
1749 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint, and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum-likelihood-type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC-type estimators, especially for panels with N almost as large as T.
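The stated equivalence, PC applied to a regularized form of the data covariance matrix, can be loosely illustrated as follows. The off-diagonal shrinkage rule and the power iteration below are a generic stand-in chosen for the sketch, not the authors' actual regularization or estimator:

```python
import math

def shrink_covariance(cov, alpha=0.5):
    """Regularize a covariance matrix by shrinking off-diagonal entries
    toward zero; a simple stand-in for the kind of regularization that
    a bounded cross-sectional dependence constraint induces."""
    n = len(cov)
    return [[cov[i][j] if i == j else (1.0 - alpha) * cov[i][j]
             for j in range(n)] for i in range(n)]

def leading_factor(cov, iters=200):
    """Leading principal component of a covariance matrix via power
    iteration (no matrix inversion needed, so N >= T is unproblematic)."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

The point of the sketch is only structural: the CnPC estimator, like plain PC, needs an eigendecomposition of a (regularized) covariance matrix rather than its inverse.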

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 134
1748 The Possibility of Solving a 3x3 Rubik’s Cube under 3 Seconds

Authors: Chung To Kong, Siu Ming Yiu

Abstract:

The Rubik's cube was invented in 1974. Since then, speedcubers all over the world have tried their best to break the world record again and again. The newest record is 3.47 seconds. There are many factors that affect the timing, including turns per second (TPS), the algorithm, finger tricks, and the hardware of the cube. In this paper, the lower bound of the cube-solving time is discussed using convex optimization. Extended analysis of the world records is used to understand how to improve the timing. With an understanding of each part of the solving steps, the paper suggests a list of speed improvement techniques. Based on the analysis of the world record, there is a high possibility that the 3-second mark will be broken soon.

Keywords: Rubik's Cube, speed, finger trick, optimization

Procedia PDF Downloads 187
1747 Technical and Economic Evaluation of Harmonic Mitigation from Offshore Wind Power Plants by Transmission Owners

Authors: A. Prajapati, K. L. Koo, F. Ghassemi, M. Mulimakwenda

Abstract:

In the UK, as the volume of non-linear loads connected to the transmission grid continues to rise steeply, harmonic distortion levels on the transmission network are becoming a serious concern for the network owners and system operators. This paper outlines the findings of a study conducted to verify the proposal that harmonic mitigation could be optimized and managed economically and effectively at the transmission network level by the Transmission Owner (TO) instead of by the individual polluter connected to the grid. Harmonic mitigation studies were conducted on selected regions of the transmission network in England for recently connected offshore wind power plants in order to strategize and optimize selected harmonic filter options. The results (filter volume and capacity) were then compared against the mitigation measures adopted by the individual connections. Estimation ratios were developed based on the actual installed and the optimal proposed filters. These estimation ratios were then used to derive harmonic filter requirements for future contracted connections. The study concluded that a saving of 37% in filter volume/capacity could be achieved if the TO were to manage the harmonic mitigation centrally instead of individual polluters installing their own mitigation solutions.

Keywords: C-type filter, harmonics, optimization, offshore wind farms, interconnectors, HVDC, renewable energy, transmission owner

Procedia PDF Downloads 147
1746 Development of Lipid Architectonics for Improving Efficacy and Ameliorating the Oral Bioavailability of Elvitegravir

Authors: Bushra Nabi, Saleha Rehman, Sanjula Baboota, Javed Ali

Abstract:

Aim: The objective of the research undertaken is the analytical (HPLC) method validation for the anti-HIV drug elvitegravir (EVG), together with forced degradation studies of the drug under different stress conditions to determine its stability. This is envisaged in order to determine a suitable technique for drug estimation, which would be employed in further research. Furthermore, the comparative pharmacokinetic profile of the drug from the lipid architectonics and from a drug suspension would be obtained after oral administration. Method: Lipid architectonics (LA) of EVG were formulated using the probe sonication technique and optimized using QbD (Box-Behnken design). For the estimation of the drug during further analysis, the HPLC method was validated on the parameters of linearity, precision, accuracy and robustness, and the limit of detection (LOD) and limit of quantification (LOQ) were determined. Furthermore, HPLC quantification of the forced degradation studies was carried out under different stress conditions (acid-induced, base-induced, oxidative, photolytic and thermal). For the pharmacokinetic (PK) study, albino Wistar rats weighing between 200 and 250 g were used. Different formulations were given by the oral route, and blood was collected at designated time intervals. A plasma concentration profile over time was plotted, from which the following parameters were determined:

Keywords: AIDS, Elvitegravir, HPLC, nanostructured lipid carriers, pharmacokinetics

Procedia PDF Downloads 125
1745 Fatigue Life Estimation of Tubular Joints - A Comparative Study

Authors: Jeron Maheswaran, Sudath C. Siriwardane

Abstract:

In fatigue analysis, the structural detail of the tubular joint has received great attention among engineers. DNV-RP-C203 covers this topic quite well for simple and clear joint cases. For complex joints and geometries, where a joint classification is not available and the validity ranges of the non-dimensional geometric parameters are limited, the challenges become a fact among engineers. Joint classification is important to carry through the fatigue analysis; the joint configurations are identified by the connectivity and the load distribution of the tubular joints. To overcome these problems to some extent, this paper compares the fatigue life of tubular joints in an offshore jacket according to the stress concentration factors (SCF) in DNV-RP-C203 and according to the finite element method using Abaqus/CAE. The paper presents the geometric details, material properties and considered load history of the jacket structure, and describes the global structural analysis and the identification of critical tubular joints for fatigue life estimation. Hence, fatigue life is determined based on the guidelines provided by design codes. As the next major step, fatigue analysis of the tubular joints is conducted using the finite element software Abaqus/CAE [4]. Finally, the obtained SCFs and fatigue lives are compared and their significance is discussed.
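The code-based life estimate combines a stress concentration factor with a two-parameter S-N curve: hot-spot stress is the SCF times the nominal stress, and cycles to failure follow log N = log a − m log S. The default constants below are of the order of the DNV-RP-C203 T-curve in air (log a = 12.48, m = 3) and are given for illustration only; for any real detail they must be taken from the code itself:

```python
import math

def fatigue_life_cycles(nominal_stress_mpa, scf, log_a=12.48, m=3.0):
    """S-N fatigue life estimate for a tubular joint hot spot:
    hot-spot stress = SCF x nominal stress range (MPa), then
    N = 10^(log a - m * log10(S_hot_spot))."""
    hot_spot = scf * nominal_stress_mpa
    return 10.0 ** (log_a - m * math.log10(hot_spot))
```

With m = 3, doubling either the SCF or the nominal stress range cuts the predicted life by a factor of eight, which is why an accurate SCF (parametric versus finite element) matters so much for the comparison made in the paper.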

Keywords: fatigue life, stress-concentration factor, finite element analysis, offshore jacket structure

Procedia PDF Downloads 431
1744 Comparison of Different Techniques to Estimate Surface Soil Moisture

Authors: S. Farid F. Mojtahedi, Ali Khosravi, Behnaz Naeimian, S. Adel A. Hosseini

Abstract:

Land subsidence is a gradual settling or sudden sinking of the land surface due to changes that take place underground. There are different causes of land subsidence, most notably groundwater overdraft and severe weather conditions. Subsidence of the land surface due to groundwater overdraft is caused by an increase in the intergranular pressure in unconsolidated aquifers, which results in a loss of buoyancy of the solid particles in the zone dewatered by the falling water table and, accordingly, compaction of the aquifer. On the other hand, exploitation of underground water may result in significant changes in the degree of saturation of the soil layers above the water table, increasing the effective stress in these layers and causing considerable soil settlement. This study focuses on the estimation of soil moisture at the surface using different methods. Specifically, different methods for the estimation of the moisture content at the soil surface, an important term in solving Richards' equation and estimating the soil moisture profile, are presented, and their results are discussed through comparison with field measurements obtained from the Yanco1 station in south-eastern Australia. Surface soil moisture is not easy to measure at the spatial scale of a catchment: due to the heterogeneity of soil type, land use, and topography, surface soil moisture may change considerably in space and time.

Keywords: artificial neural network, empirical method, remote sensing, surface soil moisture, unsaturated soil

Procedia PDF Downloads 347
1743 Estimation of Heritability and Repeatability for Pre-Weaning Body Weights of Domestic Rabbits Raised in Derived Savanna Zone of Nigeria

Authors: Adewale I. Adeolu, Vivian U. Oleforuh-Okoleh, Sylvester N. Ibe

Abstract:

Heritability and repeatability estimates are needed for the genetic evaluation of livestock populations and, consequently, for the purpose of upgrading or improvement. Pooled data on 604 progeny from three consecutive parities of purebred rabbit breeds (Chinchilla, Dutch and New Zealand White) raised in the Derived Savanna Zone of Nigeria were used to estimate heritability and repeatability for pre-weaning body weights between the 1st and 8th week of age. Traits studied include individual kit weight at birth (IKWB), 2nd week (IK2W), 4th week (IK4W), 6th week (IK6W) and 8th week (IK8W). Nested random effects analysis of (co)variances, as implemented in the Statistical Analysis System (SAS), was employed in the estimation. The respective heritability estimates from the sire component (h²s) and repeatability (R) estimates, as intra-class correlations of repeated measurements across the three parities, for IKWB, IK2W, IK4W, IK6W and IK8W are 0.59±0.24, 0.55±0.24, 0.93±0.31, 0.28±0.17, 0.64±0.26 and 0.12±0.14, 0.05±0.14, 0.58±0.02, 0.60±0.11, 0.20±0.14. Heritability and repeatability estimates (except R for IKWB and IK2W) are moderate to high. In conclusion, since pre-weaning body weights in the present study tended to be moderately to highly heritable and repeatable, improvement of rabbits raised in the derived savanna zone can be realized through genetic selection.
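For readers unfamiliar with these estimators, a minimal sketch of how they follow from variance components, assuming a sire-nested design and the textbook paternal half-sib and intra-class correlation formulas (not the authors' SAS code):

```python
def sire_heritability(var_sire, var_dam, var_within):
    """Paternal half-sib estimate: h^2 = 4 * sigma_s^2 / sigma_total^2.

    The factor of 4 arises because paternal half-sibs share, on average,
    one quarter of the additive genetic variance.
    """
    return 4.0 * var_sire / (var_sire + var_dam + var_within)

def repeatability(var_between_animal, var_within_animal):
    """Intra-class correlation of repeated records (e.g. across parities)."""
    return var_between_animal / (var_between_animal + var_within_animal)
```

The variance components themselves come from the nested random-effects analysis of (co)variances mentioned in the abstract.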

Keywords: heritability, nested design, parity, pooled data, repeatability

Procedia PDF Downloads 134
1742 A Framework for Security Risk Level Measures Using CVSS for Vulnerability Categories

Authors: Umesh Kumar Singh, Chanchala Joshi

Abstract:

With increasing dependency on IT infrastructure, the main objective of a system administrator is to maintain a stable and secure network, ensuring that the network is robust against malicious users such as attackers and intruders. Security risk management provides a way to manage the growing threats to infrastructures and systems. This paper proposes a framework for risk level estimation that uses the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) and the Common Vulnerability Scoring System (CVSS). The proposed framework measures the frequency of vulnerability exploitation, combines this measured frequency with the standard CVSS score, and estimates the security risk level, which helps in automated and reasonable security management. An equation for the temporal score calculation with respect to the availability of a remediation plan is derived, and the frequency of exploitation is then calculated from the determined temporal score. The frequency of exploitation, along with the CVSS score, is used to calculate the security risk level of the system. The proposed framework uses the CVSS vectors for risk level estimation and measures the security level of a specific network environment, which assists the system administrator in assessing security risks and making decisions related to their mitigation.
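The CVSS v2 temporal equation that the paper builds on multiplies the base score by three temporal factors. A sketch with the standard v2 multiplier values (the paper's own remediation-plan variant of the equation is not reproduced here):

```python
# Standard CVSS v2 temporal metric multipliers.
EXPLOITABILITY = {"unproven": 0.85, "proof-of-concept": 0.90,
                  "functional": 0.95, "high": 1.00, "not-defined": 1.00}
REMEDIATION_LEVEL = {"official-fix": 0.87, "temporary-fix": 0.90,
                     "workaround": 0.95, "unavailable": 1.00, "not-defined": 1.00}
REPORT_CONFIDENCE = {"unconfirmed": 0.90, "uncorroborated": 0.95,
                     "confirmed": 1.00, "not-defined": 1.00}

def temporal_score(base_score, e, rl, rc):
    """CVSS v2: TemporalScore = round_to_1_decimal(Base * E * RL * RC)."""
    score = (base_score * EXPLOITABILITY[e]
             * REMEDIATION_LEVEL[rl] * REPORT_CONFIDENCE[rc])
    return round(score, 1)
```

Note that the temporal score can only lower the base score, which is why the availability of a remediation plan (the RL factor) directly reduces the estimated risk level.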

Keywords: CVSS score, risk level, security measurement, vulnerability category

Procedia PDF Downloads 306
1741 On Multiobjective Optimization to Improve the Scalability of Fog Application Deployments Using Fogtorch

Authors: Suleiman Aliyu

Abstract:

Integrating IoT applications with Fog systems presents challenges in optimization due to diverse environments and conflicting objectives. This study explores achieving Pareto optimal deployments for Fog-based IoT systems to address growing QoS demands. We introduce Pareto optimality to balance competing performance metrics. Using the FogTorch optimization framework, we propose a hybrid approach (Backtracking search with branch and bound) for scalable IoT deployments. Our research highlights the advantages of Pareto optimality over single-objective methods and emphasizes the role of FogTorch in this context. Initial results show improvements in IoT deployment cost in Fog systems, promoting resource-efficient strategies.
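Pareto optimality here means that no deployment can improve one objective (say, cost) without worsening another (say, latency). A minimal non-dominated filter, independent of the FogTorch framework, can be sketched as:

```python
def dominates(a, b):
    """a dominates b (minimization) if a is <= in every objective
    and strictly < in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the deployments not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In a Fog deployment context each tuple would hold the objective values (e.g. cost, latency) of one candidate placement enumerated by the search.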

Keywords: pareto optimality, fog application deployment, resource allocation, internet of things

Procedia PDF Downloads 59
1740 Successful Immobilization of Alcohol Dehydrogenase on Natural and Synthetic Support and Its Reaction on Ethanol

Authors: Hiral D. Trivedi, Dinesh S. Patel, Sachin P. Shukla

Abstract:

We have immobilized alcohol dehydrogenase by entrapment on k-carrageenan, a natural polysaccharide obtained from seaweeds, and by covalent coupling on a copolymer of acrylamide and 2-hydroxyethyl methacrylate. We have optimized all the immobilization parameters and also carried out comparative studies of both supports. In the case of the acrylamide/2-hydroxyethyl methacrylate copolymer, we activated the amino and hydroxyl groups both individually and simultaneously using different activating agents and obtained some interesting results. The covalently bound enzyme was found to perform better under all tested conditions. The reaction on ethanol was carried out using these immobilized systems.

Keywords: alcohol dehydrogenase, acrylamide-co-2-hydroxyethyl methacrylate, ethanol, k-carrageenan

Procedia PDF Downloads 126
1739 Grid Computing for Multi-Objective Optimization Problems

Authors: Aouaouche Elmaouhab, Hassina Beggar

Abstract:

Solving multi-objective discrete optimization applications has always been limited by the resources of a single machine: by computing power or by memory, and most often both. To speed up the calculations, grid computing represents a primary solution for treating these applications through the parallelization of the resolution methods. In this work, we are interested in the study of methods for solving the multiple objective integer linear programming problem based on branch-and-bound, and in the study of grid computing technology. This study allowed us to propose an implementation of the method of Abbas et al. on the grid, reducing the execution time. To support our contribution, the main results are presented.
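The pruning idea at the heart of branch-and-bound, namely discarding a subtree whenever a relaxation bound cannot beat the incumbent, can be sketched on a single-objective 0/1 knapsack with a fractional (LP-relaxation) bound; the multi-objective, grid-parallel setting of the paper generalizes this same principle:

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for 0/1 knapsack; greedy fractional fill is the bound."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(pos, cap, val):
        # Upper bound: fill greedily by value density, last item fractionally.
        for j in order[pos:]:
            if weights[j] <= cap:
                cap -= weights[j]
                val += values[j]
            else:
                return val + values[j] * cap / weights[j]
        return val

    def branch(pos, cap, val):
        nonlocal best
        best = max(best, val)
        if pos == len(order) or bound(pos, cap, val) <= best:
            return  # this subtree cannot improve on the incumbent
        j = order[pos]
        if weights[j] <= cap:
            branch(pos + 1, cap - weights[j], val + values[j])  # take item j
        branch(pos + 1, cap, val)  # skip item j
    branch(0, capacity, 0)
    return best
```

On the grid, independent subtrees of the branching recursion are natural units of parallel work, which is what makes the parallelization effective.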

Keywords: multi-objective optimization, integer linear programming, grid computing, parallel computing

Procedia PDF Downloads 465
1738 Hybrid Localization Schemes for Wireless Sensor Networks

Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir

Abstract:

This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient barycentric coordinate based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed Received Signal Strength Indication (RSSI) based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the barycentric coordinate based PIT test together with both range-based (received signal strength) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme yields a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
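A barycentric PIT test decides triangle membership from the range of the barycentric coordinates of the query point; a plain-Python sketch of the geometric core (not the authors' energy-optimized sensor-network version):

```python
def in_triangle(p, a, b, c):
    """Barycentric PIT test: p lies in triangle abc iff all three
    barycentric coordinates fall in [0, 1]."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if den == 0:
        return False  # degenerate (collinear) triangle
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    l3 = 1.0 - l1 - l2
    return 0 <= l1 <= 1 and 0 <= l2 <= 1 and 0 <= l3 <= 1
```

In the localization setting the triangle vertices are anchor nodes, and an unknown node narrows its position by testing membership against many anchor triangles.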

Keywords: localization, trilateration, triangulation, wireless sensor networks

Procedia PDF Downloads 453
1737 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation

Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone

Abstract:

Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions about its future status, to the Distribution System Operator (DSO), producers and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the tools inside the application, tested on a real EDS for grid observability, Smart Recursive Load Flow (SRLF) calculation and state estimation of loads in MV feeders.
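Recursive load flow on a radial feeder is commonly implemented as a backward/forward sweep; a simplified single-line sketch in per-unit complex arithmetic, given as an illustration of the idea rather than the paper's SRLF implementation (which handles the full network topology):

```python
def backward_forward_sweep(v_source, lines, loads, tol=1e-8, max_iter=50):
    """Backward/forward sweep load flow for one radial feeder (per-unit).

    lines[k]: series impedance of the segment feeding bus k;
    loads[k]: complex power drawn at bus k (constant-power model).
    """
    n = len(loads)
    v = [v_source] * n
    for _ in range(max_iter):
        # Backward sweep: accumulate branch currents from feeder end to source.
        i_branch = [(s / vi).conjugate() for s, vi in zip(loads, v)]
        for k in range(n - 2, -1, -1):
            i_branch[k] += i_branch[k + 1]
        # Forward sweep: update bus voltages from the source downstream.
        v_new, v_up = [], v_source
        for k in range(n):
            v_up = v_up - lines[k] * i_branch[k]
            v_new.append(v_up)
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            return v_new
        v = v_new
    return v
```

Each call refreshes the voltage profile from the latest measurements, which is what makes the calculation "recursive" in the real-time DT loop.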

Keywords: digital twin, distributed energy resources, remote terminal units, supervisory control and data acquisition system, smart recursive load flow

Procedia PDF Downloads 85
1736 Boosting Profits and Enhancement of Environment through Adsorption of Methane during Upstream Processes

Authors: Sudipt Agarwal, Siddharth Verma, S. M. Iqbal, Hitik Kalra

Abstract:

Natural gas as a fuel has worked wonders, but on the other hand the ill effects of methane have been a great worry for professionals. Among all industries, the oil and gas industry is the largest source of methane emissions. Methane contaminates groundwater and, being a greenhouse gas, has devastating effects on the atmosphere. Methane remains in the atmosphere for a decade or two before breaking down into carbon dioxide; in that time it damages the atmosphere immensely, warming it about 72 times more than carbon dioxide over those two decades, and it keeps on harming the atmosphere as carbon dioxide afterwards. The property of a fluid to adhere to the surface of a solid, better known as adsorption, can be a great boon in minimizing the harm caused by methane. Adsorption of methane during upstream processes can protect the groundwater and atmosphere around the site, which can be hugely lucrative, since environmental degradation erodes profits and can lead to project cancellation. The paper deals with the reasons why casing and cementing are not able to prevent leakage, and suggests methods to adsorb methane during upstream processes, with a mathematical explanation based on a volumetric analysis of methane adsorption on the surface of activated carbon doped with copper oxides (which increases the adsorption by 54%). The paper explains in detail, through a cost estimation, how the proposed idea can be hugely beneficial not only to the environment but also to profits.
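The volumetric analysis mentioned above can be grounded in an adsorption isotherm; a sketch using the common Langmuir model, with the reported 54% copper-oxide enhancement applied as a simple multiplier (the parameter values in the test are placeholders, not the paper's data):

```python
def langmuir_uptake(pressure, q_max, b):
    """Langmuir isotherm: q = q_max * b * P / (1 + b * P).

    q_max is the monolayer capacity; b is the adsorption equilibrium constant.
    Uptake saturates toward q_max as pressure grows.
    """
    return q_max * b * pressure / (1.0 + b * pressure)

def doped_uptake(pressure, q_max, b, enhancement=0.54):
    """Uptake on CuO-doped activated carbon, modeled here as a flat
    (1 + enhancement) multiplier on the undoped baseline."""
    return (1.0 + enhancement) * langmuir_uptake(pressure, q_max, b)
```

Integrating the uptake over the captured gas volume is the basis of the cost estimation: every mole adsorbed is methane neither vented nor lost.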

Keywords: adsorption, casing, cementing, cost estimation, volumetric analysis

Procedia PDF Downloads 168
1735 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.
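A fourth-order kernel integrates to one but has a vanishing second moment, which is exactly what reduces the bias order in the AMISE. A sketch using the classical fourth-order Gaussian kernel as a stand-in for the paper's hybrid beta polynomial kernels:

```python
import math

def phi(u):
    """Standard normal density."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def k4_gaussian(u):
    """Fourth-order Gaussian kernel: K(u) = (3 - u^2)/2 * phi(u).

    Integrates to 1, and its second moment vanishes, the defining
    property of a fourth-order kernel.
    """
    return 0.5 * (3.0 - u * u) * phi(u)

def kde(x, data, h):
    """Kernel density estimate at x with bandwidth h."""
    return sum(k4_gaussian((x - xi) / h) for xi in data) / (len(data) * h)
```

Because the second moment is zero, the leading bias term of the estimate scales with h^4 instead of h^2, driving the smaller AMISE values that the paper uses to compare kernels.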

Keywords: AMISE, efficiency, fourth-order kernels, hybrid kernels, kernel density estimation

Procedia PDF Downloads 58
1734 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137

Authors: Abdulsalam M. Alhawsawi

Abstract:

Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons along the energy spectrum. Some photon energies are hard to obtain in lab settings, either because check sources are hard to acquire or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of about 30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies, for both a point source and a Marinelli beaker geometry. For the Marinelli beaker filled with water, the efficiency for the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5-simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
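The transfer step described, anchoring the simulated efficiency curve to a single measured Cs-137 point, amounts to a ratio scaling; a sketch in which the energies and efficiency values are made-up placeholders, not the paper's data:

```python
def efficiency_at(energy_kev, measured_eff_662, sim_eff):
    """Scale the measured 662 keV full-energy-peak efficiency to another
    energy using the ratio of simulated (e.g. MCNP5) efficiencies.

    sim_eff maps photon energy in keV to the simulated efficiency;
    it must contain the 662 keV anchor point.
    """
    return measured_eff_662 * sim_eff[energy_kev] / sim_eff[662]
```

For example, if the simulation gives efficiencies 0.020 at 662 keV and 0.050 at 59 keV, a measured 662 keV efficiency of 0.021 scales to 0.021 × 0.050 / 0.020 = 0.0525 at 59 keV. Any systematic error in the simulated shape of the curve propagates directly, which is what the reported 4-9% errors quantify.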

Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137

Procedia PDF Downloads 102
1733 Structural and Binding Studies of Peptidyl-tRNA Hydrolase from Pseudomonas aeruginosa Provide a Platform for the Structure Based Inhibitor Design against Peptidyl-tRNA Hydrolase

Authors: Sujata Sharma, Avinash Singh, Lovely Gautam, Pradeep Sharma, Mau Sinha, Asha Bhushan, Punit Kaur, Tej P. Singh

Abstract:

Peptidyl-tRNA hydrolase (Pth) is an essential bacterial enzyme that catalyzes the release of free tRNA and peptide moieties from peptidyl-tRNAs during stalling of protein synthesis. In order to design inhibitors of Pth from Pseudomonas aeruginosa (PaPth), we have determined the structures of PaPth in its native state and in bound states with two compounds, an amino acylate-tRNA analogue (AAtA) and 5-azacytidine (AZAC). The peptidyl-tRNA hydrolase gene from Pseudomonas aeruginosa was amplified by Phusion High-Fidelity DNA Polymerase using forward and reverse primers, respectively. The E. coli BL21 (λDE3) strain was used for expression of the recombinant peptidyl-tRNA hydrolase from Pseudomonas aeruginosa. The protein was purified using a Ni-NTA Superflow column. The crystallization experiments were carried out using the hanging-drop vapour diffusion method. The crystals diffracted to 1.50 Å resolution. The data were processed using HKL-2000. The polypeptide chain of PaPth consists of 194 amino acid residues, from Met1 to Ala194. The centrally located β-structure is surrounded by α-helices on all sides except the side that holds the entrance to the substrate binding site. The structures of the complexes of PaPth with AAtA and AZAC showed the ligands bound to PaPth in the substrate binding cleft, interacting extensively with protein atoms. The residues that formed intermolecular hydrogen bonds with the atoms of AAtA included Asn12, His22, Asn70, Gly113, Asn116, Ser148, and Glu161 of the symmetry-related molecule. The amino acids involved in hydrogen-bonded interactions in the case of AZAC included His22, Gly113, Asn116, and Ser148. As indicated by the fitting of the two ligands and the number of interactions made with protein atoms, AAtA appears to be more compatible with the structure of the substrate binding cleft. However, there is further scope to improve the stacking of the O-tyrosyl moiety, which is not yet ideally stacked.
These observations about the interactions between the protein and the ligands provide information about the mode of binding of the ligands and the nature and number of their interactions. This information may be useful for the design of tight-binding inhibitors of Pth enzymes.

Keywords: peptidyl tRNA hydrolase, Acinetobacter baumannii, Pth enzymes, O-tyrosyl

Procedia PDF Downloads 410