Search results for: offline estimation
1526 Human Absorbed Dose Estimation of a New In-111 Imaging Agent Based on Rat Data
Authors: H. Yousefnia, S. Zolghadri
Abstract:
The measurement of organ radiation exposure dose is one of the first and most important steps in developing a new radiopharmaceutical. In this study, dosimetric studies of a novel agent for SPECT imaging of bone metastasis, the 111In-1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraethylene phosphonic acid (111In-DOTMP) complex, were carried out to estimate the dose in human organs based on data derived from rats. The radiolabeled complex was prepared with high radiochemical purity under optimal conditions. Biodistribution of the complex was investigated in male Syrian rats at selected times after injection (2, 4, 24 and 48 h). The human absorbed dose was estimated from the rat data using the Radiation Absorbed Dose Assessment Resource (RADAR) method. The 111In-DOTMP complex was prepared with a high radiochemical purity of >99% (ITLC). The total-body effective absorbed dose for 111In-DOTMP was 0.061 mSv/MBq. This value is comparable to other clinically used 111In complexes. The results show that the doses to the critical organs are within the acceptable range for diagnostic nuclear medicine procedures. Overall, 111In-DOTMP has interesting characteristics and can be considered a viable agent for SPECT imaging of bone metastasis in the near future.
Keywords: In-111, DOTMP, Internal Dosimetry, RADAR
Procedia PDF Downloads 407
1525 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization
Authors: Yihao Kuang, Bowen Ding
Abstract:
With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. In recent years, it has therefore become a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm constrains each policy update to a neighborhood of the current policy, allowing substantial updates of the policy parameters while limiting their extent to maintain training stability. This characteristic enables PPO to converge to improved strategies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO has the advantage of offline learning, effectively utilizing historical experience data for training and enhancing sample utilization. This means that even with limited resources, PPO can train efficiently for reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge graph inference.
Keywords: reinforcement learning, PPO, knowledge inference, supervised learning
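The clipping mechanism that gives PPO its stability can be sketched as follows (a minimal NumPy illustration of the standard PPO clipped surrogate objective, not the paper's implementation; the function name and the clip-range default of 0.2 are our own choices):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from PPO (value to be maximized).

    ratio     : pi_new(a|s) / pi_old(a|s) per sample
    advantage : advantage estimates per sample
    eps       : clip range limiting how far the policy may move
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # The elementwise minimum bounds how much a single update can
    # improve the objective, which is what keeps PPO training stable.
    return np.mean(np.minimum(unclipped, clipped))
```

Taking the minimum of the clipped and unclipped terms is what allows PPO to take relatively large yet stable policy steps, the property the abstract credits for its fast early convergence.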
Procedia PDF Downloads 67
1524 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of ≳ 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters.
Calibration accuracy poses strong constraints on classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
Procedia PDF Downloads 80
1523 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves
Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira
Abstract:
Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help monitor large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of digital images of tomatoes, collected directly from the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for the identification of healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images using a color representation that highlights the injuries on the plant. The new images contain only green, red or black pixels, corresponding to healthy portions of the leaf, injured portions, or the image background, respectively. The system presented an accuracy of 97% in the detection and estimation of the level of damage on tomato leaves caused by late blight.
Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary
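The recombination-and-repaint step can be sketched as follows (a hypothetical decision rule: the thresholding of the two network outputs, the tie-break favoring "healthy", and the availability of a background mask are all our assumptions, as the abstract does not specify them):

```python
import numpy as np

def repaint(healthy_score, injured_score, background_mask, threshold=0.5):
    """Combine the two per-pixel classifier outputs into a repainted image:
    green = healthy leaf, red = injured leaf, black = background.
    Scores are (H, W) arrays of values in [0, 1]."""
    h, w = healthy_score.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)           # black by default
    healthy = (healthy_score >= threshold) & ~background_mask
    injured = (injured_score >= threshold) & ~background_mask & ~healthy
    out[healthy] = (0, 255, 0)                          # green pixels
    out[injured] = (255, 0, 0)                          # red pixels
    return out
```

Giving the "healthy" network priority when both scores exceed the threshold is one simple way to resolve conflicting votes; the original system may combine the two outputs differently.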
Procedia PDF Downloads 327
1522 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study
Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa
Abstract:
The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil more quickly than is possible through sampling and laboratory tests. It therefore has the potential to realize both cost savings and rapid, continuous assessment of soil properties. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which parameters should be included as inputs to improve the prediction. Neural networks discover relationships in the input data sets through the iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN) is one of the powerful neural network architectures utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions and calculated overburden pressures, was obtained from a large project in the United Arab Emirates. This data was used for the training and validation of the neural network. A comparison was made between the results obtained from the ANN approach and some common traditional correlations that predict Φ and E from CPT results, with respect to the actual results of the collected data. The results show that the ANN is a very powerful tool. Very good agreement was obtained between the results estimated by the ANN and the actual measured results, in comparison to other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models.
It is shown that including the friction ratio in the estimation of Φ and the fines content in the estimation of E considerably improves the prediction models.
Keywords: angle of internal friction, cone penetrating test, general regression neural network, soil modulus of elasticity
Procedia PDF Downloads 415
1521 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays
Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal
Abstract:
Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have become widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics are much greater in space than on Earth. Thus, developing fault-tolerant techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is essential for trading off the overhead introduced by fault tolerance techniques against system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculations, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin that depends on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin.
The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented on the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is tolerated, the number of counted failures is reduced by 41% to 59% compared with conventional vulnerability factor calculation. In other words, the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% when a 10% deviation in the output results is acceptable. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
Keywords: fault tolerance, FPGA, single event upset, approximate computing
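The ACMVF computation described above can be sketched in a few lines (a simplified scalar-output illustration; the use of a relative margin and the function name are our assumptions, since the paper's exact margin definition is not reproduced here):

```python
def acmvf(expected, observed_outputs, margin):
    """Approximate-based Configuration Memory Vulnerability Factor:
    the fraction of SEU injections whose output deviates from the
    expected value by more than the user-defined threshold margin
    (expressed here as a fraction of the expected value)."""
    failures = sum(
        abs(obs - expected) > margin * abs(expected)
        for obs in observed_outputs
    )
    return failures / len(observed_outputs)
```

With a 10% margin, an injection producing 120 where 100 was expected counts as a failure, while one producing 105 does not; a conventional vulnerability factor would count both.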
Procedia PDF Downloads 198
1520 Efficient Principal Components Estimation of Large Factor Models
Authors: Rachida Ouysse
Abstract:
This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because it treats the errors as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint, and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and thus is valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting
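For reference, the plain PC step that CnPC builds on can be sketched in a few lines of NumPy (an illustrative sketch of the standard principal-components factor estimator under the usual sqrt(T) normalization, not the paper's CnPC estimator, which would first regularize the covariance matrix):

```python
import numpy as np

def pc_factors(X, r):
    """Principal-components estimates for a (T x N) panel X with r factors:
    factor estimates are sqrt(T) times the top-r eigenvectors of X X',
    and loadings follow by cross-sectional regression."""
    T = X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)      # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:r]             # pick the top r
    F = np.sqrt(T) * eigvecs[:, idx]                # (T x r) factor estimates
    loadings = X.T @ F / T                          # (N x r) loadings
    return F, loadings
```

The CnPC estimator described in the abstract would apply this same eigen-decomposition step to a regularized form of the data covariance matrix rather than to X X' directly.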
Procedia PDF Downloads 150
1519 Technical and Economic Evaluation of Harmonic Mitigation from Offshore Wind Power Plants by Transmission Owners
Authors: A. Prajapati, K. L. Koo, F. Ghassemi, M. Mulimakwenda
Abstract:
In the UK, as the volume of non-linear loads connected to the transmission grid continues to rise steeply, harmonic distortion levels on the transmission network are becoming a serious concern for network owners and system operators. This paper outlines the findings of a study conducted to verify the proposal that harmonic mitigation can be managed economically and effectively at the transmission network level by the Transmission Owner (TO) instead of by the individual polluter connected to the grid. Harmonic mitigation studies were conducted on selected regions of the transmission network in England for recently connected offshore wind power plants in order to strategize and optimize selected harmonic filter options. The results (filter volume and capacity) were then compared against the mitigation measures adopted by the individual connections. Estimation ratios were developed based on the actual installed and the optimal proposed filters, and were then used to derive harmonic filter requirements for future contracted connections. The study concluded that a saving of 37% in filter volume/capacity could be achieved if the TO centrally manages the harmonic mitigation instead of each individual polluter installing its own mitigation solution.
Keywords: C-type filter, harmonics, optimization, offshore wind farms, interconnectors, HVDC, renewable energy, transmission owner
Procedia PDF Downloads 157
1518 Development of Lipid Architectonics for Improving Efficacy and Ameliorating the Oral Bioavailability of Elvitegravir
Authors: Bushra Nabi, Saleha Rehman, Sanjula Baboota, Javed Ali
Abstract:
Aim: The objective of the research undertaken is the validation of an analytical (HPLC) method for an anti-HIV drug, Elvitegravir (EVG), together with forced degradation studies of the drug under different stress conditions to determine its stability. This is envisaged to identify a suitable technique for drug estimation to be employed in further research. Furthermore, a comparative pharmacokinetic profile of the drug from the lipid architectonics and from a drug suspension would be obtained after oral administration. Method: Lipid architectonics (LA) of EVG were formulated using the probe sonication technique and optimized using QbD (Box-Behnken design). For the estimation of the drug during further analysis, the HPLC method has been validated for linearity, precision, accuracy and robustness, and the Limit of Detection (LOD) and Limit of Quantification (LOQ) have been determined. Furthermore, HPLC quantification of forced degradation was carried out under different stress conditions (acid-induced, base-induced, oxidative, photolytic and thermal). For the pharmacokinetic (PK) study, albino Wistar rats weighing 200-250 g were used. The different formulations were given by the oral route, and blood was collected at designated time intervals. A plasma concentration-time profile was plotted, from which the pharmacokinetic parameters were determined.
Keywords: AIDS, Elvitegravir, HPLC, nanostructured lipid carriers, pharmacokinetics
Procedia PDF Downloads 138
1517 Fatigue Life Estimation of Tubular Joints - A Comparative Study
Authors: Jeron Maheswaran, Sudath C. Siriwardane
Abstract:
In fatigue analysis, the structural detail of the tubular joint has attracted great attention among engineers. DNV-RP-C203 covers this topic well for simple, clear joint cases. For complex joints and geometries, where joint classification is not available and the non-dimensional geometric parameters fall outside their validity ranges, engineers face real challenges. Joint classification, which is determined by the connectivity and load distribution of the tubular joints, is important throughout the fatigue analysis. To overcome these problems to some extent, this paper compares the fatigue life of tubular joints in an offshore jacket obtained from the stress concentration factors (SCFs) in DNV-RP-C203 with that obtained from the finite element method implemented in Abaqus/CAE. The paper presents the geometric details, material properties and considered load history of the jacket structure, and describes the global structural analysis and the identification of critical tubular joints for fatigue life estimation. Fatigue life is first determined based on the guidelines provided by the design codes. As the next major step, fatigue analysis of the tubular joints is conducted using the finite element package Abaqus/CAE [4]. Finally, the obtained SCFs and fatigue lives are compared and their significance is discussed.
Keywords: fatigue life, stress-concentration factor, finite element analysis, offshore jacket structure
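The S-N approach underlying such a fatigue check can be sketched as follows (an illustrative one-slope S-N curve combined with Miner's rule; the curve constants below are placeholders rather than those of any specific DNV-RP-C203 curve, and the function names are our own):

```python
import math

def fatigue_life_cycles(nominal_stress_range, scf, log_a=12.48, m=3.0):
    """Allowable cycles N from a one-slope S-N curve,
    log10(N) = log_a - m * log10(S), where the hot-spot stress range
    S = SCF * nominal stress range (MPa)."""
    hot_spot = scf * nominal_stress_range
    return 10 ** (log_a - m * math.log10(hot_spot))

def miner_damage(stress_ranges, counts, scf, **curve):
    """Palmgren-Miner cumulative damage D = sum(n_i / N_i);
    fatigue failure is predicted when D >= 1."""
    return sum(n / fatigue_life_cycles(s, scf, **curve)
               for s, n in zip(stress_ranges, counts))
```

This is the point where the two approaches compared in the paper diverge: the parametric route reads the SCF from DNV-RP-C203 formulas, while the finite element route extracts the hot-spot stress directly from the Abaqus/CAE model.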
Procedia PDF Downloads 453
1516 Comparison of Different Techniques to Estimate Surface Soil Moisture
Authors: S. Farid F. Mojtahedi, Ali Khosravi, Behnaz Naeimian, S. Adel A. Hosseini
Abstract:
Land subsidence is a gradual settling or sudden sinking of the land surface caused by changes that take place underground. There are different causes of land subsidence, most notably groundwater overdraft and severe weather conditions. Subsidence of the land surface due to groundwater overdraft is caused by an increase in the intergranular pressure in unconsolidated aquifers, which results in a loss of buoyancy of solid particles in the zone dewatered by the falling water table and, accordingly, compaction of the aquifer. On the other hand, exploitation of underground water may result in significant changes in the degree of saturation of soil layers above the water table, increasing the effective stress in these layers and causing considerable soil settlement. This study focuses on the estimation of soil moisture at the surface using different methods. Specifically, different methods for the estimation of moisture content at the soil surface, an important term for solving Richards' equation and estimating the soil moisture profile, are presented, and their results are discussed through comparison with field measurements obtained from the Yanco1 station in south-eastern Australia. Surface soil moisture is not easy to measure at the spatial scale of a catchment: due to the heterogeneity of soil type, land use and topography, it may change considerably in space and time.
Keywords: artificial neural network, empirical method, remote sensing, surface soil moisture, unsaturated soil
Procedia PDF Downloads 359
1515 Estimation of Heritability and Repeatability for Pre-Weaning Body Weights of Domestic Rabbits Raised in Derived Savanna Zone of Nigeria
Authors: Adewale I. Adeolu, Vivian U. Oleforuh-Okoleh, Sylvester N. Ibe
Abstract:
Heritability and repeatability estimates are needed for the genetic evaluation of livestock populations and, consequently, for the purpose of upgrading or improvement. Pooled data on 604 progeny from three consecutive parities of purebred rabbit breeds (Chinchilla, Dutch and New Zealand White) raised in the Derived Savanna Zone of Nigeria were used to estimate heritability and repeatability for pre-weaning body weights between the 1st and 8th week of age. The traits studied were individual kit weight at birth (IKWB) and at the 2nd week (IK2W), 4th week (IK4W), 6th week (IK6W) and 8th week (IK8W). Nested random effects analysis of (co)variances, as described by the Statistical Analysis System (SAS), was employed in the estimation. The respective heritability estimates from the sire component (h2s) and the repeatability (R) estimates, as intra-class correlations of repeated measurements from the three parities, for IKWB, IK2W, IK4W, IK6W and IK8W are 0.59±0.24, 0.55±0.24, 0.93±0.31, 0.28±0.17 and 0.64±0.26, and 0.12±0.14, 0.05±0.14, 0.58±0.02, 0.60±0.11 and 0.20±0.14. The heritability and repeatability estimates (except R for IKWB and IK2W) are moderate to high. In conclusion, since the pre-weaning body weights in the present study tended to be moderately to highly heritable and repeatable, improvement of rabbits raised in the Derived Savanna Zone can be realized through genetic selection criteria.
Keywords: heritability, nested design, parity, pooled data, repeatability
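The two quantities estimated above can be expressed compactly (a textbook sketch of sire-component heritability and the intra-class correlation; the variance components would come from the nested ANOVA, and the numeric example is invented for illustration):

```python
def heritability_sire(sigma2_sire, sigma2_phenotypic):
    """Paternal half-sib heritability: the sire variance component
    captures 1/4 of the additive genetic variance, so
    h^2 = 4 * sigma2_sire / sigma2_phenotypic."""
    return 4.0 * sigma2_sire / sigma2_phenotypic

def repeatability(sigma2_between, sigma2_within):
    """Intra-class correlation of repeated records on the same animal:
    R = between-animal variance / total variance."""
    return sigma2_between / (sigma2_between + sigma2_within)
```

For example, a sire variance of 0.05 against a phenotypic variance of 0.4 gives h² = 0.5, a moderate-to-high estimate of the kind reported above.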
Procedia PDF Downloads 147
1514 Understanding Project Failures in Construction: The Critical Impact of Financial Capacity
Authors: Nnadi Ezekiel Oluwaseun Ejiofor
Abstract:
This research investigates the effects of poor cost estimation, material cost variations and payment punctuality on the financial health and execution of construction projects in Nigeria. To achieve the objectives of the study, a quantitative research approach was employed, and data were gathered through an online survey of 74 construction industry professionals, comprising quantity surveyors, contractors and other practitioners. The survey sought input on cost estimation errors, price fluctuations and payment delays, among other factors. The responses were analyzed using a five-point Likert scale and the Relative Importance Index (RII). The findings demonstrated that errors in cost estimation in the Bill of Quantities (BOQ) have a strongly negative impact on the reputation and image of project participants. The greatest effect was on the likelihood of contractors obtaining future work (mean value = 3.42), followed by the likelihood of quantity surveyors obtaining new commissions (mean value = 3.40). Cost underestimation exposes participants to serious risks, the gravest being a shortage of funds to complete construction and the resulting threat of bankruptcy (mean value = 3.78). There was also considerable financial damage as a result of cost underestimation, with contractors suffering the worst loss of profit (mean value = 3.88). Every expense carries its own risk and uncertainty, and pressure on the cost of materials and every other expense attributed to the building and completion of a structure adds risk to a project's performance figures. The greatest weight (mean importance score = 4.92) was attributed to market inflation in building materials, while the second greatest (mean importance score = 4.76) was due to increased transportation charges.
In addition, delays in payment arising from client issues such as poor availability of funds (RII = 0.71) and from contractual issues such as disagreements over the valuation of work done (RII = 0.72) were found to lead to project delays and additional costs. The results affirm the importance of proper cost estimation for the health of organizational finances, for project risk, and for completion within set time limits. The study proposes improving costing methods, fostering better communication with stakeholders, and managing delays through contractual and financial controls. This study enhances the existing literature on construction project management by suggesting ways to deal with adverse cost inaccuracies and with material availability problems caused by payment delays, which, if addressed, would greatly improve the economic performance of the construction business.
Keywords: cost estimation, construction project management, material price fluctuations, payment delays, financial impact
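The Relative Importance Index used in the analysis follows the standard formula RII = ΣW / (A × N), where W are the Likert weights given by the N respondents and A is the highest possible weight. It can be sketched as (the example responses below are invented for illustration):

```python
def relative_importance_index(responses, highest_weight=5):
    """RII = sum(W) / (A * N) on a five-point Likert scale:
    responses is the list of weights (1..5) chosen by the N respondents,
    and highest_weight is A, the maximum possible weight."""
    return sum(responses) / (highest_weight * len(responses))
```

An RII close to 1.0 marks a factor that nearly all respondents rated as highly important, which is how values such as 0.71 and 0.72 above rank payment-delay causes against each other.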
Procedia PDF Downloads 8
1513 A Framework for Security Risk Level Measures Using CVSS for Vulnerability Categories
Authors: Umesh Kumar Singh, Chanchala Joshi
Abstract:
With increasing dependency on IT infrastructure, the main objective of a system administrator is to maintain a stable and secure network, ensuring that the network is robust against malicious users such as attackers and intruders. Security risk management provides a way to manage the growing threats to infrastructures and systems. This paper proposes a framework for risk level estimation which uses the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) and the Common Vulnerability Scoring System (CVSS). The proposed framework measures the frequency of vulnerability exploitation, combines this measured frequency with the standard CVSS score, and estimates the security risk level, which supports automated and well-founded security management. An equation for the temporal score calculation with respect to the availability of a remediation plan is derived, and the frequency of exploitation is then calculated from the determined temporal score. The frequency of exploitation, along with the CVSS score, is used to calculate the security risk level of the system. The proposed framework uses the CVSS vectors for risk level estimation and measures the security level of a specific network environment, assisting the system administrator in assessing security risks and making decisions about their mitigation.
Keywords: CVSS score, risk level, security measurement, vulnerability category
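One simple way such a framework might combine its two inputs can be sketched as follows (an illustrative stand-in only: the paper derives its own temporal-score equation, which is not reproduced here, and the weighting scheme and function name below are entirely our assumptions):

```python
def risk_level(cvss_base, exploit_frequency, max_frequency):
    """Combine a CVSS base score (0-10) with an observed exploitation
    frequency into a risk level on the same 0-10 scale.
    This normalized weighting is a hypothetical example, not the
    formula derived in the paper."""
    freq_factor = exploit_frequency / max_frequency   # normalize to [0, 1]
    # A vulnerability never seen exploited still keeps half its base
    # score; one exploited at the maximum rate keeps the full score.
    return cvss_base * (0.5 + 0.5 * freq_factor)
```

The point of any such combination is the same as in the abstract: two vulnerabilities with identical CVSS scores can pose very different risks once their observed exploitation frequencies differ.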
Procedia PDF Downloads 321
1512 Hybrid Localization Schemes for Wireless Sensor Networks
Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir
Abstract:
This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient Barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed Received Signal Strength Indication (RSSI) based distance estimation in DV-Hop, a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the Barycentric-coordinate-based PIT test and both range-based (RSSI) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme yields a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements.
Keywords: Localization, Trilateration, Triangulation, Wireless Sensor Networks
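The barycentric PIT test at the heart of these improvements can be sketched as a standard 2-D point-in-triangle check (in the actual scheme the triangle vertices would be anchor positions and the point a candidate node location; this is the textbook formulation, not the paper's energy-optimized variant):

```python
def point_in_triangle(p, a, b, c):
    """Barycentric point-in-triangulation test: express p in the basis
    of triangle (a, b, c); p lies inside (or on the boundary) iff all
    three barycentric coordinates are non-negative. Points are (x, y)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if det == 0:                      # degenerate (collinear) triangle
        return False
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    l3 = 1.0 - l1 - l2
    return l1 >= 0 and l2 >= 0 and l3 >= 0
```

Compared with APIT's neighbor-exchange heuristic, computing the three coordinates directly requires no extra message exchanges, which is the source of the energy saving the abstract claims.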
Procedia PDF Downloads 468
1511 Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI
Authors: Zahra Alipour, Amirreza Moheb Afzali
Abstract:
In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training runs, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience.
Keywords: YOLOv8, MediaPipe, finger tracking, joint estimation, human-computer interaction (HCI)
Procedia PDF Downloads 5
1510 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation
Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone
Abstract:
Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions about the future status, to the Distribution System Operator (DSO), producers and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the tools developed inside the application, tested on a real EDS, for grid observability, Smart Recursive Load Flow (SRLF) calculation and state estimation of loads in MV feeders.
Keywords: digital twin, distributed energy resources, remote terminal units, supervisory control and data acquisition system, smart recursive load flow
Procedia PDF Downloads 110
1509 Performance Evaluation of Discrete Fourier Transform Algorithm Based PMU for Wide Area Measurement System
Authors: Alpesh Adeshara, Rajendrasinh Jadeja, Praghnesh Bhatt
Abstract:
Implementation of advanced technologies requires sophisticated instruments that deal with the operation, control, restoration, and protection of a rapidly growing power system network under normal and abnormal conditions. Phasor Measurement Units (PMUs) are now widely applied in the real-time operation, monitoring, control, and analysis of power system networks, as they eliminate various limitations of the Supervisory Control and Data Acquisition (SCADA) systems conventionally used in power systems. PMU data are rapidly gaining importance for both online and offline analysis. The Wide Area Measurement System (WAMS) has been developed as a new technology through the use of multiple PMUs in a power system. The present paper proposes a MATLAB-based PMU model using the Discrete Fourier Transform (DFT) algorithm and evaluates its operation under different contingencies. A two-bus system with PMUs forming a WAMS network is presented as a case study.
Keywords: GPS (global positioning system), PMU (phasor measurement unit), WAMS (wide area measurement system), DFT, PDC
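As a hedged sketch of the DFT-based phasor estimation a PMU performs (not the authors' MATLAB model), the snippet below extracts the fundamental phasor from exactly one cycle of samples; the amplitude, phase, and sampling rate are illustrative.

```python
import math

def dft_phasor(samples):
    """Estimate the fundamental phasor (RMS magnitude, phase in radians)
    from one full cycle of evenly spaced samples of x(t) = A*cos(wt + phi)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(samples))
    mag = math.sqrt(re * re + im * im) * 2 / n / math.sqrt(2)  # peak -> RMS
    ang = math.atan2(-im, re)  # phase of the cosine reference
    return mag, ang

# One cycle of a cosine: amplitude 100, phase 0.3 rad, 32 samples per cycle.
N = 32
x = [100 * math.cos(2 * math.pi * k / N + 0.3) for k in range(N)]
mag, ang = dft_phasor(x)
```

For a full-cycle window the DFT orthogonality is exact, so the recovered RMS magnitude and phase match the generating signal to floating-point precision; off-nominal frequencies introduce the leakage errors the paper evaluates under contingencies.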
Procedia PDF Downloads 496
1508 Boosting Profits and Enhancement of Environment through Adsorption of Methane during Upstream Processes
Authors: Sudipt Agarwal, Siddharth Verma, S. M. Iqbal, Hitik Kalra
Abstract:
Natural gas as a fuel has created wonders, but on the contrary, the ill effects of methane have been a great worry for professionals. The oil and gas industry is the largest source of methane emissions among all industries. Methane contaminates groundwater and, being a greenhouse gas, has devastating effects on the atmosphere too. Methane remains in the atmosphere for a decade or two and later breaks down into carbon dioxide, warming the atmosphere about 72 times more than carbon dioxide over those two decades and continuing to cause harm as carbon dioxide afterward. The property of a fluid to adhere to the surface of a solid, better known as adsorption, can be a great boon in minimizing this harm. Adsorption of methane during upstream processes can protect the groundwater and atmosphere around the site, which can also be hugely lucrative, since profits are otherwise reduced by environmental degradation leading to project cancellation. The paper deals with the reasons why casing and cementing are not able to prevent leakage, and suggests methods to adsorb methane during upstream processes, with a mathematical explanation using volumetric analysis of methane adsorption on the surface of activated carbon doped with copper oxides (which increases the adsorption by 54%). The paper explains in detail, through a cost estimation, how the proposed idea can be hugely beneficial not only to the environment but also to the profits earned.
Keywords: adsorption, casing, cementing, cost estimation, volumetric analysis
Procedia PDF Downloads 191
1507 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation
Authors: Benson Ade Eniola Afere
Abstract:
This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: the asymptotic mean integrated squared error (AMISE) and kernel efficiency. Using both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels compared to their classical counterparts, and this trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics on both simulated and real-world data, the study furnishes compelling evidence in favour of the proposed hybrid beta polynomial kernels: the discernible enhancement in performance, indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels are better suited to statistical analysis tasks than traditional kernels.
Keywords: AMISE, efficiency, fourth-order kernels, hybrid kernels, kernel density estimation
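The hybrid beta polynomial kernels themselves are not reproduced in the abstract; as a generic illustration of what "fourth-order" buys, the sketch below uses the well-known fourth-order Gaussian kernel K(u) = 0.5·(3 - u²)·φ(u), whose vanishing second moment reduces the leading bias of the density estimate from O(h²) to O(h⁴) (at the cost of the estimate possibly dipping negative).

```python
import math

def phi(u):
    """Standard normal density."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kernel4(u):
    """Fourth-order Gaussian kernel: integrates to 1, zero second moment."""
    return 0.5 * (3 - u * u) * phi(u)

def kde(x, data, h):
    """Kernel density estimate at point x with bandwidth h."""
    return sum(kernel4((x - xi) / h) for xi in data) / (len(data) * h)

# Numerical check of the kernel's moment conditions on a fine grid.
grid = [i * 0.001 for i in range(-8000, 8001)]
mass = sum(kernel4(u) for u in grid) * 0.001
second_moment = sum(u * u * kernel4(u) for u in grid) * 0.001
```

The moment check confirms the order conditions numerically: unit mass and a (near-)zero second moment, which is exactly the property the AMISE comparison in the paper exploits.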
Procedia PDF Downloads 70
1506 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137
Authors: Abdulsalam M. Alhawsawi
Abstract:
Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit photons at a variety of energies, from low to high, along the energy spectrum. Some photon energies are hard to reproduce in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies, for both a point source and a Marinelli beaker geometry. For the Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error relative to the MCNP5-simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137
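The anchoring step described above can be sketched as follows: rescale the simulated efficiency curve so that it passes through the measured Cs-137 point at 662 keV, which preserves the simulated energy dependence while pinning the absolute level to a measurement. The efficiency values below are hypothetical placeholders, not numbers from the paper.

```python
def scaled_efficiencies(sim_eff, measured_662_eff):
    """Rescale MCNP-simulated efficiencies by the measured/simulated
    ratio at the 662 keV Cs-137 anchor point.
    sim_eff: {energy_keV: simulated full-energy-peak efficiency}."""
    factor = measured_662_eff / sim_eff[662]
    return {energy: eff * factor for energy, eff in sim_eff.items()}

# Hypothetical simulated efficiencies (energy in keV -> efficiency).
sim = {59: 0.040, 662: 0.012, 1170: 0.0075, 1330: 0.0068}
est = scaled_efficiencies(sim, measured_662_eff=0.011)
```

By construction the scaled curve reproduces the measured 662 keV efficiency exactly, and the ratios between energies stay those of the simulation.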
Procedia PDF Downloads 117
1505 New Media and Deliberative Democracy in Malaysia
Authors: Rosyidah Muhamad
Abstract:
This article seeks to assess the democratic implications of new media in Malaysia through three key elements of deliberative democracy: information access, rational-critical deliberation, and mechanisms of vertical accountability. The article suggests that the Internet is expanding political opportunity and has contributed to a more diverse discourse. The outcome depends on how users use it, for democratic or non-democratic ends. The Internet has been a key instrument in exposing human rights abuses and corruption, organizing protests, and mobilizing voters during election campaigns. It therefore pushes for transparency and accountability, and thus contributes to the rise of deliberative democracy in Malaysia. While there are some elements of an emerging deliberative politics, it is also clear that Malaysian online political discourse remains a moderate form of discourse, as the sphere increasingly exists as a chaotic and diversified online space. Yet the online sphere still allows citizens to discuss public affairs, and when public opinion is strong enough, it can influence public policies to ensure that they reflect the public interest. This suggests an increased space for negotiation and contestation compared with the previously muzzled offline situation, and a big step in the progress of democracy in Malaysia.
Keywords: new media, democratization, deliberative democracy, Malaysian politics
Procedia PDF Downloads 300
1504 Impact of the Pandemic on China's Digital Creative Industries: Mechanisms and Manifestations
Authors: Li Qiaoming
Abstract:
The outbreak of Coronavirus disease 2019 (COVID-19) in early 2020 brought new opportunities to the development of the digital creative industry in China. Based on the actual state of development of the digital creative industry in China, the mechanism of action of the pandemic on this industry was analyzed from both the supply and demand sides, after sorting out the industry's concept, connotation, and related theories. Specifically, the demand side experienced changes due to shifts in residents' consumption habits, the sharp increase in gross domestic time (GDT), the satisfaction of users' psychological needs, the search for substitutes for offline consumption, and other factors. On the supply side, the mechanism of action of the pandemic was analyzed through the production link, supply subjects, product characteristics, and the transmission link. The manifestations of the pandemic's impact are then discussed in detail along the dimensions of time and space. Finally, the paper discusses the main development focuses of the digital creative industry in the post-pandemic era from the perspectives of the government, industries, and enterprises.
Keywords: COVID-19, demand and supply relationship, digital creative industries, industry shocks
Procedia PDF Downloads 152
1503 Process Optimization of Mechanochemical Synthesis for the Production of 4,4 Bipyridine Based MOFS using Twin Screw Extrusion and Multivariate Analysis
Authors: Ahmed Metawea, Rodrigo Soto, Majeida Kharejesh, Gavin Walker, Ahmad B. Albadarin
Abstract:
In this study, towards a green approach, we investigated the effect of the operating conditions of solvent-assisted twin-screw extrusion (TSE) on the production of a 4,4′-bipyridine-based one-dimensional (1D) coordination polymer, using cobalt nitrate as the metal precursor at a 1:1 molar ratio. Different operating parameters, such as solvent percentage, screw speed, and feeding rate, were considered. The resultant product was characterized using offline methods, namely powder X-ray diffraction (PXRD), Raman spectroscopy, and scanning electron microscopy (SEM), in order to investigate product purity and surface morphology. A lower feeding rate increased the product's quality, as more residence time was provided for the reaction to take place. The most important influencing factor was the amount of liquid added: the addition of water facilitated the reaction inside the TSE by increasing the reactive surface area of the particles.
Keywords: MOFs, multivariate analysis, process optimization, chemometrics
Procedia PDF Downloads 159
1502 Three-Dimensional CFD Modeling of Flow Field and Scouring around Bridge Piers
Authors: P. Deepak Kumar, P. R. Maiti
Abstract:
In recent years, sediment scour near bridge piers and abutments has been a serious problem causing nationwide concern, because it has resulted in more bridge failures than any other cause. Scour is the formation of a scour hole around a structure mounted on, or embedded in, an erodible channel bed, due to the erosion of soil by flowing water. The formation of the scour hole depends on the shape and size of the pier, the depth of flow, the angle of attack of the flow, and the sediment characteristics. The flow characteristics around these structures change because the man-made obstruction in the natural flow path alters the kinetic energy of the flow around them. Excessive scour affects the stability of the structure's foundation through the removal of bed material. Accurate estimation of scour depth around a bridge pier is very difficult, so the foundations of bridge piers have to be taken deeper to provide the anchorage length required for stability. In this study, simulations using a three-dimensional Computational Fluid Dynamics (CFD) model were conducted to examine the mechanism of scour around a cylindrical pier, and the flow characteristics around the structure are presented for different flow conditions. The mechanism of the scouring phenomenon, the formation of vortices, and their consequent effects are discussed for a straight channel, and efforts were made to estimate the scour depth around bridge piers under different flow conditions.
Keywords: bridge pier, computational fluid dynamics, multigrid, pier shape, scour
Procedia PDF Downloads 296
1501 Poverty Dynamics in Thailand: Evidence from Household Panel Data
Authors: Nattabhorn Leamcharaskul
Abstract:
This study aims to examine the determining factors of poverty dynamics in Thailand using panel data on 3,567 households over 2007-2017. Four estimation techniques are employed to analyze the situation of poverty across households and time periods: the multinomial logit model, the sequential logit model, the quantile regression model, and the difference-in-differences model. Households are categorized based on their experiences into five groups, namely chronically poor, falling into poverty, re-entering poverty, exiting poverty, and never poor households. Estimation results emphasize the effects of demographic and socioeconomic factors, as well as unexpected events, on the economic status of a household. It is found that remittances have a positive impact on a household's economic status, in that they tend to lower the probability of falling into or remaining trapped in poverty, while they tend to increase the probability of exiting poverty. In addition, receiving a secondary source of household income not only raises the probability of being a never poor household but also significantly increases household income per capita for the chronically poor and falling-into-poverty households. Public work programs are recommended as an important tool to relieve household financial burden and uncertainty, and thus increase households' chances of escaping poverty.
Keywords: difference in differences, dynamics, multinomial logit model, panel data, poverty, quantile regression, remittance, sequential logit model, Thailand, transfer
Procedia PDF Downloads 112
1500 Control of an SIR Model for Basic Reproduction Number Regulation
Authors: Enrique Barbieri
Abstract:
The basic disease-spread model described by three states denoting the susceptible (S), infectious (I), and removed (recovered and deceased) (R) sub-groups of the total population N, or SIR model, has been considered. Heuristic mitigating action profiles of the pharmaceutical and non-pharmaceutical types may be developed in a control design setting for the purpose of reducing the transmission rate or improving the recovery rate parameters in the model. Even though the transmission and recovery rates are not control inputs in the traditional sense, a linear observer and feedback controller can be tuned to generate an asymptotic estimate of the transmission rate for a linearized, discrete-time version of the SIR model. Then, a set of mitigating actions is suggested to steer the basic reproduction number toward unity, in which case the disease does not spread, and the infected population state does not suffer from multiple waves. The special case of piecewise constant transmission rate is described and applied to a seventh-order SEIQRDP model, which segments the population into four additional states. The offline simulations in discrete time may be used to produce heuristic policies implemented by public health and government organizations.
Keywords: control of SIR, observer, SEIQRDP, disease spread
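The observer and controller designs are not reproduced in the abstract; a minimal forward-Euler simulation of the open-loop SIR dynamics, however, illustrates the role of the basic reproduction number R0 = beta/gamma that the control scheme regulates: with R0 < 1 the infectious fraction decays, with R0 > 1 an epidemic wave occurs. The parameter values below are illustrative only.

```python
def simulate_sir(beta, gamma, i0, steps, dt=0.1):
    """Discrete-time (forward Euler) SIR in population fractions,
    s + i + r = 1. beta: transmission rate, gamma: recovery rate."""
    s, i, r = 1.0 - i0, i0, 0.0
    path = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i * dt   # S -> I flow
        new_rec = gamma * i * dt      # I -> R flow
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        path.append((s, i, r))
    return path

low = simulate_sir(beta=0.1, gamma=0.2, i0=0.01, steps=500)   # R0 = 0.5
high = simulate_sir(beta=0.4, gamma=0.2, i0=0.01, steps=500)  # R0 = 2.0
```

Steering beta (or gamma) so that R0 approaches unity, as the paper proposes, is exactly what suppresses the wave visible in the R0 = 2 trajectory.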
Procedia PDF Downloads 111
1499 Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical and efficient approach is suggested for estimating the instantaneous bounds of high-speed objects in C-OTDR monitoring systems. In the case of super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of estimation lag. In other words, reliable estimation of a monitored object's coordinates requires some time for the C-OTDR system to collect observations, and only once the required sample volume has been collected can the final decision be issued. But this is contrary to the requirements of many real applications. For example, in rail traffic management systems, we need real-time localization data for dynamic objects. The way to solve this problem is to use a set of statistically independent parameters of the C-OTDR signals to obtain the most reliable solution in real time. Parameters of this type can be called 'signaling parameters' (SPs). For each C-OTDR channel, there are several SPs which carry information about the instantaneous localization of dynamic objects. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable, while an SP that is very stable is, as a rule, insensitive. This report describes a method for co-processing SPs that is designed to obtain the most effective localization estimates of dynamic objects within the C-OTDR monitoring system framework.
Keywords: C-OTDR system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems
Procedia PDF Downloads 470
1498 Developing Allometric Equations for More Accurate Aboveground Biomass and Carbon Estimation in Secondary Evergreen Forests, Thailand
Authors: Titinan Pothong, Prasit Wangpakapattanawong, Stephen Elliott
Abstract:
Shifting cultivation is an indigenous agricultural practice among upland people and has long been one of the major land-use systems in Southeast Asia. As a result, fallows and secondary forests have come to cover a large part of the region. However, they are increasingly being replaced by monocultures, such as corn cultivation. This is believed to be a main driver of deforestation and forest degradation, and one of the reasons behind the recurring winter smog crisis in Thailand and around Southeast Asia. Accurate biomass estimation of trees is important to quantify valuable carbon stocks and changes to these stocks in case of land use change. However, presently, Thailand lacks proper tools and optimal equations to quantify its carbon stocks, especially for secondary evergreen forests, including fallow areas after shifting cultivation and smaller trees with a diameter at breast height (DBH) of less than 5 cm. Developing new allometric equations to estimate biomass is urgently needed to accurately estimate and manage carbon storage in tropical secondary forests. This study established new equations using a destructive method at three study sites: approximately 50-year-old secondary forest, 4-year-old fallow, and 7-year-old fallow. Tree biomass was collected by harvesting 136 individual trees (including coppiced trees) from 23 species, with a DBH ranging from 1 to 31 cm. Oven-dried samples were sent for carbon analysis. Wood density was calculated from disk samples and samples collected with an increment borer from 79 species, including 35 species currently missing from the Global Wood Densities database. Several models were developed, showing that aboveground biomass (AGB) was strongly related to DBH, height (H), and wood density (WD). Including WD in the model was found to improve the accuracy of the AGB estimation. 
This study provides insights for reforestation management and can be used to prepare baseline data for Thailand's carbon stocks for REDD+ and other carbon trading schemes. These may provide monetary incentives to stop illegal logging and deforestation for monoculture.
Keywords: aboveground biomass, allometric equation, carbon stock, secondary forest
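The fitted equations and coefficients themselves are not given in the abstract; as a hedged sketch of the model-building step, the snippet below fits one common allometric form, ln(AGB) = a + b·ln(DBH²·H·WD), by ordinary least squares. The tree measurements and the "true" coefficients are synthetic, invented purely for illustration.

```python
import math

def fit_allometry(dbh_cm, h_m, wd, agb_kg):
    """OLS fit of ln(AGB) = a + b*ln(DBH^2 * H * WD); returns (a, b)."""
    xs = [math.log(d * d * h * w) for d, h, w in zip(dbh_cm, h_m, wd)]
    ys = [math.log(m) for m in agb_kg]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Synthetic trees generated from a known law: ln(AGB) = -2.0 + 0.95*ln(D^2*H*WD)
dbh = [2.0, 5.0, 10.0, 15.0, 20.0, 31.0]   # cm, spanning 1-31 cm as in the study
hgt = [3.0, 6.0, 10.0, 14.0, 17.0, 22.0]   # m
dens = [0.55, 0.60, 0.62, 0.58, 0.65, 0.60]  # g/cm^3
agb = [math.exp(-2.0 + 0.95 * math.log(d * d * h * w))
       for d, h, w in zip(dbh, hgt, dens)]
a_hat, b_hat = fit_allometry(dbh, hgt, dens, agb)
```

Because the synthetic data follow the law exactly, the fit recovers the generating coefficients; with real harvest data the residual scatter is what the study's AMISE-style model comparison assesses, and including WD in the predictor is what improves accuracy, as reported above.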
Procedia PDF Downloads 284
1497 The Role of Human Capital in the Evolution of Inequality and Economic Growth in Latin-America
Authors: Luis Felipe Brito-Gaona, Emma M. Iglesias
Abstract:
There is a growing literature that studies the main determinants and drivers of inequality and economic growth in several countries, using panel data and different estimation methods (fixed effects, the Generalized Method of Moments (GMM), and Two-Stage Least Squares (TSLS)). Recently, the evolution of these variables over the period 1980-2009 in the 18 countries of Latin America was studied, and it was found that one of the main variables explaining their evolution was Foreign Direct Investment (FDI). We extend this study to the year 2015 in the same 18 countries of Latin America, and we find that FDI no longer plays a significant role, while we find significant negative and positive effects of schooling levels on inequality and economic growth, respectively. We also find that the point estimates associated with human capital are the largest among the variables included in the analysis, which means that an increase in human capital (measured by schooling levels of secondary education) is the main determinant that can help to reduce inequality and increase economic growth in Latin America. Therefore, we advise that economic policies in Latin America should be directed towards increasing the level of education. We use fixed effects, GMM, and TSLS estimation to check the robustness of our results; our conclusion is the same regardless of the estimation method chosen. We also find that the international recession in the Latin American countries in 2008 significantly reduced their economic growth.
Keywords: economic growth, human capital, inequality, Latin-America
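As a rough sketch of the fixed-effects ("within") estimator used above: demeaning each country's observations removes time-invariant country effects, after which pooled OLS on the demeaned data identifies the common slope. The two-country panel below is hypothetical, constructed with a common slope of 2 but different country intercepts.

```python
def within_ols(panel):
    """Fixed-effects slope via the within transformation.
    panel: {country: [(y, x), ...]} with one regressor for simplicity."""
    dys, dxs = [], []
    for obs in panel.values():
        my = sum(y for y, _ in obs) / len(obs)  # country mean of y
        mx = sum(x for _, x in obs) / len(obs)  # country mean of x
        for y, x in obs:
            dys.append(y - my)
            dxs.append(x - mx)
    # Pooled OLS (no intercept) on the demeaned observations.
    return sum(dx * dy for dx, dy in zip(dxs, dys)) / sum(dx * dx for dx in dxs)

# Two hypothetical countries sharing slope 2 but with different intercepts.
panel = {"A": [(1 + 2 * x, x) for x in (0.0, 1.0, 2.0)],
         "B": [(5 + 2 * x, x) for x in (0.0, 1.0, 3.0)]}
beta = within_ols(panel)
```

The differing intercepts (the country effects) drop out entirely, and the within estimator recovers the common slope; GMM and TSLS address, in addition, the endogeneity that simple demeaning does not.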
Procedia PDF Downloads 226