Search results for: resource estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4325

3485 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves

Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira

Abstract:

Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help in monitoring large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of tomato digital images collected directly from the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for identifying healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images using a color representation that highlights the injuries on the plant. The new images contain only green, red, or black pixels, according to whether they came from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in the detection and estimation of the level of damage on tomato leaves caused by late blight.
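The combination and repainting step described above can be sketched as follows; the combination rule (healthy classification taking precedence) and the 0.5 threshold are illustrative assumptions, not details given in the abstract:

```python
import numpy as np

# Hypothetical combination step: given per-pixel scores from the two MLPs
# (healthy-region network and injured-region network), assign each pixel a
# display colour: green = healthy, red = injured, black = background.
def repaint(healthy_score: np.ndarray, injured_score: np.ndarray,
            threshold: float = 0.5) -> np.ndarray:
    """healthy_score, injured_score: (H, W) arrays of scores in [0, 1]."""
    h, w = healthy_score.shape
    out = np.zeros((h, w, 3), dtype=np.uint8)          # default: black background
    healthy = healthy_score >= threshold
    injured = (injured_score >= threshold) & ~healthy  # healthy takes precedence
    out[healthy] = (0, 255, 0)                         # green
    out[injured] = (255, 0, 0)                         # red
    return out
```

A pixel with neither score above the threshold is treated as background, which matches the three-colour scheme the abstract describes.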

Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary

Procedia PDF Downloads 325
3484 Shale Gas and Oil Resource Assessment in Middle and Lower Indus Basin of Pakistan

Authors: Amjad Ali Khan, Muhammad Ishaq Saqi, Kashif Ali

Abstract:

The focus of hydrocarbon exploration in Pakistan has been primarily on conventional hydrocarbon resources. The Directorate General Petroleum Concessions (DGPC) has taken the lead on the assessment of indigenous unconventional oil and gas resources, which has resulted in a ‘Shale Oil/Gas Resource Assessment Study’ conducted with the help of USAID. This was critically required in energy-starved Pakistan, where the gap between indigenous oil and gas production and demand has been widening for a long time. Exploration and exploitation of Pakistan's indigenous unconventional resources have become vital to meeting energy demand and reducing the country's oil and gas import bill. This study has attempted to bridge a critical gap in geological information about the shale gas and oil potential of four formations, i.e., Sembar, Lower Goru, Ranikot and Ghazij, in the Middle and Lower Indus Basins, which were selected for resource assessment. The primary objective of the study was to estimate and establish a shale oil/gas resource assessment of the study area by carrying out extensive geological analysis of exploration, appraisal and development wells drilled in the Middle and Lower Indus Basins, along with identification of fairway(s) and sweet spots in the study area. The study covers the Middle and Lower Indus Basins located in Sindh, southern Punjab and the eastern parts of Balochistan province, with a total sedimentary area of 271,795 km2. Initially, 1611 wells were reviewed, including 1324 wells drilled through different shale formations. Based on the availability of the required technical data, a detailed petrophysical analysis of 124 wells (21 confidential and 103 in the public domain) was conducted for the shale gas/oil potential of the above-referred formations.
Core and cuttings samples from 32 wells and 33 geochemical reports of prospective shale formations were available; these were analyzed to calibrate the results of the petrophysical analysis with petrographic/laboratory analyses and thereby increase the credibility of the shale gas resource assessment. This study has identified the most prospective intervals, mainly in the Sembar and Lower Goru Formations, for shale gas/oil exploration in the Middle and Lower Indus Basins of Pakistan. The study recommends seven (07) sweet spots for undertaking pilot projects, which will enable evaluation of the actual production capability and production sustainability of Pakistan's shale oil/gas reservoirs and help formulate future strategies to explore and exploit these resources, including the fiscal incentives required for their development. Some E&P companies that have shown willingness to participate in a pilot project at an appropriate time are being persuaded to form a consortium for undertaking pilot projects. The location for the pilot project was finalized through a series of technical sessions held by geoscientists of the potential consortium members after review and evaluation of the available studies.

Keywords: conventional resources, petrographic analysis, petrophysical analysis, unconventional resources, shale gas & oil, sweet spots

Procedia PDF Downloads 40
3483 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands, which render the images displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already deteriorated by the retarded execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach incurs two technical challenges: identification of the bottleneck resource, and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of the user experience, and it is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently, the proposed scheme decreases the performance level of the CPU by one step. Conversely, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually, until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, and it worsened FPS by only 1.07% on average.
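The threshold logic above can be sketched as follows; only the 90% and 60% thresholds come from the abstract, while the function name, the weighting scheme, and treating the 60-90% band as a hold state are illustrative assumptions:

```python
# Illustrative sketch of the governor decision: pick a DVFS action from the
# weighted average of recent Window Manager CPU utilization samples.
def next_action(wm_util_history, weights=(0.5, 0.3, 0.2)):
    """wm_util_history: most-recent-first utilization samples in [0, 100]."""
    avg = sum(w * u for w, u in zip(weights, wm_util_history))
    if avg > 90:        # CPU-bound: the Window Manager keeps the CPU busy
        return "lower_cpu_one_step"
    if avg < 60:        # GPU-bound: the CPU (Window Manager) is mostly idle
        return "lower_gpu_one_step"
    return "hold"       # re-check next period before lowering further
```

Returning one step at a time mirrors the abstract's gradual lowering of frequency with periodic re-checks.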

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 189
3482 Abdominal Pregnancy with a Live Newborn in a Low Resource Setting: A Case Report

Authors: Olivier Mulisya, Guelord Barasima, Henry Mark Lugobe, Philémon Matumo, Bienfait Mumbere Vahwere, Hilaire Mutuka, Zawadi Léocadie, Wesley Lumika

Abstract:

Abdominal pregnancy is defined as a pregnancy anywhere within the peritoneal cavity, exclusive of tubal, ovarian, or broad ligament locations. It is a rare form of ectopic pregnancy with high morbidity and mortality for both the mother and the fetus. The diagnosis is frequently missed in most poor-resource settings because of poor antenatal coverage and low socioeconomic status in most of the patients, as well as a lack of adequate medical resources. Clinical diagnosis can be very difficult; an ultrasound scan is very helpful during the early stages of gestation but can be disappointing in the later stages. We report the case of a 25-year-old woman with severe abdominal pain not relieved by any medication. A clinical picture of shock led to an emergency laparotomy, which confirmed the diagnosis of abdominal pregnancy. Ministries of health in developing countries should make an effort to make routine early ultrasounds accessible to pregnant women, and obstetricians should keep in mind the possibility of ectopic pregnancy, irrespective of the gestational age.

Keywords: abdominal pregnancy, live newborn, ultrasound imaging, abdominal pain

Procedia PDF Downloads 94
3481 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study

Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa

Abstract:

The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil, more quickly, than is possible with sampling and laboratory tests. Therefore, it has the potential to realize cost savings and to assess soil properties rapidly and continuously. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which parameters should be included as inputs to improve the prediction. Neural networks discover relationships in the input data sets through iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN), one of the powerful neural network architectures, is utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions and calculated overburden pressures, was obtained from a large project in the United Arab Emirates. These data were used for the training and validation of the neural network. A comparison was made between the results obtained from the ANN approach and some common traditional correlations that predict Φ and E from CPT results, with respect to the actual measurements in the collected data. The results show that the ANN is a very powerful tool: very good agreement was obtained between the results estimated by the ANN and the actual measured results, in comparison with the other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models. It is shown that the use of the friction ratio in the estimation of Φ and the use of fines content in the estimation of E considerably improve the prediction models.
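As an illustration of the GRNN architecture the study employs, a minimal sketch follows: a GRNN computes a Nadaraya-Watson kernel regression over the training data. The class name, interface, and smoothing parameter sigma here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Minimal GRNN sketch: prediction is a Gaussian-kernel weighted average of
# the training targets, with sigma controlling the kernel bandwidth.
class GRNN:
    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, Xq):
        preds = []
        for q in np.asarray(Xq, dtype=float):
            d2 = np.sum((self.X - q) ** 2, axis=1)        # squared distances
            w = np.exp(-d2 / (2.0 * self.sigma ** 2))     # Gaussian kernel weights
            preds.append(np.dot(w, self.y) / np.sum(w))   # weighted average of targets
        return np.array(preds)
```

In the paper's setting, each row of `X` would hold CPT-derived inputs (e.g. cone resistance, friction ratio) and `y` the measured Φ or E values.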

Keywords: angle of internal friction, cone penetrating test, general regression neural network, soil modulus of elasticity

Procedia PDF Downloads 414
3480 Public Debt and Fiscal Stability in Nigeria

Authors: Abdulkarim Yusuf

Abstract:

Motivation: The Nigerian economy has seen significant macroeconomic instability, fuelled mostly by an overreliance on fluctuating oil revenues. The rising disparity between tax receipts and government spending in Nigeria necessitates government borrowing to fund the anticipated pace of economic growth. Rising public debt and weakening fiscal sustainability are limiting the government's ability to invest in key infrastructure that promotes private investment and growth in Nigeria. Objective: This paper fills an empirical research gap by examining the impact of public debt on fiscal sustainability in Nigeria, given the significance of fiscal stability in decreasing poverty and the constraints that an unsustainable debt burden imposes on it. Data and method: Annual time series data covering the period 1980 to 2022, subjected to conventional and structural-break stationarity tests, and the Autoregressive Distributed Lag (ARDL) estimation approach were adopted for this study. Results: The results reveal that domestic debt stock, debt service payments, foreign reserve stock, the exchange rate, and private investment all had a major adverse effect on fiscal stability in the long and short run, corroborating the debt overhang and crowding-out hypotheses. External debt stock, the prime lending rate, and the degree of trade openness, which boosted fiscal stability in the long run, had a major detrimental effect on fiscal stability in the short run, whereas foreign direct investment inflows had an important beneficial impact on fiscal stability in both the long and short run. Implications: The results indicate that fiscal measures that inspire domestic resource mobilization, sustainable debt management techniques, and reliance on external debt to finance deficits will improve fiscal stability and drive growth.
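The short- and long-run effects reported above come from an ARDL specification. A minimal illustration of the idea (not the authors' model, which includes many regressors and structural-break tests) is an ARDL(1, 1) estimated by OLS, with the long-run multiplier recovered from the coefficients:

```python
import numpy as np

# Illustrative ARDL(1, 1): regress y_t on a constant, its own lag, and the
# current and lagged values of a single regressor x_t (e.g. debt service).
def ardl_1_1(y, x):
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    Y = y[1:]
    Z = np.column_stack([np.ones(len(Y)), y[:-1], x[1:], x[:-1]])
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)    # [const, y_lag, x, x_lag]
    long_run = (beta[2] + beta[3]) / (1 - beta[1])  # long-run multiplier of x
    return beta, long_run
```

The long-run multiplier formula is the standard ARDL result: the cumulative effect of x once the dynamics of the lagged dependent variable have played out.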

Keywords: ARDL co-integration, debt overhang, debt servicing, fiscal stability, public debt

Procedia PDF Downloads 53
3479 Deployment of Electronic Healthcare Records and Development of Big Data Analytics Capabilities in the Healthcare Industry: A Systematic Literature Review

Authors: Tigabu Dagne Akal

Abstract:

Electronic health records (EHRs) can help to store and maintain patient histories and support their appropriate handling for proper treatment and decision-making. Merging EHRs with big data analytics (BDA) capabilities enables healthcare stakeholders to provide effective and efficient treatments for chronic diseases. Though huge opportunities and efforts exist in the deployment of EHRs and the development of BDA, there are challenges in securing the resources and organizational capabilities required to achieve the competitive advantage and sustainability of EHRs and BDA. The resource-based view (RBV), information systems (IS), and non-IS theories should be extended to examine the organizational capabilities and resources required for successful data analytics in the healthcare industry. The main purpose of this study is to develop a conceptual framework for the development of healthcare BDA capabilities based on past works, so that researchers can extend it. A research question was formulated to define the search strategy used as the research methodology, and study selection was then carried out. Based on the selected studies, the conceptual framework for the development of BDA capabilities in healthcare settings was formulated.

Keywords: EHR, EMR, big data, big data analytics, resource-based view

Procedia PDF Downloads 128
3478 The South African Polycentric Water Resource Governance-Management Nexus: Parlaying an Institutional Agent and Structured Social Engagement

Authors: J. H. Boonzaaier, A. C. Brent

Abstract:

South Africa, a water-scarce country, experiences the phenomenon that its life-supporting natural water resources are seriously threatened by the users that are totally dependent on them. South Africa is globally applauded for having some of the best and most progressive water laws and policies. There are, however, growing concerns regarding the deterioration of natural water resource quality and a critical void in the management of natural resources and compliance with policies, due to increasing institutional uncertainties and failures. These accord with the concerns of many South African researchers and practitioners who call for a change in paradigm from talk to practice and a more constructive, practical approach to governance challenges in the management of water resources. A qualitative, theory-building case study through longitudinal action research was conducted from 2014 to 2017. The research assessed whether a strategically positioned institutional agent can be parlayed to facilitate and execute water resource management (WRM) at the catchment level by engaging multiple stakeholders in a polycentric setting. Through a critical realist approach, a distinction was made between ex ante self-deterministic human behaviour in the realist realm and ex post governance-management in the constructivist realm. A congruence analysis, including Toulmin's method of argumentation analysis, was utilised. The study evaluated the unique case of a self-steering local water management institution, the Impala Water Users Association (WUA), in the Pongola River catchment in the northern part of the KwaZulu-Natal Province of South Africa. Exploiting prevailing water resource threats, it expanded its ancillary functions from 20,000 to 300,000 ha.
Embarking on WRM activities, it addressed natural water system quality assessments, social awareness, knowledge support, and threats such as soil erosion, waste and effluent discharge into water systems, coal mining, and water security dimensions, through structured engagement with 21 different catchment stakeholders. By implementing a proposed polycentric governance-management model on a catchment scale, the WUA managed to fill the void. It developed a foundation and the capacity to protect the resilience of the natural environment that is critical for freshwater resources, to ensure the long-term water security of the Pongola River basin. Further work is recommended on appropriate statutory delegations, mechanisms of sustainable funding, sufficient penetration of knowledge to local levels to catalyse behaviour change, incentivised support from professionals, back-to-back expansion of WUAs to alleviate scale and cost burdens, and the creation of catchment data monitoring and compilation centres.

Keywords: institutional agent, water governance, polycentric water resource management, water resource management

Procedia PDF Downloads 132
3477 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since a design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much higher than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly desirable for balancing the overhead introduced by fault-tolerance techniques against system robustness. We study applications in which the exact final output value is not always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin that depends on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin.
The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Soft Error Mitigation (SEM) IP core to inject SEUs into the configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when a 1% to 10% deviation from the correct output is tolerated, the number of counted failures is reduced by 41% to 59% compared with the number counted by conventional vulnerability factor calculation. In other words, the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% when a 10% deviation in the output results is acceptable. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
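The failure-counting rule behind the ACMVF can be sketched as follows; the relative-margin formulation and the function signature are assumptions consistent with the abstract's description, not the authors' exact code:

```python
# Sketch of the Approximate-based Configuration Memory Vulnerability Factor:
# an SEU injection counts as a failure only when the observed output deviates
# from the expected value by more than a user-defined relative margin.
def acmvf(outputs, expected, margin_fraction):
    """outputs: one observed output per SEU injection; margin is relative
    to the expected value (margin_fraction=0 recovers the conventional VF)."""
    margin = abs(expected) * margin_fraction
    failures = sum(1 for out in outputs if abs(out - expected) > margin)
    return failures / len(outputs)
```

Setting `margin_fraction` to zero reduces this to the conventional vulnerability factor, which is exactly the comparison the experiments report.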

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 194
3476 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
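The abstract's computational recipe, PC applied to a regularized form of the data covariance matrix, can be sketched as below; the specific shrinkage toward the diagonal is an illustrative stand-in, not the paper's exact regularizer:

```python
import numpy as np

# Sketch: estimate factor loadings as the leading eigenvectors of a
# regularized covariance matrix. No matrix inversion is needed, so the
# computation remains valid for panels with N >= T.
def constrained_pc_factors(X, k, alpha=0.5):
    """X: (T, N) panel; returns (N, k) matrix of estimated loadings."""
    S = np.cov(X, rowvar=False)                            # N x N sample covariance
    S_reg = (1 - alpha) * S + alpha * np.diag(np.diag(S))  # shrink off-diagonals
    eigvals, eigvecs = np.linalg.eigh(S_reg)               # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]                  # k largest
    return eigvecs[:, order]                               # leading eigenvectors
```

Because only an eigendecomposition of the (regularized) covariance is required, the sketch runs unchanged when N exceeds T, mirroring the abstract's claim.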

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 146
3475 Composing Method of Decision-Making Function for Construction Management Using Active 4D/5D/6D Objects

Authors: Hyeon-Seung Kim, Sang-Mi Park, Sun-Ju Han, Leen-Seok Kang

Abstract:

As BIM (Building Information Modeling) application continually expands, the visual simulation techniques used for facility design and construction process information are becoming increasingly advanced and diverse. For building structures, BIM application is design-oriented, utilizing 3D objects for conflict management, whereas for civil engineering structures, the usability of nD object-oriented construction stage simulation is important in construction management. Simulations of 5D and 6D objects, in which cost and resources are linked along with the process simulation in 4D objects, are commonly used, but they do not provide a decision-making function for process management problems that occur on site because they mostly focus on the visual representation of the current status of process information. In this study, an nD CAD system is constructed that facilitates an optimized schedule simulation that minimizes process conflict, a construction duration reduction simulation according to execution progress status, an optimized process plan simulation according to project cost change by year, and an optimized resource simulation for field resource mobilization capability. Through this system, the usability of conventional simple simulation objects is expanded to that of active simulation objects with which decision-making is possible. Furthermore, to close the gap between field process situations and planned 4D process objects, a technique is developed to facilitate a comparative simulation through the coordinated synchronization of an actual video object, acquired by an on-site web camera, and a VR concept 4D object. This synchronization and simulation technique can also be applied to smartphone video objects captured in the field in order to increase the usability of the 4D object.
Because yearly project costs change frequently in civil engineering construction, the annual process plan should be recomposed appropriately according to project cost decreases/increases compared with the plan. In the 5D CAD system provided in this study, an active 5D object utilization concept is introduced to perform a simulation in an optimized process planning state by finding a process optimized for the changed project cost, without changing the construction duration, through a technique such as a genetic algorithm. Furthermore, in resource management, an active 6D object utilization function is introduced that can analyze and simulate an optimized process plan within the possible scope of moving resources, by considering those resources that can be moved under a given field condition, instead of using a simple resource change simulation by schedule. The introduction of these active BIM functions is expected to increase the field utilization of conventional nD objects.

Keywords: 4D, 5D, 6D, active BIM

Procedia PDF Downloads 274
3474 Multi Data Management Systems in a Cluster Randomized Trial in Poor Resource Setting: The Pneumococcal Vaccine Schedules Trial

Authors: Abdoullah Nyassi, Golam Sarwar, Sarra Baldeh, Mamadou S. K. Jallow, Bai Lamin Dondeh, Isaac Osei, Grant A. Mackenzie

Abstract:

A randomized controlled trial is the "gold standard" for evaluating the efficacy of an intervention. Large-scale, cluster-randomized trials, however, are expensive and difficult to conduct. To guarantee the validity and generalizability of findings, high-quality, dependable, and accurate data management systems are necessary. Robust data management systems are crucial for optimizing and validating the quality, accuracy, and dependability of trial data. There is a scarcity of literature on the difficulties of data collection in clinical trials in low-resource settings, which may raise concerns. Effective data management systems and implementation goals should be part of trial procedures. Publicizing the creative clinical data management techniques used in clinical trials should boost public confidence in a study's conclusions and encourage further replication. This report details the development and deployment of multiple data management systems and methodologies in the ongoing pneumococcal vaccine schedules trial in rural Gambia. We implemented six different data management, synchronization, and reporting systems using Microsoft Access, REDCap, SQL, Visual Basic, Ruby, and ASP.NET. Additionally, data synchronization tools were developed to integrate data from these systems into a central server for the reporting systems. Clinician, laboratory, and field data validation systems and methodologies are the main topics of this report. Our process development efforts across all domains were driven by the complexity of research project data collected in real time, online reporting, data synchronization, and methods for cleaning and verifying data. Consequently, we effectively used multiple data management systems, demonstrating the value of creative approaches in enhancing the consistency, accuracy, and reporting of trial data in a poor-resource setting.

Keywords: data management, data collection, data cleaning, cluster-randomized trial

Procedia PDF Downloads 16
3473 Technical and Economic Evaluation of Harmonic Mitigation from Offshore Wind Power Plants by Transmission Owners

Authors: A. Prajapati, K. L. Koo, F. Ghassemi, M. Mulimakwenda

Abstract:

In the UK, as the volume of non-linear loads connected to the transmission grid continues to rise steeply, harmonic distortion levels on the transmission network are becoming a serious concern for network owners and system operators. This paper outlines the findings of a study conducted to verify the proposal that harmonic mitigation could be optimized and managed economically and effectively at the transmission network level by the Transmission Owner (TO), instead of by the individual polluter connected to the grid. Harmonic mitigation studies were conducted on selected regions of the transmission network in England for recently connected offshore wind power plants to strategize and optimize selected harmonic filter options. The results – filter volume and capacity – were then compared against the mitigation measures adopted by the individual connections. Estimation ratios were developed based on the actually installed and the optimal proposed filters. These estimation ratios were then used to derive harmonic filter requirements for future contracted connections. The study concluded that a saving of 37% in filter volume/capacity could be achieved if the TO centrally manages harmonic mitigation instead of individual polluters installing their own mitigation solutions.

Keywords: C-type filter, harmonics, optimization, offshore wind farms, interconnectors, HVDC, renewable energy, transmission owner

Procedia PDF Downloads 154
3472 Development of Lipid Architectonics for Improving Efficacy and Ameliorating the Oral Bioavailability of Elvitegravir

Authors: Bushra Nabi, Saleha Rehman, Sanjula Baboota, Javed Ali

Abstract:

Aim: The objective of the research undertaken was the validation of an analytical (HPLC) method for the anti-HIV drug Elvitegravir (EVG), together with forced degradation studies of the drug under different stress conditions to determine its stability. This was envisaged in order to determine a suitable technique for drug estimation, to be employed in further research. Furthermore, comparative pharmacokinetic profiles of the drug from the lipid architectonics and from a drug suspension were to be obtained after oral administration. Method: Lipid architectonics (LA) of EVG were formulated using a probe sonication technique and optimized using QbD (Box-Behnken design). For the estimation of the drug during further analysis, the HPLC method was validated on the parameters of linearity, precision, accuracy and robustness, and the limit of detection (LOD) and limit of quantification (LOQ) were determined. Furthermore, HPLC quantification of the forced degradation studies was carried out under different stress conditions (acid-induced, base-induced, oxidative, photolytic and thermal). For the pharmacokinetic (PK) study, Albino Wistar rats weighing between 200-250 g were used. The different formulations were administered by the oral route, and blood was collected at designated time intervals. A plasma concentration-time profile was plotted, from which the relevant pharmacokinetic parameters were determined.

Keywords: AIDS, Elvitegravir, HPLC, nanostructured lipid carriers, pharmacokinetics

Procedia PDF Downloads 136
3471 Fatigue Life Estimation of Tubular Joints - A Comparative Study

Authors: Jeron Maheswaran, Sudath C. Siriwardane

Abstract:

In fatigue analysis, the structural detail of tubular joints has received great attention among engineers. DNV-RP-C203 covers this topic well for simple and clear joint cases. For complex joints and geometries, where joint classification is not available and the validity ranges of the non-dimensional geometric parameters are limited, challenges remain. Joint classification is important throughout the fatigue analysis; joint configurations are identified by the connectivity and the load distribution of the tubular joints. To overcome these problems to some extent, this paper compares the fatigue life of tubular joints in an offshore jacket computed from the stress concentration factors (SCF) in DNV-RP-C203 with that obtained from the finite element method as implemented in Abaqus/CAE. The paper presents the geometric details, material properties, and considered load history of the jacket structure, and describes the global structural analysis and the identification of critical tubular joints for fatigue life estimation. Fatigue life is first determined based on the guidelines provided by the design codes; as the next major step, fatigue analysis of the tubular joints is conducted with the finite element package Abaqus/CAE [4]. Finally, the obtained SCFs and fatigue lives are compared and their significance is discussed.
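The SCF-based route sketched below illustrates the general calculation chain: the nominal stress range is amplified by the SCF to a hot-spot range, a one-slope S-N curve gives cycles to failure, and Miner's rule sums the damage. The S-N parameters here are representative placeholders, not values taken from DNV-RP-C203 or from this paper.

```python
import math

def hotspot_stress(nominal_range, scf):
    """Hot-spot stress range = SCF x nominal stress range [MPa]."""
    return scf * nominal_range

def cycles_to_failure(stress_range, log_a=12.164, m=3.0):
    # One-slope S-N curve: log10(N) = log_a - m * log10(S)
    # (log_a and m are illustrative, not a specific code curve)
    return 10 ** (log_a - m * math.log10(stress_range))

def fatigue_life_years(blocks, scf):
    """blocks: list of (nominal stress range [MPa], cycles per year).
    Returns estimated life in years via Miner's linear damage rule."""
    damage = sum(n / cycles_to_failure(hotspot_stress(s, scf))
                 for s, n in blocks)
    return 1.0 / damage

life = fatigue_life_years([(50.0, 1.0e6)], scf=2.0)
```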

Keywords: fatigue life, stress-concentration factor, finite element analysis, offshore jacket structure

Procedia PDF Downloads 448
3470 Modern Nahwu's View about the Theory of Amil

Authors: Kisno Umbar

Abstract:

Arabic grammar (nahwu) is one of the most important disciplines for studying Islamic literature (kitab al-turats). In the last century, learning Arabic grammar has been difficult for both Arab and non-Arab native speakers. Most traditional nahwu scholars viewed the theory of amil as a major problem. These views influenced a large number of modern nahwu scholars, and some of them reject the theory of amil in order to simplify Arabic grammar and make it easier. The aim of the study is to compare the views of modern nahwu scholars about the theory of amil, including their reasons, and to reveal whether they follow the classical scholars or offer a new view. The author uses a literature study approach, taking the books of modern nahwu scholars as a primary resource. As a secondary resource, the author uses recent relevant research from journals about the theory of amil. In addition, the author draws on several resources from traditional nahwu scholars to compare the views. The analysis showed contrasting views about the theory of amil. Most of the scholars reject the amil because it is not originally derived from the Arabic tradition but is influenced by Aristotelian philosophy. Others persistently use the amil inasmuch as it is one of the characteristics that distinguish the Arabic language from other languages.

Keywords: Arabic grammar, Amil, Arabic tradition, Aristotelian philosophy

Procedia PDF Downloads 154
3469 Comparison of Different Techniques to Estimate Surface Soil Moisture

Authors: S. Farid F. Mojtahedi, Ali Khosravi, Behnaz Naeimian, S. Adel A. Hosseini

Abstract:

Land subsidence is a gradual settling or sudden sinking of the land surface caused by changes that take place underground. There are different causes of land subsidence, most notably groundwater overdraft and severe weather conditions. Subsidence of the land surface due to groundwater overdraft is caused by an increase in the intergranular pressure in unconsolidated aquifers, which results in a loss of buoyancy of solid particles in the zone dewatered by the falling water table and, accordingly, compaction of the aquifer. On the other hand, exploitation of underground water may result in significant changes in the degree of saturation of soil layers above the water table, increasing the effective stress in these layers and causing considerable soil settlements. This study focuses on the estimation of soil moisture at the surface using different methods. Specifically, different methods for the estimation of moisture content at the soil surface, an important term for solving Richards' equation and estimating the soil moisture profile, are presented, and their results are discussed through comparison with field measurements obtained from the Yanco1 station in south-eastern Australia. Surface soil moisture is not easy to measure at the spatial scale of a catchment: due to the heterogeneity of soil type, land use, and topography, surface soil moisture may change considerably in space and time.
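One simple empirical proxy for surface soil moisture (offered here purely for illustration; it is not necessarily one of the specific methods evaluated in the paper) is a recursive antecedent precipitation index (API) mapped linearly onto a volumetric moisture range. All parameter values below are hypothetical.

```python
def antecedent_precipitation_index(rainfall, k=0.9, api0=0.0):
    """Recursive API: API_t = k * API_{t-1} + P_t,
    where k is a decay constant per time step."""
    api, series = api0, []
    for p in rainfall:
        api = k * api + p
        series.append(api)
    return series

def moisture_proxy(api, theta_min=0.05, theta_max=0.45, api_sat=50.0):
    """Linearly map API onto a volumetric moisture range
    (hypothetical residual/saturated bounds and scaling)."""
    frac = min(api / api_sat, 1.0)
    return theta_min + frac * (theta_max - theta_min)

api_series = antecedent_precipitation_index([12.0, 0.0, 3.0])
theta = moisture_proxy(api_series[-1])
```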

Keywords: artificial neural network, empirical method, remote sensing, surface soil moisture, unsaturated soil

Procedia PDF Downloads 357
3468 Estimation of Heritability and Repeatability for Pre-Weaning Body Weights of Domestic Rabbits Raised in Derived Savanna Zone of Nigeria

Authors: Adewale I. Adeolu, Vivian U. Oleforuh-Okoleh, Sylvester N. Ibe

Abstract:

Heritability and repeatability estimates are needed for the genetic evaluation of livestock populations and, consequently, for the purpose of upgrading or improvement. Pooled data on 604 progeny from three consecutive parities of purebred rabbit breeds (Chinchilla, Dutch, and New Zealand White) raised in the Derived Savanna Zone of Nigeria were used to estimate heritability and repeatability for pre-weaning body weights between the 1st and 8th week of age. Traits studied include individual kit weight at birth (IKWB) and at the 2nd week (IK2W), 4th week (IK4W), 6th week (IK6W), and 8th week (IK8W). Nested random-effects analyses of (co)variances, as described by the Statistical Analysis System (SAS), were employed in the estimation. The respective heritability estimates from the sire component (h2s) for IKWB, IK2W, IK4W, IK6W and IK8W are 0.59±0.24, 0.55±0.24, 0.93±0.31, 0.28±0.17 and 0.64±0.26, and the corresponding repeatability (R) estimates, as intra-class correlations of repeated measurements from the three parities, are 0.12±0.14, 0.05±0.14, 0.58±0.02, 0.60±0.11 and 0.20±0.14. The heritability and repeatability estimates (except R for IKWB and IK2W) are moderate to high. In conclusion, since the pre-weaning body weights in the present study tended to be moderately to highly heritable and repeatable, improvement of rabbits raised in the Derived Savanna Zone can be realized through genetic selection criteria.
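The standard relationships behind such estimates can be sketched as follows: a paternal half-sib (sire-component) heritability is four times the sire variance over the total phenotypic variance, and repeatability is the intra-class correlation of repeated records. The variance components in the example are hypothetical, not taken from this study's SAS output.

```python
def sire_heritability(var_sire, var_dam, var_residual):
    """Paternal half-sib estimate: h^2 = 4 * sigma_s^2 / sigma_p^2."""
    total = var_sire + var_dam + var_residual
    return 4.0 * var_sire / total

def repeatability(var_between, var_within):
    """Intra-class correlation of repeated records on the same animal."""
    return var_between / (var_between + var_within)

# Hypothetical variance components (e.g. for a birth-weight trait)
h2 = sire_heritability(0.015, 0.010, 0.075)   # -> 0.6, "moderate to high"
r = repeatability(0.020, 0.060)               # -> 0.25
```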

Keywords: heritability, nested design, parity, pooled data, repeatability

Procedia PDF Downloads 144
3467 Blending Synchronous with Asynchronous Learning Tools: Students’ Experiences and Preferences for Online Learning Environment in a Resource-Constrained Higher Education Situations in Uganda

Authors: Stephen Kyakulumbye, Vivian Kobusingye

Abstract:

Generally, the world over, COVID-19 has had adverse effects on all sectors, with the most debilitating effects on the education sector. After reactive lockdowns, education institutions that could continue teaching and learning had to do so at a distance, mediated by digital technological tools. In Uganda, the Ministry of Education issued emergent COVID-19 Online Distance E-learning (ODeL) guidelines. Despite such guidelines, academic institutions in Uganda and similar resource-constrained developing contexts were caught off guard and ill-prepared to move from face-to-face learning to an online distance learning mode. Most academic institutions that migrated spontaneously did so with no deliberate tools, systems, strategies, or software to bring about active, meaningful, and engaging learning for students. By experience, most of these academic institutions shifted to Zoom and WhatsApp and conducted online teaching in real time rather than blending synchronous and asynchronous tools. This paper provides students' experiences of blending synchronous and asynchronous content-creation and learning tools within a technologically resource-constrained environment in such a challenging Ugandan context. These conceptual, case-based findings, drawing on experience from Uganda Christian University (UCU), point to the design of learning activities with certain characteristics: the enhancement of synchronous learning technologies with asynchronous ones to mitigate the challenge of system breakdown, the shift from passive to active learning, and the strengthening of the types of presence (social, cognitive, and facilitatory).
The paper, both empirical and experiential in nature, uses the online experiences of third-year Bachelor of Business Administration students lectured using asynchronous text, audio, and video created with Open Broadcaster Software (OBS) Studio and compressed with HandBrake, all open-source software, to mitigate disk space and bandwidth usage challenges. The synchronous online engagements with students were a blend of Zoom and BigBlueButton, ensuring that students had an alternative in case one failed due to excessive real-time traffic. Generally, students report that, compared to their previous face-to-face lectures, the pre-recorded lectures delivered via YouTube gave them an opportunity to reflect on content in a self-paced manner, which later enabled them to engage actively during the live Zoom and/or BigBlueButton real-time discussions and presentations. The major recommendation is that lecturers and teachers in resource-constrained environments with limited digital resources, such as internet access and digital devices, should harness this approach to offer students access to learning content in a self-paced manner, thereby enabling active learning through reflection and higher-order thinking.

Keywords: synchronous learning, asynchronous learning, active learning, reflective learning, resource-constrained environment

Procedia PDF Downloads 133
3466 A Framework for Security Risk Level Measures Using CVSS for Vulnerability Categories

Authors: Umesh Kumar Singh, Chanchala Joshi

Abstract:

With increasing dependency on IT infrastructure, the main objective of a system administrator is to maintain a stable and secure network, ensuring that the network is robust enough against malicious network users such as attackers and intruders. Security risk management provides a way to manage the growing threats to infrastructures or systems. This paper proposes a framework for risk level estimation which uses the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) and the Common Vulnerability Scoring System (CVSS). The proposed framework measures the frequency of vulnerability exploitation, combines this measured frequency with the standard CVSS score, and estimates the security risk level, which supports automated and reasonable security management. An equation for the temporal score calculation with respect to the availability of a remediation plan is derived, and the frequency of exploitation is then calculated from the determined temporal score. The frequency of exploitation, together with the CVSS score, is used to calculate the security risk level of the system. The proposed framework uses the CVSS vectors for risk level estimation and measures the security level of a specific network environment, assisting the system administrator in assessing security risks and making decisions related to their mitigation.
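For reference, the standard CVSS v2 temporal equation multiplies the base score by the Exploitability, Remediation Level, and Report Confidence factors (the numeric values below are the published CVSS v2 metric values). The `risk_level` combination with exploitation frequency is only a hypothetical stand-in, since the paper's exact equation is not reproduced in the abstract.

```python
EXPLOITABILITY = {"unproven": 0.85, "proof-of-concept": 0.90,
                  "functional": 0.95, "high": 1.00}
REMEDIATION_LEVEL = {"official-fix": 0.87, "temporary-fix": 0.90,
                     "workaround": 0.95, "unavailable": 1.00}
REPORT_CONFIDENCE = {"unconfirmed": 0.90, "uncorroborated": 0.95,
                     "confirmed": 1.00}

def temporal_score(base, e, rl, rc):
    # CVSS v2 temporal equation: base * E * RL * RC, one-decimal rounding
    return round(base * EXPLOITABILITY[e] * REMEDIATION_LEVEL[rl]
                 * REPORT_CONFIDENCE[rc], 1)

def risk_level(temporal, frequency, freq_max):
    """Hypothetical combination: scale the temporal score by observed
    exploitation frequency relative to the busiest vulnerability."""
    return temporal * (frequency / freq_max)
```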

Keywords: CVSS score, risk level, security measurement, vulnerability category

Procedia PDF Downloads 318
3465 Hybrid Localization Schemes for Wireless Sensor Networks

Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir

Abstract:

This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient, barycentric-coordinate-based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed Received Signal Strength Indication (RSSI) based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the barycentric-coordinate-based PIT test and both range-based (received signal strength indicator) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme results in a two- to five-fold reduction in the localization error compared to DV-Hop and its variants, at reduced energy requirements.
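The core of a barycentric point-in-triangulation test can be sketched as follows: a point lies inside (or on) the triangle formed by three anchors exactly when all three barycentric weights are non-negative. This is a generic textbook formulation, not the paper's specific energy-optimised variant.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c)."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, a, b, c
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return l1, l2, 1.0 - l1 - l2

def point_in_triangle(p, a, b, c):
    # Inside (or on the boundary) iff all barycentric weights are >= 0
    return all(l >= 0.0 for l in barycentric(p, a, b, c))
```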

Keywords: localization, trilateration, triangulation, wireless sensor networks

Procedia PDF Downloads 465
3464 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation

Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone

Abstract:

Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions about the future status, to the Distribution System Operator (DSO), producers, and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the tools developed inside the application, tested on a real EDS, for grid observability, Smart Recursive Load Flow (SRLF) calculation, and state estimation of loads in MV feeders.
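Recursive load flow on radial distribution feeders is commonly implemented as a backward/forward sweep; the minimal sketch below for a single chain of buses illustrates the idea. This is a generic textbook scheme, not the paper's SRLF implementation, whose details are not given in the abstract.

```python
def sweep_load_flow(z, s_load, v_slack=1.0 + 0j, tol=1e-8, max_iter=50):
    """Backward/forward sweep on a radial feeder (chain of buses).
    z[i]: impedance of the branch feeding bus i+1 (pu);
    s_load[i]: complex power drawn at bus i+1 (pu)."""
    n = len(z)
    v = [v_slack] * (n + 1)
    for _ in range(max_iter):
        # load currents from the latest voltage estimates
        i_load = [(s_load[k] / v[k + 1]).conjugate() for k in range(n)]
        # backward sweep: branch current = sum of downstream injections
        i_branch = [sum(i_load[k:]) for k in range(n)]
        v_new = list(v)
        for k in range(n):  # forward sweep: update bus voltages
            v_new[k + 1] = v_new[k] - z[k] * i_branch[k]
        if max(abs(v_new[k] - v[k]) for k in range(n + 1)) < tol:
            return v_new
        v = v_new
    return v

volts = sweep_load_flow([0.01 + 0.02j] * 2, [0.1 + 0.05j] * 2)
```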

Keywords: digital twin, distributed energy resources, remote terminal units, supervisory control and data acquisition system, smart recursive load flow

Procedia PDF Downloads 102
3463 Boosting Profits and Enhancement of Environment through Adsorption of Methane during Upstream Processes

Authors: Sudipt Agarwal, Siddharth Verma, S. M. Iqbal, Hitik Kalra

Abstract:

Natural gas as a fuel has created wonders but, on the contrary, the ill effects of methane have been a great worry for professionals. The largest source of methane emissions among all industries is the oil and gas industry. Methane depletes groundwater and, being a greenhouse gas, has devastating effects on the atmosphere too. Methane remains in the atmosphere for a decade or two before breaking down into carbon dioxide; over those two decades it warms the atmosphere 72 times more than carbon dioxide, and it keeps doing harm as carbon dioxide afterward. The property of a fluid to adhere to the surface of a solid, better known as adsorption, can be a great boon in minimizing the hindrance caused by methane. Adsorption of methane during upstream processes can prevent the groundwater and atmospheric depletion around the site, which can be hugely lucrative because environmental degradation otherwise reduces profits and can lead to project cancellation. The paper deals with the reasons why casing and cementing are not able to prevent leakage and suggests methods to adsorb methane during upstream processes, with a mathematical explanation using a volumetric analysis of methane adsorption on the surface of activated carbon doped with copper oxides (which increases the adsorption by 54%). The paper explains in detail, through a cost estimation, how the proposed idea can be hugely beneficial not only to the environment but also to the profits earned.
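The paper's volumetric analysis is not reproduced in the abstract; as an illustrative stand-in, a Langmuir isotherm is a common way to model gas uptake on activated carbon. The parameters below are hypothetical, and the 54% doping uplift is applied as a simple multiplier only for illustration.

```python
def langmuir_uptake(p, q_max, b):
    """Langmuir isotherm: q = q_max * b * p / (1 + b * p),
    with p in bar, q_max in mmol/g, b in 1/bar."""
    return q_max * b * p / (1.0 + b * p)

# Hypothetical parameters for methane on activated carbon
q_base = langmuir_uptake(p=20.0, q_max=6.0, b=0.15)   # mmol/g at 20 bar
q_doped = q_base * 1.54   # the cited 54% increase from CuO doping
```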

Keywords: adsorption, casing, cementing, cost estimation, volumetric analysis

Procedia PDF Downloads 185
3462 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.
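To make the setting concrete, the sketch below implements kernel density estimation with one standard fourth-order polynomial kernel on [-1, 1] (it integrates to one and has a vanishing second moment). This is a generic fourth-order kernel for illustration, not the paper's hybrid beta construction.

```python
def k4(u):
    """A fourth-order polynomial kernel: K(u) = (15/32)(7u^4 - 10u^2 + 3)
    on [-1, 1]; integrates to 1 with zero second moment."""
    return (15.0 / 32.0) * (7 * u**4 - 10 * u**2 + 3) if abs(u) <= 1 else 0.0

def kde(x, data, h):
    """Plain kernel density estimate at point x with bandwidth h."""
    return sum(k4((x - xi) / h) for xi in data) / (len(data) * h)
```

Note that higher-order kernels like this one can take negative values near the support boundary, which is the usual price paid for the bias reduction that improves AMISE.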

Keywords: AMISE, efficiency, fourth-order kernels, hybrid kernels, kernel density estimation

Procedia PDF Downloads 67
3461 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137

Authors: Abdulsalam M. Alhawsawi

Abstract:

Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons along the energy spectrum. Some photon energies are hard to find in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High-Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for a point source and a Marinelli beaker geometry. In the case of a Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5 simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
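The described approach amounts to an efficiency transfer: a simulated efficiency at any energy is rescaled by the measured-to-simulated ratio at the 662 keV Cs-137 line. The sketch below uses hypothetical efficiency values, not the paper's MCNP5 results.

```python
def transfer_efficiency(eff_sim_at_e, eff_sim_662, eff_meas_662):
    """Scale a simulated efficiency at energy E by the
    measured-to-simulated ratio at the 662 keV Cs-137 line."""
    return eff_sim_at_e * (eff_meas_662 / eff_sim_662)

def percent_error(estimated, reference):
    """Relative error in percent against a reference efficiency."""
    return abs(estimated - reference) / reference * 100.0

# Hypothetical efficiencies: simulated 0.020 at 59 keV, simulated 0.010
# and measured 0.011 at 662 keV
eff_59 = transfer_efficiency(0.020, 0.010, 0.011)
```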

Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137

Procedia PDF Downloads 114
3460 Standard Resource Parameter Based Trust Model in Cloud Computing

Authors: Shyamlal Kumawat

Abstract:

Cloud computing is shifting the way IT capital is utilized. It dynamically delivers convenient, on-demand access to shared pools of software, platform, and hardware resources as a service through the internet, a model made possible by sophisticated automation, provisioning, and virtualization technologies. Users want the ability to access these services, including infrastructure resources, how and when they choose. To accommodate this shift in the consumption model, technology has to deal with the security, compatibility, and trust issues associated with delivering that convenience to application business owners, developers, and users. Among these issues, trust has attracted extensive attention in cloud computing as a means to enhance security. This paper proposes a trusted computing approach, a standard-resource-parameter-based trust model in cloud computing, for selecting appropriate cloud service providers. The direct trust of cloud entities is computed on the basis of past interaction evidence and sustained by their present performance. Various SLA parameters between consumer and provider are considered in the trust computation and compliance process. Simulations are performed using the CloudSim framework, and the experimental results show that the proposed model is effective and extensible.
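A generic shape for such a model is a weighted SLA-compliance score blended with past trust; the sketch below is a hypothetical illustration of that idea, not the paper's actual equations, and the parameter names and weights are invented for the example.

```python
def direct_trust(evidence, weights):
    """Weighted SLA compliance score in [0, 1].
    evidence: {parameter: compliance ratio}; weights sum to 1."""
    return sum(weights[p] * evidence[p] for p in weights)

def updated_trust(old, new, alpha=0.3):
    """Blend past trust with present performance
    (simple exponential smoothing; alpha is the learning rate)."""
    return (1 - alpha) * old + alpha * new

# Hypothetical SLA evidence for one provider
t_now = direct_trust({"availability": 0.99, "response_time": 0.90},
                     {"availability": 0.5, "response_time": 0.5})
t_next = updated_trust(0.80, t_now)
```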

Keywords: cloud, IaaS, SaaS, PaaS

Procedia PDF Downloads 328
3459 Aligning the Sustainability Policy Areas for Decarbonisation and Value Addition at an Organisational Level

Authors: Bishal Baniya

Abstract:

This paper proposes sustainability-related policy areas for decarbonisation and value addition at an organizational level. General and public-sector organizations around the world are usually significant in terms of consuming resources and producing waste, driven by their massive procurement capacity. However, these organizations also possess huge potential to cut resource use and emissions, as many of them control supply chains of goods and services. They can therefore be trend setters and can easily lead other major economic sectors, such as manufacturing, construction and mining, and transportation, in the pursuit of a paradigm shift towards sustainability. While environmental and social awareness has improved in recent years and organizations have identified policy areas to improve their environmental performance, value addition to the core business of the organization has not been understood and interpreted correctly. This paper therefore investigates ways to align sustainability policy measures so that they create a better value proposition relative to a benchmark by accounting for both eco-efficiency and social efficiency. Preliminary analysis shows that co-benefits beyond resource and cost savings foster the business case for organizations, and this can be achieved by better aligning the policy measures and engaging stakeholders.

Keywords: policy measures, environmental performance, value proposition, organisational level

Procedia PDF Downloads 146
3458 A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System

Authors: Imran Dayan, Ashiqul Khan

Abstract:

Internal corporate fraud, which is fraud carried out by internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of the corporation, it is ultimately harmful to the entity in the long run. Internal fraud often relies on aberrations from usual business processes. Business processes are the lifeblood of a company in the modern managerial context; such processes are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovations in their business processes, and Enterprise Resource Planning (ERP) systems being at the heart of such processes is a testimony to that. Since ERP systems record a huge amount of data in their event logs, the logs are a treasure trove for anyone trying to detect fraudulent activity hidden within day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud by developing a framework for measuring the risk of fraud through process mining techniques, and hence finds risky designs and loose ends within these business processes. The framework helps not only in identifying existing cases of fraud in the records of the event log but also in signalling the overall riskiness of certain business processes, and hence draws attention to redesigning such processes to reduce the chance of future internal fraud while improving internal control within the organisation.
The research adds value by applying the concepts of process mining to the analysis of data from modern-day records of business processes, namely ERP event logs, and develops a framework that should be useful to internal stakeholders for strengthening internal control as well as providing external auditors with a tool to use in case of suspicion. The research proves its usefulness through a few case studies conducted on large corporations with complex business processes and an ERP in place.
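A toy version of the kind of event-log check such a framework might include is a segregation-of-duties scan: flag cases where the same user performs two activities that should be split between people. This is only an illustrative sketch of a process-mining-style rule, not the paper's framework; the activity names are hypothetical.

```python
def segregation_violations(event_log):
    """Flag traces where the same user both requested and approved a
    purchase (a classic segregation-of-duties red flag).
    event_log: {case_id: [(activity, user), ...]}"""
    flagged = []
    for case, events in event_log.items():
        users_by_activity = {}
        for activity, user in events:
            users_by_activity.setdefault(activity, set()).add(user)
        if users_by_activity.get("request", set()) & users_by_activity.get("approve", set()):
            flagged.append(case)
    return flagged

log = {"c1": [("request", "amy"), ("approve", "amy")],
       "c2": [("request", "amy"), ("approve", "bob")]}
violations = segregation_violations(log)
```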

Keywords: enterprise resource planning, fraud risk framework, internal corporate fraud, process mining

Procedia PDF Downloads 327
3457 A Collaborative Problem Driven Approach to Design an HR Analytics Application

Authors: L. Atif, C. Rosenthal-Sabroux, M. Grundstein

Abstract:

The requirements engineering process is a crucial phase in the design of complex systems. The purpose of our research is to present a collaborative, problem-driven requirements engineering approach that aims at improving the design of a Decision Support System (DSS) as an analytics application. This approach has been adopted to design a human resource management DSS. The requirements engineering process is presented as a series of guidelines for activities that must be implemented to ensure that the final product satisfies end-user requirements and takes into account the limitations identified. We know that a well-posed statement of the problem is "a problem whose crucial character arises from collectively produced estimation and a formulation found to be acceptable by all the parties". Moreover, we know that DSSs were developed to help decision-makers solve their unstructured problems. We thus base our research on the assumption that developing a DSS, particularly for helping with poorly structured or unstructured decisions, cannot be done without considering end-user decision problems, how to represent them collectively, the content of decisions, their meaning, and the decision-making process; the field's issues thus arise in a multidisciplinary perspective. Our approach to designing DSS technologies is problem-driven and collaborative: common end-user problems are reflected in the upstream design phase, and in the downstream phase these problems determine the design choices and the potential technical solution. We thus rely on a categorization of HR problems for a development mirroring the analytics solution. This brings out a data-driven DSS typology: descriptive analytics, explicative or diagnostic analytics, predictive analytics, and prescriptive analytics.
In our research, identifying the problem takes place alongside the design of the solution, so we have to resort to significant transformations of the representations associated with the HR analytics application in order to build an increasingly detailed representation of the goal to be achieved. Here, collective cognition is reflected in the establishment of transfer functions between representations throughout the design process.

Keywords: DSS, collaborative design, problem-driven requirements, analytics application, HR decision making

Procedia PDF Downloads 292
3456 NanoSat MO Framework: Simulating a Constellation of Satellites with Docker Containers

Authors: César Coelho, Nikolai Wiegand

Abstract:

The advancement of nanosatellite technology has opened new avenues for cost-effective and faster space missions. The NanoSat MO Framework (NMF) from the European Space Agency (ESA) provides a modular and simpler approach to the development of flight software and the operation of small satellites. This paper presents a methodology that uses the NMF together with Docker for simulating constellations of satellites. By leveraging Docker containers, the software environment of individual satellites can be easily replicated within a simulated constellation. This containerized approach allows for rapid deployment, isolation, and management of satellite instances, facilitating comprehensive testing and development in a controlled setting. By integrating the NMF lightweight simulator in the container, a comprehensive simulation environment is achieved. A significant advantage of using Docker containers is their inherent scalability, enabling the simulation of hundreds or even thousands of satellites with minimal overhead. Docker's lightweight nature ensures efficient resource utilization, allowing for deployment on a single host or across a cluster of hosts. This capability is crucial for large-scale simulations, such as mega-constellations, where multiple traditional virtual machines would be impractical due to their higher resource demands. This easy horizontal scaling with the number of simulated satellites provides tremendous flexibility for different mission scenarios. Our results demonstrate that leveraging Docker containers with the NanoSat MO Framework provides a highly efficient and scalable solution for simulating satellite constellations, offering significant benefits in terms of resource utilization and operational flexibility while also enabling the testing and validation of ground software for constellations.
The findings underscore the importance of taking advantage of already existing technologies in computer science to create new solutions for future satellite constellations in space.

Keywords: containerization, docker containers, NanoSat MO framework, satellite constellation simulation, scalability, small satellites

Procedia PDF Downloads 42