Search results for: Weibull distribution model
14569 Gender and Science: Is the Association Universal?
Authors: Neelam Kumar
Abstract:
Science is stratified, with an unequal distribution of research facilities and rewards among scientists. Gender stratification is one of the most prevalent phenomena in the world of science. In most countries gender segregation, horizontal as well as vertical, stands out in the field of science and engineering. India is no exception. This paper aims to examine: (1) gender and science associations, historical as well as contemporary, (2) women’s enrolment and gender differences in the selection of academic fields, (3) women as professional researchers, and (4) career paths and recognition trajectories. The paper reveals that in recent years the gender–science relationship has changed, but is not totally free from biases. Women’s enrolment in various science disciplines has shown a remarkable and steady increase in most parts of the world, including India, yet women remain underrepresented in the S&T workforce, although to a lesser degree than in the past.
Keywords: gender, science, universal, women
Procedia PDF Downloads 308
14568 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake
Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama
Abstract:
The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage to wide areas of eastern Japan. A large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of a building during an earthquake. This paper presents an earthquake response simulation analysis (hereafter a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown; then, the analysis result is compared with the observational records. Using the analysis result, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the 1st natural period and the 1st damping ratio) with the Auto-Regressive eXogenous (ARX) model, we compare the analysis result with the observational records so as to evaluate the accuracy of the response analysis. In this study, a lumped-mass SR (sway-rocking) model was used to conduct a seismic response analysis using observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and level 2 earthquake. The result of the ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction occurring was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. This prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground seems to have been small. 3) As for the 4/11 aftershock, a continuous analysis was conducted in which the subject seismic wave was input after the 3/11 main shock. The analyzed values generally corresponded well with the observed values. This means that the effect of the nonlinearity of the main shock was retained by the building. It is important to consider this when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated by an ARX model. Our results show that the response analysis model in this study is generally good at estimating a change in the response of the building during a vibration.
Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake
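The ARX identification step described above can be sketched in a few lines. The following is a minimal, hypothetical example (the building's actual records, sampling interval, and model orders are not given in the abstract): it fits an ARX(2,1) model by least squares to a synthetic single-degree-of-freedom response and recovers the first natural period and damping ratio from the AR poles.

```python
import numpy as np

def fit_arx_2_1(y, u):
    """Least-squares fit of y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]."""
    Y = y[2:]
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return theta  # a1, a2, b1

def modal_from_ar(a1, a2, dt):
    """Map the discrete AR polynomial z^2 - a1*z - a2 to period and damping."""
    poles = np.roots([1.0, -a1, -a2])
    s = np.log(poles[0]) / dt          # equivalent continuous-time pole
    wn = abs(s)                        # natural circular frequency
    zeta = -s.real / wn                # damping ratio
    return 2 * np.pi / wn, zeta

# synthetic demo: damped SDOF driven by noise (all values hypothetical)
dt, n = 0.02, 4000
rng = np.random.default_rng(0)
u = rng.standard_normal(n)
wn, zeta = 2 * np.pi / 1.2, 0.03       # 1.2 s period, 3% damping
a = np.exp(-zeta * wn * dt)
wd = wn * np.sqrt(1 - zeta**2)
a1, a2 = 2 * a * np.cos(wd * dt), -a**2
y = np.zeros(n)
for k in range(2, n):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + 0.1 * u[k-1]

T_est, z_est = modal_from_ar(*fit_arx_2_1(y, u)[:2], dt)
print(f"period = {T_est:.2f} s, damping = {z_est:.3f}")
```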
Procedia PDF Downloads 164
14567 Enhancing the Resilience of Combat System-Of-Systems Under Certainty and Uncertainty: Two-Phase Resilience Optimization Model and Deep Reinforcement Learning-Based Recovery Optimization Method
Authors: Xueming Xu, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge
Abstract:
A combat system-of-systems (CSoS) comprises various types of functional combat entities that interact to meet corresponding task requirements in the present and future. Enhancing the resilience of CSoS holds significant military value in optimizing the operational planning process, improving military survivability, and ensuring the successful completion of operational tasks. Accordingly, this research proposes an integrated framework called CSoS resilience enhancement (CSoSRE) to enhance the resilience of CSoS from a recovery perspective. Specifically, this research presents a two-phase resilience optimization model to define a resilience optimization objective for CSoS. This model considers not only task baseline, recovery cost, and recovery time limit but also the characteristics of emergency recovery and comprehensive recovery. Moreover, the research extends it from the deterministic case to the stochastic case to describe the uncertainty in the recovery process. Based on this, a resilience-oriented recovery optimization method based on deep reinforcement learning (RRODRL) is proposed to determine a set of entities requiring restoration and their recovery sequence, thereby enhancing the resilience of CSoS. This method improves the deep Q-learning algorithm by designing a discount factor that adapts to changes in CSoS state at different phases, simultaneously considering the network’s structural and functional characteristics within CSoS. Finally, extensive experiments are conducted to test the feasibility, effectiveness and superiority of the proposed framework. The obtained results offer useful insights for guiding operational recovery activity and designing a more resilient CSoS.
Keywords: combat system-of-systems, resilience optimization model, recovery optimization method, deep reinforcement learning, certainty and uncertainty
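A tabular sketch of the phase-adaptive discount factor idea follows. This is not the paper's RRODRL method (which uses a deep Q-network over a CSoS simulator); the states, actions, rewards, and the two gamma values here are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 8, 3          # hypothetical recovery states / repair actions
Q = np.zeros((n_states, n_actions))
alpha = 0.1

def gamma_for(phase):
    # assumption: emergency recovery discounts the future more heavily
    return 0.80 if phase == "emergency" else 0.97

def step(s, a):
    # toy environment standing in for the CSoS simulator
    s2 = (s + a) % n_states
    r = 1.0 if s2 == n_states - 1 else -0.1
    return s2, r

for episode in range(500):
    s = 0
    for t in range(20):
        phase = "emergency" if t < 5 else "comprehensive"
        a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
        s2, r = step(s, a)
        g = gamma_for(phase)
        Q[s, a] += alpha * (r + g * Q[s2].max() - Q[s, a])  # Q-learning update
        s = s2

print(Q.round(2))
```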
Procedia PDF Downloads 16
14566 Modeling the Deterioration of Road Bridges at the Provincial Level in Laos
Authors: Hatthaphone Silimanotham, Michael Henry
Abstract:
The effective maintenance of road bridge infrastructure is becoming a widely researched topic in the civil engineering field. Deterioration is one of the main issues in bridge performance, and it is necessary to understand how bridges deteriorate to optimally plan budget allocation for bridge maintenance. In Laos, many bridges are in a deteriorated state, which may affect their performance. Due to bridge deterioration, the Ministry of Public Works and Transport is interested in deterioration models to allocate the budget efficiently and support bridge maintenance planning. A deterioration model can be used to predict the future condition of a bridge based on its observed behavior in the past. This paper analyzes the available inspection data of road bridges on the classified road network to build deterioration prediction models for the main bridge types found at the provincial level (concrete slab, concrete girder, and steel truss) using probabilistic deterioration modeling based on the linear regression method. The analysis targets these three bridge types in the 18 provinces of Laos and estimates bridge deterioration ratings for evaluating the remaining life of each bridge. This research thus considers the relationship between the service period and the bridge condition to represent the probability of the bridge condition in the future. The results of the study can be used for a variety of bridge management tasks, including maintenance planning, budgeting, and evaluating bridge assets.
Keywords: deterioration model, bridge condition, bridge management, probabilistic modeling
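A minimal sketch of a linear-regression deterioration model of this kind is shown below; the inspection records, rating scale, and end-of-life threshold are invented, not taken from the Laos dataset.

```python
import numpy as np

# hypothetical inspection records: bridge age (years) vs condition rating
# (1 = good ... 5 = poor), loosely mimicking provincial data
age = np.array([2, 5, 8, 12, 15, 20, 24, 30, 35, 40], dtype=float)
rating = np.array([1.0, 1.2, 1.8, 2.1, 2.6, 3.0, 3.3, 3.9, 4.3, 4.8])

slope, intercept = np.polyfit(age, rating, 1)   # deterioration rate per year

def predicted_rating(years):
    return intercept + slope * years

# remaining life until the rating reaches an assumed threshold of 4.5
threshold = 4.5
end_of_life_age = (threshold - intercept) / slope
print(f"deterioration rate: {slope:.3f} rating/yr; "
      f"rating {threshold} reached at ~{end_of_life_age:.0f} yr")
```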
Procedia PDF Downloads 159
14565 Analysis of the Content of Sugars, Vitamin C, Preservatives, Synthetic Dyes, Sweeteners, Sodium and Potassium and Microbiological Purity in Selected Products Made From Fruit and Vegetables in Small Regional Factories and in Large Food Corporations
Authors: Katarzyna Miśkiewicz, Magdalena Lasoń-Rydel, Małgorzata Krępska, Katarzyna Sieczyńska, Iwona Masłowska-Lipowicz, Katarzyna Ławińska
Abstract:
The aim of the study was to analyse a selection of 12 pasteurised fruit and vegetable products, such as fruit juices, fruit drinks, jams, and marmalades, produced by small regional factories as well as large food corporations. The research was carried out as part of the project "Innovative system of healthy and regional food distribution", funded by the Ministry of Education and Science (Poland), which aims to create an economically and organisationally strong agri-food industry in Poland through effective cooperation between scientific and socio-economic actors. The main activities of the project include support for the creation of new distribution channels for regional food products and their easy access to a wide group of potential customers while maintaining the highest quality standards. One of the key areas of the project is food quality analyses conducted to indicate the competitive advantage of regional products. Presented here are studies on the content of sugars, vitamin C, preservatives, synthetic colours, sweeteners, sodium and potassium, as well as studies on the microbiological purity of selected fruit and vegetable products. The composition of products made from fruit and vegetables varies greatly and depends on both the type of raw material and the way it is processed. Of the samples tested, fruit drinks contained the least sugar, while the jams and marmalades made by large producers and bought in large chain stores contained the most. However, the low sugar content of some fruit drinks is due to the presence of the sweetener sucralose in their composition. The vitamin C content of the samples varied, being higher in products where it was added during production. All products made in small local factories were free of food additives such as preservatives, sweeteners and synthetic colours, indicating their superiority over products made by large producers. Products made in small local factories were characterised by a relatively high potassium content. The microbiological purity of commercial products was confirmed: no Salmonella spp. were detected, and the numbers of mesophilic bacteria, moulds, yeasts, and β-glucuronidase-positive E. coli were below the limit of quantification.
Keywords: fruit and vegetable products, sugars, food additives, HPLC, ICP-OES
Procedia PDF Downloads 94
14564 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and substantially delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize its outcomes. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
Procedia PDF Downloads 186
14563 Variational Explanation Generator: Generating Explanation for Natural Language Inference Using Variational Auto-Encoder
Authors: Zhen Cheng, Xinyu Dai, Shujian Huang, Jiajun Chen
Abstract:
Recently, explanatory natural language inference has attracted much attention for the interpretability of logic relationship prediction, which is also known as explanation generation for Natural Language Inference (NLI). Existing explanation generators based on discriminative Encoder-Decoder architectures have achieved noticeable results. However, we find that these discriminative generators usually generate explanations with correct evidence but incorrect logic semantics. This is because logic information is implicitly encoded in the premise-hypothesis pairs and is difficult to model. In fact, the same logic information exists in both the premise-hypothesis pair and the explanation, and it is easy to extract the logic information that is explicitly contained in the target explanation. Hence we assume that there exists a latent space of logic information while generating explanations. Specifically, we propose a generative model called Variational Explanation Generator (VariationalEG) with a latent variable to model this space. Trained with the guidance of explicit logic information in target explanations, the latent variable in VariationalEG can capture the implicit logic information in premise-hypothesis pairs effectively. Additionally, to tackle the problem of posterior collapse while training VariationalEG, we propose a simple yet effective approach called Logic Supervision on the latent variable to force it to encode logic information. Experiments on the explanation generation benchmark e-SNLI (explanation-Stanford Natural Language Inference) demonstrate that the proposed VariationalEG achieves significant improvement compared to previous studies and yields a state-of-the-art result. Furthermore, we analyze the generated explanations to demonstrate the effect of the latent variable.
Keywords: natural language inference, explanation generation, variational auto-encoder, generative model
Procedia PDF Downloads 151
14562 Bridging the Gap through New Media Technology Acceptance: Exploring Chinese Family Business Culture
Authors: Farzana Sharmin, Mohammad Tipu Sultan
Abstract:
Emerging new media technologies such as social media and social networking sites have changed family business dynamics in East Asia. Family businesses in China have moved towards technology at an exponential rate. In the last two decades, many of these family businesses have succeeded in becoming major players in the Chinese and world economy. However, very little literature is available on social media acceptance by family businesses in the Chinese context. Therefore, this study tries to bridge the gap between culture and new media technology to understand the attitudes of young Chinese entrepreneurs towards the family business. The paper focuses on two cultural dimensions (collectivism and long-term orientation) adopted from Geert Hofstede, together with perceived usefulness and perceived ease of use adopted from the Technology Acceptance Model (TAM), to explore the actual behavior of technology acceptance in the family business. A quantitative survey method (n=275) was used to collect data from Chinese family business owners in Shanghai. Inferential statistical analysis was applied to extract trait factors and to verify the model. The results show that using social media for family business promotion is strongly influenced by cultural values (collectivism and long-term orientation). The theoretical contribution of this research may also assist policymakers and practitioners in other developing countries in advertising and promoting the family business through social media.
Keywords: China, cultural dimensions, family business, technology acceptance model, TAM
Procedia PDF Downloads 148
14561 Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models
Authors: Keyi Wang
Abstract:
Similar to the touchscreen, hand-gesture-based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a method for training an image-based hand gesture recognition system for both still images and video clips using convolutional neural networks (CNNs). A dataset containing 6 hand gestures is used to train a 2D CNN model, achieving ~98% accuracy. Furthermore, a 3D CNN model is trained on a dataset containing 4 hand gesture video clips, resulting in ~83% accuracy. It is demonstrated that a Cozmo robot loaded with the pre-trained models is able to recognize static and dynamic hand gestures.
Keywords: deep learning, hand gesture recognition, computer vision, image processing
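A compact Keras sketch of the 2D CNN branch is shown below. The 64x64 grayscale input size and the layer widths are assumptions; only the six-class softmax output follows from the abstract.

```python
import tensorflow as tf

# minimal 2D CNN sketch for 6-class static gesture images;
# the 64x64 grayscale input is an assumption, not from the paper
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),  # one unit per gesture
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# training would then be: model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```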
Procedia PDF Downloads 139
14560 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms
Authors: Rikson Gultom
Abstract:
Hate speech and abusive language on social media are difficult to detect; usually they are detected only after becoming viral in cyberspace, which is of course too late for prevention. An early detection system with fairly good accuracy is needed to reduce the conflicts in society caused by social media postings that attack individuals, groups, and the government in Indonesia. The purpose of this study is to find an early detection model for Twitter social media using machine learning that has high accuracy among the several machine learning methods studied. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the Support Vector Machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented it as a graph of the accuracy of the six algorithms developed on the Indonesian-language Twitter dataset, and identified the best model with the highest accuracy.
Keywords: abusive language, hate speech, machine learning, optimization, social media
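The GA-plus-classifier combination can be sketched as a small genetic search over SVM hyperparameters. This is an illustrative stand-in: the study's actual chromosome encoding, text features, and fitness definition are not given, so synthetic data and a (log C, log gamma) genome are assumed here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# stand-in data; the real study used labelled Indonesian-language tweets
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
rng = np.random.default_rng(0)

def fitness(genome):
    C, gamma = 10.0 ** genome            # genome holds log10(C), log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# tiny genetic algorithm over (log C, log gamma)
pop = rng.uniform([-2, -4], [3, 1], size=(10, 2))
for gen in range(15):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-4:]]             # selection
    kids = []
    for _ in range(len(pop) - len(parents)):
        pa, pb = parents[rng.integers(4, size=2)]
        child = np.where(rng.random(2) < 0.5, pa, pb)  # crossover
        child += rng.normal(0, 0.3, size=2)            # mutation
        kids.append(child)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best C=%.3g, gamma=%.3g" % tuple(10.0 ** best))
```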
Procedia PDF Downloads 128
14559 An Automated Procedure for Estimating the Glomerular Filtration Rate and Determining the Normality or Abnormality of the Kidney Stages Using an Artificial Neural Network
Authors: Hossain A., Chowdhury S. I.
Abstract:
Introduction: The use of a gamma camera is a standard procedure in nuclear medicine facilities and hospitals to diagnose chronic kidney disease (CKD), but the gamma camera does not precisely stage the disease. The authors sought to determine whether an artificial neural network (ANN) could determine whether CKD was in a normal or abnormal stage based on GFR values. Method: The 250 kidney patients (188 for training, 62 for testing) who underwent an ultrasonography test for renal diagnosis in our nuclear medicine center were scanned using a gamma camera. Before the scanning procedure, the patients received an injection of ⁹⁹ᵐTc-DTPA. The gamma camera computes the pre- and post-syringe radioactive counts after the injection has been administered into the patient's vein. The artificial neural network uses the softmax function with cross-entropy loss to determine whether CKD is normal or abnormal based on the GFR value in the output layer. Results: The proposed ANN model had 99.20% accuracy according to K-fold cross-validation. The sensitivity and specificity were 99.10% and 99.20%, respectively. The AUC was 0.994. Conclusion: The proposed model can distinguish between normal and abnormal stages of CKD using an artificial neural network. The gamma camera could be upgraded to diagnose normal or abnormal stages of CKD with an appropriate GFR value following the clinical application of the proposed model.
Keywords: artificial neural network, glomerular filtration rate, stages of the kidney, gamma camera
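A minimal sketch of a softmax/cross-entropy network over GFR values follows. The labels, the GFR cut-off of 60, and the network size are assumptions; only the softmax output with cross-entropy loss comes from the abstract.

```python
import numpy as np
import tensorflow as tf

# synthetic GFR values (mL/min/1.73 m²); the labels are an assumption:
# 1 = abnormal (GFR < 60), 0 = normal; the paper's exact cut-off isn't given
rng = np.random.default_rng(0)
gfr = rng.uniform(10, 120, size=(250, 1)).astype("float32")
labels = (gfr[:, 0] < 60).astype("int64")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),    # softmax output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # cross-entropy loss
              metrics=["accuracy"])
model.fit(gfr, labels, epochs=30, batch_size=16, verbose=0)
print("train accuracy:", model.evaluate(gfr, labels, verbose=0)[1])
```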
Procedia PDF Downloads 103
14558 Catalytic Cracking of Hydrocarbon over Zeolite Based Catalysts
Authors: Debdut Roy, Vidyasagar Guggilla
Abstract:
In this research, we highlight our exploratory work on modified zeolite-based catalysts for the catalytic cracking of hydrocarbons for the production of light olefins, i.e., ethylene and propylene. The work is focused on understanding the catalyst structure-activity correlation. Catalysts are characterized by surface area and pore size distribution analysis, inductively coupled plasma optical emission spectrometry (ICP-OES), temperature-programmed desorption (TPD) of ammonia, pyridine Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and thermogravimetric analysis (TGA), and the results are correlated with the catalytic activity. It is observed that the yield of lighter olefins increases with increasing Brønsted acid strength.
Keywords: catalytic cracking, zeolite, propylene, structure-activity correlation
Procedia PDF Downloads 218
14557 Threshold (K, P) Quantum Distillation
Authors: Shashank Gupta, Carlos Cid, William John Munro
Abstract:
Quantum distillation is the task of concentrating quantum correlations present in N imperfect copies into M perfect copies (M < N) using free operations involving all P parties sharing the quantum correlation. We present a threshold quantum distillation task in which the same objective is achieved using a smaller number of parties (K < P). In particular, we give exact local filtering operations by which the participating parties, sharing a high-dimensional multipartite entangled state, can distill the perfect quantum correlation. We then establish a connection between threshold quantum entanglement distillation and quantum steering distillation, and show that threshold distillation may work in scenarios where general distillation protocols such as DEJMPS do not.
Keywords: quantum networks, quantum distillation, quantum key distribution, entanglement distillation
Procedia PDF Downloads 45
14556 CFD Simulation of Surge Wave Generated by Flow-Like Landslides
Authors: Liu-Chao Qiu
Abstract:
The damage caused by surge waves generated in water bodies by flow-like landslides can be very high in terms of human lives and economic losses. The complicated phenomena occurring in this highly unsteady process are difficult to model because three interacting phases (air, water, and sediment) are involved. The problem is therefore challenging, since the effects of the non-Newtonian fluid describing the rheology of the flow-like landslides, multi-phase flow, and the free surface have to be included in the simulation. In this work, the commercial computational fluid dynamics (CFD) package FLUENT is used to model the surge waves due to flow-like landslides. The comparison between the numerical results and experimental data reported in the literature confirms the accuracy of the method.
Keywords: flow-like landslide, surge wave, VOF, non-Newtonian fluids, multi-phase flows, free surface flow
Procedia PDF Downloads 416
14555 Synthesis and Characterization of New Polyesters Based on Diarylidene-1-Methyl-4-Piperidone
Authors: Tareg M. Elsunaki, Suleiman A. Arafa, Mohamed A. Abd-Alla
Abstract:
New thermally stable polyesters containing a 1-methyl-4-piperidone moiety in the main chain have been synthesized. These polyesters were synthesized by the interfacial polycondensation of 3,5-bis(4-hydroxybenzylidene)-1-methyl-4-piperidone (I) and 3,5-bis(4-hydroxy-3-methoxybenzylidene)-1-methyl-4-piperidone (II) with terephthaloyl, isophthaloyl, 4,4'-diphenic, adipoyl and sebacoyl dichlorides. The yield and the reduced viscosity of the produced polyesters were found to be affected by the type of organic phase. In order to characterize these polymers, the necessary model compounds (A) and (B) were prepared from (I) and (II), respectively, and benzoyl chloride. The structures of the monomers (I) and (II), the model compounds, and the resulting polyesters were confirmed by IR, elemental analysis and ¹H NMR spectroscopy. Various characteristics of the resulting polymers, including solubility, thermal properties, viscosity and X-ray analysis, were also studied.
Keywords: synthesis, characterization, new polyesters, chemistry
Procedia PDF Downloads 458
14554 Factors Affecting Customer Loyalty in the Independent Surveyor Service Industry in Indonesia
Authors: Sufrin Hannan, Budi Suharjo, Rita Nurmalina, Kirbrandoko
Abstract:
The challenge for independent surveyor service companies is now growing with increasing uncertainty in business. Government protection of the domestic independent surveyor industry from competitive attack, such as the entry of global surveyors into Indonesia, also no longer exists. Therefore, building customer loyalty becomes very important to create a long-term relationship between an independent surveyor and its customers. This study aims to develop a model that can be used to build customer loyalty by looking at the various factors that determine it, especially for independent surveyors for coal inspection in Indonesia. The development of this model uses the relationship marketing approach. The hypotheses are tested through the 10 variables that determine customer loyalty, either directly or indirectly. The data were collected from 200 questionnaires filled in by independent surveyor decision makers from 51 coal exporting and coal trading companies in Indonesia, and analyzed using a Structural Equation Model (SEM). The results show that the customer loyalty of independent surveyors is influenced by customer satisfaction, trust, switching barriers, and relationship bonds. Research on customer satisfaction shows that customer satisfaction is influenced by perceived quality and perceived value, while perceived quality is influenced by reliability, assurance, responsiveness, and empathy.
Keywords: relationship marketing, customer loyalty, customer satisfaction, switching barriers, relationship bonds
Procedia PDF Downloads 169
14553 A Dynamic Cardiac Single Photon Emission Computed Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information about the radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) was administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve, CFR, was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimates from PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), while the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal territories than for normal ones (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin
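Because CFR is defined in the abstract as the ratio of stress to rest MBF, the computation itself is a one-liner; in the sketch below the rest MBF values are assumptions, while the stress means are the ones reported above.

```python
def coronary_flow_reserve(stress_mbf, rest_mbf):
    """CFR as defined in the abstract: stress MBF divided by rest MBF."""
    return stress_mbf / rest_mbf

# illustrative values; rest MBFs are assumed, stress means come from the text
normal = coronary_flow_reserve(stress_mbf=2.49, rest_mbf=0.95)
abnormal = coronary_flow_reserve(stress_mbf=1.43, rest_mbf=0.90)
print(f"normal territory CFR = {normal:.2f}, abnormal = {abnormal:.2f}")
```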
Procedia PDF Downloads 151
14552 Determinants of Budget Performance in an Oil-Based Economy
Authors: Adeola Adenikinju, Olusanya E. Olubusoye, Lateef O. Akinpelu, Dilinna L. Nwobi
Abstract:
Since the enactment of the Fiscal Responsibility Act (2007), the Federal Government of Nigeria (FGN) has made public its fiscal budget and the subsequent implementation report. A critical review of these documents shows significant variations in the five macroeconomic variables that are inputs to each Presidential budget: oil production target (mbpd), oil price ($), foreign exchange rate (N/$), Gross Domestic Product growth rate (%), and inflation rate (%). This results in underperformance of the Federal budget's expected output in terms of non-oil and oil revenue aggregates. This paper evaluates, first, the existing variance between budgeted and actual figures; then, the relationship and causality between the determinants of the Federal fiscal budget assumptions; and finally, the determinants of FGN's gross oil revenue. The paper employed descriptive statistics, the Autoregressive Distributed Lag (ARDL) model, and a profit-oil probabilistic model to achieve these objectives. The ARDL model permits both static and dynamic effects of the independent variables on the dependent variable, unlike a static model that accounts for static or fixed effects only. It offers a technique for checking the existence of a long-run relationship between variables, unlike other tests of cointegration, such as the Engle-Granger and Johansen tests, which consider only non-stationary series that are integrated of the same order. Finally, even with a small sample size, the ARDL model is known to generate valid results. The results showed that there is a long-run relationship between oil revenue, as a proxy for budget performance, and its determinants: oil price, produced oil quantity, and foreign exchange rate. There is a short-run relationship between oil revenue and its determinants: oil price, produced oil quantity, and foreign exchange rate. There is a long-run relationship between non-oil revenue and its determinants: inflation rate, GDP growth rate, and foreign exchange rate. The Granger causality test results show that there is mono-directional causality between oil revenue and its determinants. The Federal budget assumptions only explain 68% of oil revenue and 62% of non-oil revenue. There is mono-directional causality between non-oil revenue and its determinants. The profit-oil model identifies production sharing contracts, joint ventures, and modified carry arrangements as the greatest contributors to FGN's gross oil revenue. This provides empirical justification for the selected macroeconomic variables used in Federal budget design and performance evaluation. The research recommends that other variables, debt and money supply, be included in the Federal budget design to further explain Federal budget revenue performance.
Keywords: ARDL, budget performance, oil price, oil quantity, oil revenue
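An ARDL estimation of this kind can be sketched with statsmodels (the `ARDL` class is available from statsmodels 0.13 onwards); the series below are synthetic stand-ins for the study's oil revenue and its three determinants, and the lag orders are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL

# synthetic stand-ins for the study's annual series
rng = np.random.default_rng(0)
n = 40
price = 60 + rng.normal(0, 10, n).cumsum() * 0.1        # oil price ($)
quantity = 2.0 + rng.normal(0, 0.05, n).cumsum() * 0.1  # production (mbpd)
fx = 150 + rng.normal(0, 5, n).cumsum() * 0.2           # exchange rate (N/$)
revenue = 0.8 * price * quantity + 0.05 * fx + rng.normal(0, 2, n)

exog = pd.DataFrame({"price": price, "quantity": quantity, "fx": fx})
# ARDL(1; 1,1,1): one lag of revenue, one lag of each regressor
model = ARDL(pd.Series(revenue, name="oil_revenue"), lags=1,
             exog=exog, order=1)
res = model.fit()
print(res.summary())
```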
Procedia PDF Downloads 172
14551 Modeling of CREB Pathway Induced Gene Induction: From Stimulation to Repression
Authors: K. Julia Rose Mary, Victor Arokia Doss
Abstract:
Electrical and chemical stimulation up-regulates phosphorylation of CREB, a transcription factor that induces the production of its target genes for memory consolidation and late long-term potentiation (L-LTP) in the CA1 region of the hippocampus. L-LTP requires complex interactions among second-messenger signaling cascade molecules such as cAMP, CaMKII, CaMKIV, MAPK, RSK and PKA, all of which converge to phosphorylate CREB, which, along with CBP, induces the transcription of target genes involved in memory consolidation. A differential-equation-based model of L-LTP was used, representing stimulus-mediated activation of downstream mediators and confirming the steep, supralinear stimulus-response effects of activation and inhibition. The same model was extended to accommodate the inhibitory effect of the Inducible cAMP Early Repressor (ICER). ICER is the natural inducible CREB antagonist that represses CRE-mediated gene transcription involved in long-term plasticity for learning and memory. After verifying the sensitivity and robustness of the model, we simulated it with various empirical levels of repressor concentration to analyse their effect on gene induction. The model appears to predict the regulatory dynamics of repression on L-LTP and agrees with the experimental values. The flux data obtained in the simulations demonstrate various aspects of the equilibrium between gene induction and repression.
Keywords: CREB, L-LTP, mathematical modeling, simulation
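The induction-repression mechanism can be illustrated with a toy ODE system. This is not the paper's full second-messenger cascade, and all rate constants here are invented, but it shows how raising the repressor (ICER) production rate lowers the steady-state gene product.

```python
from scipy.integrate import solve_ivp

# toy induction-repression ODEs: stimulus S drives active CREB (c), which
# induces the gene product (g) and the repressor ICER (r); ICER inhibits
# further induction (all rates are hypothetical)
def rhs(t, y, S, k_icer):
    c, g, r = y
    dc = S - 0.5 * c                    # CREB activation / decay
    induction = c / (1.0 + r)           # repression by ICER
    dg = 1.0 * induction - 0.2 * g      # target gene product
    dr = k_icer * c - 0.1 * r           # ICER is itself CREB-induced
    return [dc, dg, dr]

for k_icer in (0.0, 0.2, 0.8):          # empirical repressor strengths
    sol = solve_ivp(rhs, (0, 50), [0.0, 0.0, 0.0], args=(1.0, k_icer))
    print(f"k_icer={k_icer}: steady gene level = {sol.y[1, -1]:.2f}")
```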
Procedia PDF Downloads 294
14550 Some Observations on the Preparation of Zinc Hydroxide Nitrate Nanoparticles
Authors: Krasimir Ivanov, Elitsa Kolentsova, Nguyen Nguyen, Alexander Peltekov, Violina Angelova
Abstract:
Nanosized zinc hydroxide nitrate has recently been assessed as a promising foliar fertilizer, with improved zinc solubility but low phytotoxicity in comparison with ZnO and other Zn-containing compounds. The main problem is obtaining stable particles with dimensions of less than 100 nm. This work studies the effect of preparation conditions on the chemical composition and particle size of zinc hydroxide nitrates prepared by precipitation. Zn(NO3)2.6H2O and NaOH with concentrations ranging from 0.2 to 3.2 M and initial OH/Zn ratios from 0.5 to 1.6 were used at temperatures from 20 to 60 °C. All samples were characterized in detail by X-ray diffraction, scanning electron microscopy, differential thermal analysis and ICP. The stability and distribution of the zinc hydroxide nitrate particles were also estimated.
Keywords: zinc hydroxide nitrate, nanoparticles, preparation, foliar fertilizer
Procedia PDF Downloads 349
14549 Mathematical Modelling of Slag Formation in an Entrained-Flow Gasifier
Authors: Girts Zageris, Vadims Geza, Andris Jakovics
Abstract:
Gasification processes are of great interest due to their generation of renewable energy in the form of syngas from biodegradable waste. It is, therefore, important to study the factors that play a role in the efficiency of gasification and the longevity of the machines in which gasification takes place. This study focuses on the latter, aiming to optimize an entrained-flow gasifier by reducing slag formation on its walls and thereby reducing maintenance costs. A CFD mathematical model for an entrained-flow gasifier is constructed: the model of an actual gasifier is rendered in 3D and appropriately meshed. Then, the turbulent gas flow in the gasifier is modeled with the realizable k-ε approach, taking devolatilization, combustion and coal gasification into account. Various such simulations are conducted, obtaining results for different air inlet positions and tracking particles of varying sizes undergoing devolatilization and gasification. The model identifies potentially problematic zones where most particles collide with the gasifier walls, indicating risk regions where ash deposits are most likely to form. In conclusion, the effects of air inlet positioning and of the particle size allowed into the main gasifier tank on the formation of an ash layer are discussed, and possible solutions for decreasing the number of undesirable deposits are proposed. Additionally, an estimate is given of the impact of different factors, such as temperature, gas properties and gas content, and of the different forces acting on the particles undergoing gasification.
Keywords: biomass particles, gasification, slag formation, turbulence k-ε modelling
Procedia PDF Downloads 286
14548 Protecting the Financial Rights of Non-Member Spouses: Addressing the Exploitation of Retirement Benefits in South African Divorce Law
Authors: Ronelle Prinsloo
Abstract:
In South Africa, married retirement fund members can manipulate the legal framework to prevent their spouses from accessing shared retirement benefits during divorce proceedings. The current legal structure allows retirement fund members to accelerate the accrual of their benefits, often by resigning or purchasing living annuities before the finalization of a divorce. This action effectively places these benefits beyond the reach of their spouses, leading to substantial financial prejudice, particularly for financially weaker spouses, typically women. The research highlights that South African courts, including the Supreme Court of Appeal (SCA), have not adequately scrutinized the implications of these actions. Specifically, the SCA has ruled that the capital and proceeds from living annuities are not subject to division during divorce, which undermines the financial rights of non-member spouses. The court's failure to consider the source of the money used to purchase these annuities and its potential inclusion in the joint estate or accrual system is a significant concern. The South African Law Reform Commission has recognized this issue, noting the negative impact on financially weaker spouses. The article critiques the lack of legislative response to this problem despite its significant implications for the equitable distribution of marital assets. The current legal framework, particularly the definition of "pension interest" and the provisions under sections 7(7) and 7(8) of the Divorce Act, is inadequate in addressing the complexities surrounding the sharing of retirement benefits in divorce cases. The article argues for a comprehensive review and reform of the law to ensure that retirement benefits are treated as patrimonial assets, subject to division upon the occurrence of any trigger event, such as resignation, retirement, or retrenchment. The need for such reform is urgent to prevent economically disadvantaged spouses from being unjustly deprived of their fair share of retirement benefits. In conclusion, the article advocates for legislative amendments to the Divorce Act, specifically section 7(7), to clarify that pension interests automatically form part of the joint estate, regardless of whether divorce proceedings are underway. This change would safeguard the financial rights of non-member spouses and ensure a more equitable distribution of retirement benefits during divorce. Failure to address this issue perpetuates economic inequality and leaves financially weaker spouses vulnerable during divorce proceedings.
Keywords: Constitution of South Africa, non-member spouse, retirement benefits, spouse
Procedia PDF Downloads 20
14547 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes
Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi
Abstract:
Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the effect estimation of these variables for the disease's progression. The objective of the study is to determine, through advanced statistical methods, how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other, and what their effects on kidney disease progression are. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate with each other, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of biochemical and clinical variables on the ordinal response variable (progression of kidney function), considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetes cohort in a polyclinic hospital at the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with a negative loading of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (p < 0.001). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.
Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes
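A sketch of the PCA-then-ordered-logit pipeline follows, using statsmodels' `OrderedModel` (a cumulative-odds model); the biochemical data, the three-stage outcome, and the two retained components are all synthetic and illustrative, not the Messina cohort.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.miscmodels.ordinal_model import OrderedModel

# synthetic stand-in for the clinic's biochemical panel
rng = np.random.default_rng(0)
n = 300
hba1c = rng.normal(7.5, 1.2, n)
glycemia = hba1c * 20 + rng.normal(0, 15, n)     # correlated with HbA1c
creatinine = rng.normal(1.0, 0.3, n)
X = StandardScaler().fit_transform(np.column_stack([hba1c, glycemia, creatinine]))

pcs = PCA(n_components=2).fit_transform(X)       # uncorrelated components

# ordinal outcome: CKD stage 0/1/2 (toy generation for illustration)
latent = -0.8 * pcs[:, 0] + rng.logistic(0, 1, n)
stage = pd.Series(pd.Categorical(np.digitize(latent, [-1, 1]), ordered=True))

res = OrderedModel(stage, pcs, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
print("odds ratios:", np.exp(res.params[:2]).round(3))
```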
Procedia PDF Downloads 39
14546 Dynamic Modeling of Energy Systems Adapted to Low Energy Buildings in Lebanon
Authors: Nadine Yehya, Chantal Maatouk
Abstract:
Low energy buildings have been developed to meet global climate commitments by reducing energy consumption. They comprise energy-efficient buildings, zero-energy buildings, positive-energy buildings and passive houses. The reduced energy demands of low energy buildings call for advanced building energy modeling that focuses on studying active building systems such as heating, cooling and ventilation, on improving system performance, and on developing control systems. Modeling and building simulation have expanded to cover different modeling approaches, i.e., detailed physical models, dynamic empirical models, and hybrid approaches, which are adopted by various simulation tools. This paper uses DesignBuilder with the EnergyPlus simulation engine in order to, first, study the impact of efficiency measures on building energy behavior by comparing a low energy residential model to a conventional one in Beirut, Lebanon; second, choose the appropriate energy systems for the studied case, which is characterized by a substantial cooling demand; and third, study the dynamic modeling of the Variable Refrigerant Flow (VRF) system in EnergyPlus, which is chosen due to its advantages over other systems and its availability in the Lebanese market. Finally, simulating different energy system models with different modeling approaches is necessary to compare the approaches and to investigate the interaction between energy systems and the building envelope, which affects the total energy consumption of low energy buildings.
Keywords: physical model, variable refrigerant flow heat pump, dynamic modeling, EnergyPlus, the modeling approach
Procedia PDF Downloads 222
14545 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance
Authors: Yash Bingi, Yiqiao Yin
Abstract:
Reduction of child mortality is an ongoing struggle and a commonly used indicator of progress in the medical field. The number of under-5 deaths is around 5 million worldwide, with many of the deaths being preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool for determining fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus to determine the risk of child mortality. However, interpreting the results of the CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation of Black-box Models (RISE) was created, called Feature Alteration for explanation of Black Box Models (FAB), and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and determine which features were most influential in the process.
Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations
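The perturbation idea behind RISE/FAB can be illustrated on tabular data with a permutation-style stand-in: alter one feature at a time and record the drop in model accuracy. This is not the paper's FAB algorithm, just a minimal sketch of the same black-box principle.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# synthetic stand-in for CTG measurements
X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           random_state=0)
model = SVC().fit(X, y)
base_acc = model.score(X, y)

rng = np.random.default_rng(0)
importance = []
for j in range(X.shape[1]):
    X_alt = X.copy()
    X_alt[:, j] = rng.permutation(X_alt[:, j])   # destroy feature j's signal
    importance.append(base_acc - model.score(X_alt, y))

for j, imp in sorted(enumerate(importance), key=lambda t: -t[1]):
    print(f"feature {j}: accuracy drop {imp:+.3f}")
```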
Procedia PDF Downloads 144
14544 Hardware Co-Simulation Based Direct Torque Control for Induction Motor Drive
Authors: Hanan Mikhael Dawood, Haider Salim, Jafar Al-Wash
Abstract:
This paper presents a Proportional-Integral (PI) controller that improves the system performance, giving better torque and flux response. In addition, it reduces the undesirable torque ripple. The conventional DTC approach for induction machines, based on an improved torque and stator flux estimator, is implemented using Xilinx System Generator (XSG) for the MATLAB/Simulink environment through Xilinx blocksets. The design was achieved in VHDL, based on a MATLAB/Simulink simulation model. The hardware-in-the-loop results were obtained by implementing the proposed model on the Xilinx NEXYS2 Spartan 3E1200 FG320 kit.
Keywords: induction motor, Direct Torque Control (DTC), Xilinx FPGA, motor drive
Procedia PDF Downloads 622
14543 Human Behavior Modeling in Video Surveillance of Conference Halls
Authors: Nour Charara, Hussein Charara, Omar Abou Khaled, Hani Abdallah, Elena Mugellini
Abstract:
In this paper, we present a human behavior modeling approach for video scenes. This approach is used to model the normal behaviors in conference halls. We exploited the Probabilistic Latent Semantic Analysis (PLSA) technique, using the 'Bag-of-Terms' paradigm, as a tool for exploring video data to learn the model by grouping similar activities. Our term vocabulary consists of 3D spatio-temporal patch groups assigned by the direction of motion. Our video representation captures the spatial information, the object trajectory, and the motion. The main advantage of this approach is that it can be adapted to detect abnormal behaviors in order to ensure and enhance human security.
Keywords: activity modeling, clustering, PLSA, video representation
Procedia PDF Downloads 394
14542 The Effect of Artificial Intelligence on Banking Development and Progress
Authors: Mina Malak Hanna Saad
Abstract:
The development of advanced information technology has become a vital factor in the financial services industry, especially banking. It has brought new ways of delivering banking to the customer, such as online banking. Banks began to consider electronic banking (e-banking) as a means to replace some of their traditional branch functions, using the internet as a new distribution channel. Some customers have at least one account at more than one bank and access these accounts through e-banking services. To check their current net worth, such customers must log into each of their accounts, get the details, and work towards consolidation. This is not only time-consuming but also a repetitive activity with a certain frequency. To address this problem, the concept of account aggregation was introduced as a solution. Account aggregation in e-banking, as one form of electronic banking, appears to build a stronger relationship with customers. An account aggregation service generally refers to a service that allows customers to manage their bank accounts held at different institutions through a common online banking platform that places a high priority on security and data protection. This article provides an overview of the account aggregation approach in e-banking as a distinct service in the e-banking field.
Keywords: compatibility, complexity, mobile banking, observation, risk banking technology, Internet banks, modernization of banks, banks, account aggregation, security, enterprise development, e-banking
Procedia PDF Downloads 36
14541 Assessing Transition to Renewable Energy for Transportation in Indonesia through Drop-in Biofuel Utilization
Authors: Maslan Lamria, Ralph E. H. Sims, Tatang H. Soerawidjaja
Abstract:
To increase its self-sufficiency in transportation fuel, Indonesia is currently developing the commercial production and use of drop-in biofuel (DBF) from vegetable oil. To maximize the level of success, it is necessary to gain insights into how the implementation would develop, as well as into any important factors. This study assessed the dynamics of the transition from the existing fossil fuel system to a renewable fuel system, which involves the transition from existing biodiesel to the projected DBF. A system dynamics approach was applied, and a model was developed to simulate the dynamics of the liquid biofuel transition. The use of palm oil feedstock was taken as a case study to assess the projected DBF implementation by 2045. The set of model indicators includes liquid fuel self-sufficiency, liquid biofuel share, foreign exchange savings and greenhouse-gas emissions reduction. The model outputs showed that support for DBF investment and use plays an important role in the transition progress. Under assumptions that include the application of a maximum level of support over time, liquid fuel self-sufficiency would still not be achieved, with palm biofuel contributing only 0.2. Thus, other feedstocks, such as algae and oil feedstock from marginal lands, need to be developed synergistically. Regarding support for DBF use, this study recommends that removal of the fossil fuel subsidy would be necessary before a carbon tax policy can be applied effectively.
Keywords: biofuel, drop-in biofuel, energy transition, liquid fuel
Procedia PDF Downloads 145
14540 Long-Term Trends of Sea Level and Sea Surface Temperature in the Mediterranean Sea
Authors: Bayoumy Mohamed, Khaled Alam El-Din
Abstract:
In the present study, 24 years of gridded sea level anomalies (SLA) from satellite altimetry and sea surface temperature (SST) from Advanced Very High Resolution Radiometer (AVHRR) daily data (1993-2016) are used. These data have been used to investigate the rates of sea level rise and SST warming, and their spatial distribution in the Mediterranean Sea. The results revealed a significant sea level rise in the Mediterranean Sea of 2.86 ± 0.45 mm/year, together with a significant warming of 0.037 ± 0.007 °C/year. The high spatial correlation between sea level and SST variations suggests that at least part of the sea level change reported during the period of study was due to heating of the surface layers. This indicates that the steric effect had a significant influence on sea level change in the Mediterranean Sea.
Keywords: altimetry, AVHRR, Mediterranean Sea, sea level and SST changes, trend analysis
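Trends like these are typically obtained by least-squares fitting of a linear trend (plus a seasonal cycle) to each grid cell's time series. A minimal sketch on synthetic monthly SLA data follows; the basin-mean 2.86 mm/yr reported above is used only to generate the toy series.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(24 * 12)                 # 1993-2016, monthly samples
t_years = months / 12.0
true_trend = 2.86                           # mm/yr, the reported basin mean
sla = (true_trend * t_years
       + 15 * np.sin(2 * np.pi * t_years)   # seasonal cycle
       + rng.normal(0, 8, months.size))     # noise

# design matrix: trend, intercept, annual harmonic
A = np.column_stack([t_years, np.ones_like(t_years),
                     np.sin(2 * np.pi * t_years), np.cos(2 * np.pi * t_years)])
coef, *_ = np.linalg.lstsq(A, sla, rcond=None)
trend = coef[0]

# standard error of the trend from the residual variance
resid = sla - A @ coef
dof = sla.size - A.shape[1]
cov = np.linalg.inv(A.T @ A) * (resid @ resid / dof)
print(f"trend = {trend:.2f} +/- {np.sqrt(cov[0, 0]):.2f} mm/yr")
```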
Procedia PDF Downloads 195