Search results for: estimating of trajectory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1092

612 Estimating the Probability of Winning the Best Actor/Actress Award Conditional on the Best Picture Nomination with Bayesian Hierarchical Models

Authors: Svetlana K. Eden

Abstract:

Movies and TV shows have long become part of modern culture. We all have our preferred genre, story, actors, and actresses. However, can we objectively discern good acting from the bad? As laymen, we are probably not objective, but what about the Oscar academy members? Are their votes based on objective measures? Oscar academy members are probably also biased due to many factors, including their professional affiliations or advertisement exposure. Heavily advertised films bring more publicity to their cast and are likely to have bigger budgets. Because a bigger budget may also help earn a Best Picture (BP) nomination, we hypothesize that best actor/actress (BA) nominees from BP-nominated movies would have higher chances of winning the award than those BA nominees from non-BP-nominated films. To test this hypothesis, three Bayesian hierarchical models are proposed, and their performance is evaluated. The results from all three models largely support our hypothesis. Depending on the proportion of BP nominations among BA nominees, the odds ratios (estimated over expected) of winning the BA award conditional on BP nomination vary from 2.8 [0.8, 7.0] to 4.3 [2.0, 15.8] for actors and from 1.5 [0.0, 12.2] to 5.4 [2.7, 14.2] for actresses.
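The conditional odds ratio at the heart of this abstract can be illustrated with a much simpler, non-hierarchical Bayesian sketch: independent flat Beta priors on the win probabilities with and without a BP nomination, sampled by Monte Carlo. The counts below are hypothetical, not the paper's data.

```python
import numpy as np

def posterior_odds_ratio(wins_bp, n_bp, wins_nobp, n_nobp, draws=100_000, seed=0):
    """Monte Carlo posterior for the odds ratio of winning the BA award
    given a BP nomination, under independent Beta(1, 1) priors on the
    two win probabilities (a flat, non-hierarchical simplification)."""
    rng = np.random.default_rng(seed)
    p_bp = rng.beta(1 + wins_bp, 1 + n_bp - wins_bp, draws)
    p_nobp = rng.beta(1 + wins_nobp, 1 + n_nobp - wins_nobp, draws)
    odds = (p_bp / (1 - p_bp)) / (p_nobp / (1 - p_nobp))
    return np.median(odds), np.percentile(odds, [2.5, 97.5])

# Hypothetical counts: 30 of 60 BP-nominated BA nominees won,
# versus 12 of 60 nominees without a BP nomination.
or_med, (lo, hi) = posterior_odds_ratio(30, 60, 12, 60)
```

The paper's hierarchical models pool information across years and categories; this flat version only conveys how a posterior interval for the conditional odds ratio arises.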

Keywords: Oscar, best picture, best actor/actress, bias

Procedia PDF Downloads 215
611 Africa as Endemically a War Continent: Explaining the Changing Pattern of Armed Conflicts in Africa

Authors: Kenneth Azaigba

Abstract:

The history of post-colonial African states has been dubbed a history of endemic warfare in the existing literature. Indeed, Africa's political environment is characterized by a multiplicity of threats to peace and security. Africa's leading drivers of conflict include abundant (especially mineral) resources, personal rule and attendant political authoritarianism, manipulation of identity politics across ethnicity, marginalization of communities, and electoral malpractices resulting in contested legitimacy and resultant violence. However, the character of armed conflicts in Africa is changing. This paper attempts to reconstruct the trajectory of armed conflicts in Africa and explain their changing pattern. The paper contends that large-scale political violence in Africa is on the decline, rendering the endemic-warfare thesis an inappropriate paradigm for explaining political conflicts in Africa. It also posits that though small-scale conflicts are springing up and exhibiting trans-border dimensions, these patterns of armed conflict are not peculiar to Africa but part of emerging waves of global conflicts. The paper explains that the shift in the scale of warfare in Africa is a function of a multiplicity of post-Cold War global contradictions. Inclusive governance, social justice, and economic security are articulated as workable remedies for mitigating warfare in Africa.

Keywords: Africa, conflicts, pattern, war

Procedia PDF Downloads 378
610 Evaluating the Influence of Financial Technology (FinTech) on Sustainable Finance: A Comprehensive Global Analysis

Authors: Muhammad Kashif

Abstract:

The primary aim of this paper is to investigate the influence of financial technology (FinTech) on sustainable finance. The sample spans 2010 to 2021 and encompasses data from 89 countries worldwide. The study employed a two-stage least squares (2SLS) regression approach with instrumental variables and validated the findings using a two-step system generalized method of moments (GMM). The findings indicate that FinTech has a significant favorable impact on sustainable finance. Other factors, such as institutional quality, socio-economic conditions, and renewable energy, also have a significant and beneficial influence on the trajectory of sustainable finance, whereas globalization's impact is positive but insignificant. Furthermore, FinTech is crucial in driving the transition toward a sustainable future characterized by a lower-carbon economy. The study found that FinTech has extensive application across various sectors of sustainable finance and has substantial potential to create long-term positive effects. FinTech can also integrate extensively with other technologies to facilitate diversified growth in sustainable finance. Additionally, this study highlights FinTech-related trends and research opportunities in sustainable finance, showing how the two can promote each other worldwide, with important policy implications for countries looking to advance sustainable finance through technology.
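The 2SLS procedure the study relies on can be sketched in plain NumPy on simulated data: regress the endogenous regressor on the instruments (first stage), then regress the outcome on the fitted values (second stage). The data and variable names below are illustrative, not the study's panel.

```python
import numpy as np

def two_stage_least_squares(y, X_endog, Z, X_exog=None):
    """Plain 2SLS: regress the endogenous regressor on the instruments
    (first stage), then regress y on the fitted values (second stage).
    Adds an intercept; X_exog holds any included exogenous controls."""
    n = len(y)
    ones = np.ones((n, 1))
    exog = ones if X_exog is None else np.hstack([ones, X_exog])
    # First stage: project the endogenous variable onto instruments + exogenous.
    W = np.hstack([exog, Z])
    X_hat = W @ np.linalg.lstsq(W, X_endog, rcond=None)[0]
    # Second stage: OLS of y on the fitted endogenous variable + exogenous.
    D = np.hstack([exog, X_hat])
    return np.linalg.lstsq(D, y, rcond=None)[0]  # [intercept, ..., endogenous coef]

# Simulated data with a known coefficient of 2.0 on the endogenous regressor.
rng = np.random.default_rng(1)
z = rng.normal(size=(500, 1))          # instrument
u = rng.normal(size=(500, 1))          # unobserved confounder
x = z + u + 0.1 * rng.normal(size=(500, 1))
y = 2.0 * x + u + 0.1 * rng.normal(size=(500, 1))
beta = two_stage_least_squares(y, x, z)
```

Ordinary OLS on this data would be biased by the confounder `u`; 2SLS recovers a coefficient near the true 2.0 because the instrument is correlated with `x` but not with `u`.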

Keywords: sustainable development goals (SDGs), financial technology (FinTech), genuine savings index (GSI), financial stability index, sustainable finance

Procedia PDF Downloads 117
609 Towards an Intelligent Ontology Construction Cost Estimation System: Using BIM and New Rules of Measurement Techniques

Authors: F. H. Abanda, B. Kamsu-Foguem, J. H. M. Tah

Abstract:

Construction cost estimation is one of the most important aspects of construction project design. For generations, the process of cost estimating has been manual, time-consuming, and error-prone. This has led many cost estimates to be unclear and riddled with inaccuracies that at times lead to over- or under-estimation of construction cost. The development of standard sets of measurement rules understandable by all those involved in a construction project has not fully solved these challenges. Emerging Building Information Modelling (BIM) technologies can exploit standard measurement methods to automate the cost estimation process and improve accuracy. This requires standard measurement methods to be structured in an ontological, machine-readable format so that BIM software packages can easily read them. Most standard measurement methods are still text-based in textbooks and require manual transcription into tables or spreadsheets during cost estimation. The aim of this study is to explore the development of an ontology based on the New Rules of Measurement (NRM) commonly used in the UK for cost estimation. The methodology adopted is Methontology, one of the most widely used ontology engineering methodologies. The challenges encountered in this exploratory study are also reported, and recommendations for future studies are proposed.
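The "ontological, machine-readable format" idea can be conveyed with a minimal, hypothetical sketch: one NRM-style measurement rule encoded as subject-predicate-object triples that software can query instead of parsing textbook text. The item names, prefixes, and the rule text below are illustrative inventions, not actual NRM content.

```python
# A minimal, hypothetical sketch of one measurement rule as triples,
# so a BIM tool could query it rather than parse prose.  The element
# names, predicates, and units are illustrative, not taken from NRM.
triples = [
    ("nrm:WallFinish", "rdf:type", "nrm:WorkItem"),
    ("nrm:WallFinish", "nrm:measurementUnit", "m2"),
    ("nrm:WallFinish", "nrm:measurementRule",
     "measure net area, deduct openings over 0.5 m2"),
]

def query(subject, predicate, store):
    """Return all objects matching a subject/predicate pair."""
    return [o for s, p, o in store if s == subject and p == predicate]

unit = query("nrm:WallFinish", "nrm:measurementUnit", triples)
```

A real implementation would use an RDF/OWL toolchain rather than Python tuples, but the queryable triple structure is the point: a cost-estimation package can ask for a work item's unit and rule programmatically.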

Keywords: BIM, construction projects, cost estimation, NRM, ontology

Procedia PDF Downloads 546
608 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is a control technique for overcoming periodic disturbances in repetitive systems. It drives the error signal toward zero as the number of operations increases. The learning process in this context depends strongly on the initial input, which, if selected properly, makes learning more effective than when the system starts blind. ILC uses previously recorded execution data to update the input of the following execution/trial so that a reference trajectory is followed to high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant at trial 1, so a good choice of initial input signal makes learning faster and, as a consequence, drives the error to zero faster as well. In the work presented here, an upper limit based on singular values (SV) is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, such as a robot arm, is required to move. Simulation results illustrate the theory introduced in this paper.
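The singular-value bound can be sketched numerically: for a lifted plant matrix G with y = G u, the inequality ||y|| <= sigma_max(G) * ||u|| means that scaling the trial-1 input to ||u|| <= y_limit / sigma_max(G) guarantees the first-trial output stays inside the envelope. The plant below is a random illustrative matrix, not the paper's model.

```python
import numpy as np

# Singular-value bound on the trial-1 input for a lifted plant y = G @ u.
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 50)) * 0.1          # hypothetical lifted plant matrix
sigma_max = np.linalg.svd(G, compute_uv=False)[0]

y_limit = 1.0                                 # allowed output norm (envelope)
u_max = y_limit / sigma_max                   # maximum safe trial-1 input norm

u0 = rng.normal(size=50)
u0 = u0 / np.linalg.norm(u0) * u_max          # scale trial-1 input to the bound
y0 = G @ u0                                   # first-trial output
```

Because sigma_max is the operator norm of G, no direction of `u0` at this norm can push `y0` past the limit; that is exactly the conservatism the abstract's bound trades for safety.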

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 235
607 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain a relatively small sample size compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection: it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the "optimal" value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidate tuning parameters first, then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show on real and simulated data sets that the value selected by the suggested methods often leads to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
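The candidate-averaging idea can be sketched with ridge regression: score a grid of penalties by cross-validation, then combine them with weights that decay with CV error instead of keeping the single minimizer. The exponential weighting below is one simple choice for illustration; the paper's exact weighting scheme may differ.

```python
import numpy as np

def cv_score(X, y, lam, k=5):
    """Mean squared validation error of ridge regression with penalty lam."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    errs = []
    for idx in folds:
        mask = np.ones(n, bool)
        mask[idx] = False
        Xtr, ytr = X[mask], y[mask]
        beta = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
        errs.append(np.mean((X[idx] @ beta - y[idx]) ** 2))
    return np.mean(errs)

# Simulated small-n data, as in the microarray setting.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
beta_true = np.zeros(10)
beta_true[:3] = 2.0
y = X @ beta_true + rng.normal(size=60)

# Average a grid of candidate penalties, weighting by CV performance.
lams = np.logspace(-2, 2, 9)
scores = np.array([cv_score(X, y, l) for l in lams])
w = np.exp(-scores / scores.min())
w /= w.sum()
lam_avg = float(np.sum(w * lams))
```

Averaging smooths out the sampling noise that makes the single CV minimizer jump between grid points on small samples.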

Keywords: cross validation, parameter averaging, parameter selection, regularization parameter search

Procedia PDF Downloads 411
606 The Influence of Collaboration on Individual Writing Quality: The Case of Iranian vs. Malaysian Freshers

Authors: Seyed Yasin Yazdi-Amirkhiz, Azirah Hashim

Abstract:

This study set out to comparatively investigate the influence of collaborative writing on the quality of individual writing of four female Iranian and four female Malaysian students. The first-semester students at a private university in Malaysia, who were homogeneous in terms of age, gender, study discipline, and language proficiency, were divided into two Iranian and two Malaysian dyads. The dyads performed collaborative writing tasks for 15 sessions; after every three consecutive collaborative writing sessions, each participant was asked to individually attempt a writing task. Both collaborative and individual writing tasks comprised isomorphic graphic prompts (IELTS Academic Module Task 1). The writing quality of the five individually produced texts during the study was scored in terms of task achievement (TA), cohesion/coherence (C/C), grammatical range/accuracy (GR/A), and lexical resources (LR). The findings indicated a hierarchy of development in TA and C/C among all the students, while LR showed minor improvement only among three of the Malaysian students, and GR/A barely exhibited any progress among all the participants. Intermittent progressions and regressions were also discerned in the trajectory of their writing development. The findings are discussed in light of socio-cultural and emergentist perspectives, the typology of tasks used, and the role of the participants' level of language proficiency.

Keywords: collaborative writing, writing quality, individual writing, collaboration

Procedia PDF Downloads 452
605 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating the parameters of logistic regression in the MapReduce framework with RHadoop, which integrates R with a Hadoop environment applicable to large-scale data. There are three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method, and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while gradient descent and cost minimization need a manually chosen learning rate. The experimental results demonstrate that our learning algorithms using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods; the results showed that the Newton-Raphson method was the most robust across all data tested.
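The Newton-Raphson update the abstract favors can be shown compactly (in Python rather than R, purely for illustration): each iteration solves a weighted least-squares system, so no learning rate is needed. The data below is simulated.

```python
import numpy as np

def logistic_newton(X, y, iters=25):
    """Newton-Raphson for logistic regression: no learning rate needed,
    unlike gradient descent.  X should include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        W = p * (1 - p)                        # IRLS weights
        grad = X.T @ (y - p)
        # Small ridge term guards against a near-singular Hessian.
        hess = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])
true_beta = np.array([0.5, 1.0, -2.0])
y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
beta_hat = logistic_newton(X, y)
```

In the MapReduce setting, the gradient and Hessian sums decompose over data partitions: each mapper computes its partial `X.T @ (y - p)` and `X.T @ (X * W)`, and the reducer adds them before the solve.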

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 271
604 Nonparametric Path Analysis with Truncated Spline Approach in Modeling Rural Poverty in Indonesia

Authors: Usriatur Rohma, Adji Achmad Rinaldo Fernandes

Abstract:

Nonparametric path analysis is a statistical method that does not rely on the assumption that the curve is known. The purpose of this study is to determine the best nonparametric truncated spline path function between linear and quadratic polynomial degrees with 1, 2, and 3-knot points and to determine the significance of estimating the best nonparametric truncated spline path function in the model of the effect of population migration and agricultural economic growth on rural poverty through the variable unemployment rate using the t-test statistic at the jackknife resampling stage. The data used in this study are secondary data obtained from statistical publications. The results showed that the best model of nonparametric truncated spline path analysis is quadratic polynomial degree with 3-knot points. In addition, the significance of the best-truncated spline nonparametric path function estimation using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
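The truncated spline basis behind the abstract's best model (quadratic polynomial, 3 knots) is easy to construct explicitly: polynomial terms up to the chosen degree plus one truncated power term per knot. The knot locations below are illustrative.

```python
import numpy as np

def truncated_spline_basis(x, degree, knots):
    """Design matrix for a truncated power spline:
    [1, x, ..., x^d, (x - k1)_+^d, (x - k2)_+^d, ...]."""
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.where(x > k, (x - k) ** degree, 0.0) for k in knots]
    return np.column_stack(cols)

# Quadratic basis with 3 knots, matching the best model reported above.
x = np.linspace(0, 10, 101)
B = truncated_spline_basis(x, degree=2, knots=[2.5, 5.0, 7.5])
```

Each path function in the model is then a linear combination of these columns, estimated like a regression; the truncated terms let the fitted curve change shape at each knot while staying smooth.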

Keywords: nonparametric path analysis, truncated spline, linear, quadratic, rural poverty, jackknife resampling

Procedia PDF Downloads 36
603 Enhancing Transfer Path Analysis with In-Situ Component Transfer Path Analysis for Interface Forces Identification

Authors: Raef Cherif, Houssine Bakkali, Wafaa El Khatiri, Yacine Yaddaden

Abstract:

The analysis of how vibrations are transmitted between components is required in many engineering applications. Transfer path analysis (TPA) has been a valuable engineering tool for solving noise, vibration, and harshness (NVH) problems using sub-structuring applications. The most challenging part of a TPA analysis is estimating the equivalent forces at the contact points between the active and passive sides. The in-situ component TPA method calculates these forces by inverting the frequency response functions (FRFs) measured on the passive subsystem, relating the motion at indicator points to forces at the interface. However, the matrix inversion can pose problems due to the ill-conditioning of the matrices, leading to inaccurate results. This paper establishes a TPA model for an academic system consisting of two plates linked by four springs. A numerical study has been performed to improve the interface force identification. Several parameters are studied and discussed, such as singular value rejection and the number and position of the indicator points used in the matrix inversion.
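The singular-value rejection step can be sketched directly: build the pseudo-inverse of the FRF matrix from its SVD, discarding singular values below a relative tolerance so the ill-conditioned inversion does not amplify measurement noise. The matrix sizes and noise level below are illustrative, not the two-plate model's.

```python
import numpy as np

def regularized_pinv(H, rel_tol=1e-2):
    """Pseudo-inverse of an FRF matrix H with singular-value rejection:
    singular values below rel_tol * sigma_max are discarded."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > rel_tol * s[0]
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return (Vt.T * s_inv) @ U.T

# Estimate interface forces f from indicator responses y = H f + noise.
rng = np.random.default_rng(0)
H = rng.normal(size=(12, 4))        # 12 indicator points, 4 interface forces
f_true = np.array([1.0, -0.5, 0.3, 0.8])
y = H @ f_true + 1e-3 * rng.normal(size=12)
f_est = regularized_pinv(H) @ y
```

Over-determination (more indicator points than interface forces, as here) is what makes the rejection threshold safe to apply: the discarded directions carry mostly noise, not force information.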

Keywords: transfer path analysis, matrix inverse method, indicator points, SVD decomposition

Procedia PDF Downloads 78
602 Comparison of Statistical Methods for Estimating Missing Precipitation Data in the River Subbasin Lenguazaque, Colombia

Authors: Miguel Cañon, Darwin Mena, Ivan Cabeza

Abstract:

This work compared and evaluated the applicability of statistical methods for the estimation of missing precipitation data in the basin of the river Lenguazaque, located in the departments of Cundinamarca and Boyacá, Colombia. The methods used were simple linear regression, the distance-ratio method, local averages, mean rates, correlation with nearby stations, and multiple regression. The effectiveness of the methods was assessed using three statistical tools: the correlation coefficient (r2), the standard error of estimation, and the Bland-Altman agreement test. The analysis was performed by randomly removing real rainfall values at each of the stations and then estimating the missing values with the methodologies mentioned. It was determined that the method with the highest performance and accuracy under the conditions considered was multiple regression with three nearby stations, using a random application scheme supported by the precipitation behavior of related data sets.
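One of the simpler estimators in the comparison, the mean-rates (normal-ratio) method, fits in a few lines: each neighbour's observation is scaled by the ratio of the target station's long-term normal to the neighbour's, then the scaled values are averaged. The station values below are hypothetical.

```python
import numpy as np

def normal_ratio_estimate(neighbor_values, neighbor_normals, target_normal):
    """Normal-ratio (mean-rates) estimate of a missing precipitation value:
    scale each neighbour's observation by the ratio of the target
    station's long-term normal to the neighbour's, then average."""
    ratios = target_normal / np.asarray(neighbor_normals)
    return float(np.mean(ratios * np.asarray(neighbor_values)))

# Hypothetical monthly totals (mm) at three nearby stations, their
# long-term normals, and a target-station normal of 80 mm.
est = normal_ratio_estimate([100.0, 90.0, 110.0], [120.0, 100.0, 130.0], 80.0)
```

The multiple-regression method that won the comparison generalizes this: instead of fixed climatological ratios, the weights on the neighbouring stations are fitted from the overlapping record.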

Keywords: statistical comparison, precipitation data, river subbasin, Bland-Altman

Procedia PDF Downloads 463
601 Wind Wave Modeling Using MIKE 21 SW Spectral Model

Authors: Pouya Molana, Zeinab Alimohammadi

Abstract:

Determining wind wave characteristics is essential for coastal and marine engineering projects such as designing coastal and marine structures and estimating sediment transport and coastal erosion rates. To predict significant wave height (Hs), this study applies the third-generation spectral wave model MIKE 21 SW along with the CEM method. For SW model calibration and verification, two data sets of meteorological data and wave spectra are used. The model was driven by time-varying wind forcing, and the results showed that the mean difference ratio, the standard deviation of the difference ratio, and the correlation coefficient of the SW model for the Hs parameter are 1.102, 0.279, and 0.983, respectively, whereas the corresponding values for the CEM method are 0.869, 1.317, and 0.8359. Comparing these results reveals that the CEM method has larger errors than the MIKE 21 SW third-generation spectral wave model, and that a higher correlation coefficient does not necessarily mean higher accuracy.

Keywords: MIKE 21 SW, CEM method, significant wave height, difference ratio

Procedia PDF Downloads 394
600 Comparative Assessment of a Distributed Model and a Lumped Model for Estimating of Sediments Yielding in Small Urban Areas

Authors: J.Zambrano Nájera, M.Gómez Valentín

Abstract:

Increases in urbanization during the twentieth century have brought increased sediment production as a major problem. Hydraulic erosion is one of the major causes of increased sediment in small urban catchments. Such increments in sediment yield in headwater urban catchments can cause obstruction of drainage systems, making it impossible to capture urban runoff, increasing runoff volumes, and thus exacerbating urban flooding problems. For these reasons, the study of sediment production in urban watersheds is increasingly important for properly analyzing and solving sediment-related problems. The study of sediment production has improved with the use of mathematical modeling. For that reason, a new physically based model is proposed, applicable to small headwater urban watersheds, that includes the advantages of distributed physically based models but with more realistic data requirements. Additionally, in this paper the proposed model is compared with a lumped model, reviewing the results and the advantages and disadvantages of both.

Keywords: erosion, hydrologic modeling, urban runoff, sediment modeling, sediment yielding, urban planning

Procedia PDF Downloads 340
599 Craniopharyngiomas: Surgical Techniques: The Combined Interhemispheric Sub-Commissural Translaminaterminalis Approach to Tumors in and Around the Third Ventricle: Neurological and Functional Outcome

Authors: Pietro Mortini, Marco Losa

Abstract:

Objective: Resection of large lesions growing into the third ventricle remains a demanding surgery, at times with a risk of severe post-operative complications. Transcallosal and transcortical routes have been considered the approaches of choice for accessing the third ventricle; however, neurological consequences such as memory loss have been reported. We report clinical results of the previously described combined interhemispheric sub-commissural translamina terminalis approach (CISTA) for the resection of large lesions located in the third ventricle. Methods: The authors conducted a retrospective analysis of 10 patients who were operated on through the CISTA for the resection of lesions growing into the third ventricle. Results: Total resection was achieved in all cases. Cognitive worsening occurred in only one case. No perioperative deaths were recorded, and at last follow-up, all patients were alive. One year after surgery, 80% of patients had an excellent outcome with a KPS 100 and Glasgow Outcome Score (GOS). Conclusion: The CISTA represents a safe and effective alternative to transcallosal and transcortical routes for resecting lesions growing into the third ventricle. It allows a multi-angle trajectory to access the third ventricle with a wide working area free from critical neurovascular structures, without any section of the corpus callosum, the anterior commissure, or the fornix.

Keywords: craniopharyngioma, surgery, sub-commissural translamina terminalis approach (CISTA)

Procedia PDF Downloads 289
598 Influence of Physical Properties on Estimation of Mechanical Strength of Limestone

Authors: Khaled Benyounes

Abstract:

Determination of rock mechanical properties such as the unconfined compressive strength (UCS), Young's modulus E, and the tensile strength from the Brazilian test (Rtb) is considered the most important component of drilling and mining engineering projects. Research aimed at establishing correlations between the strength and physical parameters of rocks has always been of interest to mining and reservoir engineering. To this end, many limestone blocks were collected from the quarry located in Meftah (Algeria), and cores were prepared in the laboratory using a core drill. This work examines the relationships between mechanical properties and some physical properties of limestone. Many empirical equations are established between UCS and physical properties of the limestone (such as dry bulk density, P-wave velocity, dynamic Young's modulus, alteration index, and total porosity). Other correlations, between UCS and tensile strength and between the dynamic and static Young's moduli, have also been found. Based on the Mohr-Coulomb failure criterion, we were able to establish mathematical relationships for estimating the cohesion and internal friction angle from the UCS and the indirect tensile strength. Results from this study can be useful to the mining industry for resolving a range of geomechanical problems such as slope stability.
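The Mohr-Coulomb step can be made concrete. Under the standard criterion, UCS = 2c·cos(phi)/(1 − sin(phi)) and the theoretical tensile strength Rtb = 2c·cos(phi)/(1 + sin(phi)); dividing the two gives sin(phi) = (UCS − Rtb)/(UCS + Rtb), from which cohesion follows. This is the textbook relation, not necessarily the paper's fitted one, and the input values below are illustrative.

```python
import math

def mohr_coulomb_params(ucs, tensile):
    """Cohesion c and friction angle phi (degrees) from the Mohr-Coulomb
    relations UCS = 2c cos(phi)/(1 - sin(phi)) and
    Rtb = 2c cos(phi)/(1 + sin(phi))."""
    sin_phi = (ucs - tensile) / (ucs + tensile)
    phi = math.asin(sin_phi)
    c = ucs * (1 - sin_phi) / (2 * math.cos(phi))
    return c, math.degrees(phi)

# Illustrative values in MPa, not measurements from the Meftah limestone.
c, phi_deg = mohr_coulomb_params(ucs=60.0, tensile=6.0)
```

Substituting the returned `c` and `phi_deg` back into the UCS relation reproduces the input, which is a quick sanity check on the algebra.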

Keywords: limestone, mechanical strength, Young’s modulus, porosity

Procedia PDF Downloads 443
597 Prediction Fluid Properties of Iranian Oil Field with Using of Radial Based Neural Network

Authors: Abdolreza Memari

Abstract:

In this article, a numerical method is used to estimate the viscosity of crude oil. We use this method to measure the crude oil's viscosity in three states: saturated oil viscosity, viscosity above the bubble point, and viscosity under the saturation pressure. The crude oil's viscosity is first estimated using the KHAN model and the roller ball method. Then, using these data, which include the effective conditions for measuring viscosity and the viscosity estimated by the presented method, a radial basis neural network is trained. This network is a kind of two-layered artificial neural network whose hidden-layer activation function is a Gaussian function, and standard teaching algorithms are used to train it. After training the radial basis neural network, the results of the experimental method and of the artificial intelligence are compared. Having trained this network, we are able to estimate the crude oil's viscosity without using the KHAN model or experimental conditions, and under any other condition, with acceptable accuracy. The results show that the radial basis network has a high capability of estimating crude oil viscosity; saving time and cost is another advantage of this investigation.
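The two-layer architecture described here, Gaussian hidden units followed by a linear output layer, can be sketched with a least-squares fit of the output weights (one common training choice; the paper's training algorithm may differ). The 1-D target function below is a stand-in for viscosity data, not oil-field measurements.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian hidden-layer activations for each sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def rbf_train(X, y, centers, width):
    """Solve the linear output layer by least squares."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

# Fit a smooth 1-D function as a proxy for viscosity vs. an input condition.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0])
centers = np.linspace(0, 1, 15)[:, None]
w = rbf_train(X, y, centers, width=0.1)
pred = rbf_predict(X, centers, width=0.1, w=w)
```

Because the hidden layer is fixed once centers and widths are chosen, training reduces to a linear problem, which is part of why RBF networks are attractive for fast property estimation.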

Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model

Procedia PDF Downloads 493
596 A Fast Algorithm for Electromagnetic Compatibility Estimation for Radio Communication Network Equipment in a Complex Electromagnetic Environment

Authors: C. Temaneh-Nyah

Abstract:

Electromagnetic compatibility (EMC) is the ability of Radio Communication Equipment (RCE) to operate with a desired quality of service in a given Electromagnetic Environment (EME) without creating harmful interference with other RCE. This paper presents an algorithm that improves the simulation speed of estimating the EMC of RCE in a complex EME, based on a stage-by-stage frequency-energy filtering criterion. The algorithm considers different interference types, including blocking and intermodulation. It consists of the following steps: a simplified energy criterion, where filtration is based on comparing the free-space interference level to the industrial noise; a frequency criterion, which checks whether the interfering emission characteristic overlaps with the receiver's channel characteristic; and lastly a detailed energy criterion, where the real channel interference level is compared to the noise level. At each of these stages, some interference cases are filtered out by the relevant criterion. This reduces the total number of pairwise combinations of RCE involved in the tedious detailed energy analysis and thus improves the simulation speed.
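The staged filtering can be sketched in a few lines: cheap criteria discard most interferer/receiver pairs before the expensive detailed analysis. The pair data, bands, and thresholds below are invented for illustration.

```python
# Stage-by-stage filtering of interferer/receiver pairs.  Each pair:
# (free-space interference level dB, tx band Hz, rx band Hz, channel level dB).
pairs = [
    (-120.0, (100e6, 101e6), (200e6, 201e6), -150.0),
    (-60.0,  (100e6, 101e6), (100.5e6, 102e6), -70.0),
    (-60.0,  (100e6, 101e6), (150e6, 151e6), -80.0),
    (-90.0,  (100e6, 101e6), (100e6, 101e6), -120.0),
]
NOISE_FLOOR = -100.0  # dB, assumed industrial noise level

def overlaps(a, b):
    """True if two (low, high) frequency bands intersect."""
    return a[0] < b[1] and b[0] < a[1]

survivors = []
for free_space, tx_band, rx_band, channel_level in pairs:
    if free_space <= NOISE_FLOOR:       # stage 1: simplified energy criterion
        continue
    if not overlaps(tx_band, rx_band):  # stage 2: frequency criterion
        continue
    if channel_level <= NOISE_FLOOR:    # stage 3: detailed energy criterion
        continue
    survivors.append((tx_band, rx_band))
```

Only the pairs that pass all three stages reach the full interference analysis; in this toy set, one pair out of four survives, which is the speed-up mechanism the abstract describes.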

Keywords: electromagnetic compatibility, electromagnetic environment, simulation of communication network

Procedia PDF Downloads 214
595 A Comparative Study of Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Extreme Value Theory (EVT) Model in Modeling Value-at-Risk (VaR)

Authors: Longqing Li

Abstract:

The paper addresses the inefficiency of the classical model in measuring Value-at-Risk (VaR) using a normal distribution or a Student's t distribution. Specifically, the paper focuses on the one-day-ahead VaR of major stock markets' daily returns in the US, UK, China, and Hong Kong over the most recent ten years at the 95% confidence level. To improve predictive power and search for the best performing model, the paper proposes two leading alternatives, Extreme Value Theory (EVT) and a family of GARCH models, and compares their relative performance. The main contribution can be summarized in two aspects. First, the paper extends the GARCH family by incorporating EGARCH and TGARCH to shed light on their differences in estimating the one-day-ahead VaR. Second, to account for the non-normality of financial market distributions, the paper applies the Generalized Error Distribution (GED), instead of the normal distribution, to govern the innovation term. A dynamic back-testing procedure is employed to assess the performance of each model, both the GARCH family and the conditional EVT. The conclusion is that exponential GARCH yields the best out-of-sample one-day-ahead VaR forecasts; moreover, the performance gap between the GARCH models and the conditional EVT is indistinguishable.
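The back-testing logic the paper relies on can be shown against the simplest baseline, historical-simulation VaR: compare each day's realized loss to the forecast and check that the violation rate matches the nominal 5%. This baseline stands in for the GARCH/EVT forecasts, which would replace the empirical quantile with model-based ones; the simulated returns below are illustrative.

```python
import numpy as np

def historical_var(returns, level=0.95):
    """One-day historical-simulation VaR: the empirical loss quantile."""
    return -np.quantile(returns, 1 - level)

def violation_rate(returns, var_forecasts):
    """Fraction of days the realized loss exceeded the VaR forecast;
    a well-calibrated 95% VaR should give roughly 5%."""
    return float(np.mean(returns < -np.asarray(var_forecasts)))

rng = np.random.default_rng(0)
rets = rng.standard_t(df=5, size=2500) * 0.01   # fat-tailed simulated returns
var95 = historical_var(rets)
viol = violation_rate(rets, np.full(2500, var95))
```

A dynamic back-test repeats this day by day on a rolling window; models are then ranked by how close their violation rate stays to the nominal level (often formalized with Kupiec-type tests).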

Keywords: Value-at-Risk, Extreme Value Theory, conditional EVT, backtesting

Procedia PDF Downloads 316
594 Presentation of a Mix Algorithm for Estimating the Battery State of Charge Using Kalman Filter and Neural Networks

Authors: Amin Sedighfar, M. R. Moniri

Abstract:

Determination of the state of charge (SOC) has become an increasingly important issue in all applications that include a battery. In fact, estimation of the SOC is a fundamental need for the battery, which is the most important energy storage element in Hybrid Electric Vehicles (HEVs), smart grid systems, drones, UPS units, and so on. For those applications, the SOC estimation algorithm is expected to be precise and easy to implement. This paper presents an online method for the estimation of the SOC of Valve-Regulated Lead Acid (VRLA) batteries. The proposed method uses the well-known Kalman Filter (KF) and Neural Networks (NNs), and all of the simulations have been done with MATLAB software. The NN is trained offline using data collected from the battery discharging process. A generic cell model is used whose underlying dynamic behavior comprises two capacitors (bulk and surface) and three resistors (terminal, surface, and end), where the SOC determined from the voltage represents the bulk capacitor. The aim of this work is to compare the performance of conventional integration-based SOC estimation methods with the mixed algorithm. Moreover, by including the effect of temperature, the final result becomes more accurate.
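The Kalman-filter half of the mixed algorithm can be sketched with a scalar filter (in Python rather than MATLAB, for illustration): coulomb counting serves as the process model, and a voltage measurement corrects the drift. The linear OCV curve v = 3.0 + 1.2·SOC and the noise variances are assumptions, not VRLA parameters, and the full paper's model has more states.

```python
import numpy as np

def kalman_soc(soc0, currents, voltages, dt, capacity, q=1e-7, r=1e-3):
    """Scalar Kalman filter for SOC: predict by coulomb counting,
    correct with a terminal-voltage measurement via a hypothetical
    linear OCV curve v = 3.0 + 1.2 * SOC."""
    soc, P = soc0, 1e-2
    H = 1.2                                   # d(voltage)/d(SOC)
    out = []
    for i, v in zip(currents, voltages):
        soc = soc - i * dt / capacity         # predict: coulomb counting
        P += q
        y = v - (3.0 + H * soc)               # innovation
        K = P * H / (H * P * H + r)           # Kalman gain
        soc += K * y                          # measurement update
        P *= (1 - K * H)
        out.append(soc)
    return np.array(out)

# Simulate a constant 1 A discharge of a 10 Ah cell for one hour.
dt, cap, n = 1.0, 10 * 3600.0, 3600           # seconds, coulombs, steps
true_soc = 1.0 - np.arange(1, n + 1) * dt / cap
rng = np.random.default_rng(0)
volts = 3.0 + 1.2 * true_soc + 0.01 * rng.normal(size=n)
est = kalman_soc(1.0, np.ones(n), volts, dt, cap)
```

In the paper's mixed scheme, the NN trained on discharge data supplies the nonlinear voltage/SOC mapping that the linear `H` stands in for here.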

Keywords: Kalman filter, neural networks, state-of-charge, VRLA battery

Procedia PDF Downloads 186
593 The Experiences of Agency in the Utilization of Twitter for English Language Learning in a Saudi EFL Context

Authors: Fahd Hamad Alqasham

Abstract:

This longitudinal study investigates Saudi students' use trajectory and experiences of Twitter as an innovative tool for in-class learning of the English language in a Saudi tertiary English as a foreign language (EFL) context over a 12-week semester. The study adopted van Lier's agency theory (2008, 2010) as the analytical framework to obtain an in-depth analysis of how the learners could utilize Twitter to create innovative ways to engage in English learning inside the language classroom. The study implemented a mixed methods approach with six data collection instruments: a research log, observations, focus group participation, initial and post-project interviews, and a post-project questionnaire. The study was conducted at Qassim University, specifically in the Preparatory Year Program (PYP) on the main campus. The sample included 25 male students studying at the first level of the PYP. The findings revealed that although Twitter's affordances initially played a crucial role in motivating the learners to exercise their agency inside the classroom to learn English, the contextual constraints, mainly anxiety, the university infrastructure, and the teacher's role, negatively influenced the sustainability of Twitter's use beyond week nine of its implementation.

Keywords: CALL, agency, innovation, EFL, language learning

Procedia PDF Downloads 69
592 Evaluation of Nutrition Supplement on Body Composition during Catch-Up Growth, in a Pre-Clinical Model of Growth Restriction

Authors: Bindya Jacob

Abstract:

The aim of the present study was to assess the quality of catch-up growth induced by an Oral Nutrition Supplement (ONS) in an animal model of growth restriction due to undernutrition. Quality of catch-up growth was assessed by the proportions of lean body mass (LBM) and fat mass (FM). Young SD rats were food-restricted at 70% of normal caloric intake for 4 weeks and re-fed at 120% of normal caloric intake for 4 weeks. The refeeding diet had 50% of calories from the animal diet and 50% from an ONS formulated for optimal growth. After refeeding, the quantity and quality of catch-up growth were measured, including weight, length, LBM, and FM. During nutrient restriction, the body weight and length of the animals were reduced compared to healthy controls; both LBM and FM were significantly lower than in healthy controls (p < 0.001). Refeeding with ONS resulted in increases in weight and length, with significant catch-up growth compared to baseline (p < 0.001). Detailed examination of body composition showed that the catch-up in body weight was due to a proportionate increase of LBM and FM, resulting in a final body composition similar to healthy controls. These data support the use of well-designed ONS for recovery from growth restriction due to undernutrition and a return to a normal growth trajectory characterized by a normal ratio of lean and fat mass.

Keywords: catch up growth, body composition, nutrient restriction, healthy growth

Procedia PDF Downloads 431
591 Meeting India's Energy Demand: U.S.-India Energy Cooperation under Trump

Authors: Merieleen Engtipi

Abstract:

India accounts for nearly 18% of the global population; however, its per capita energy consumption is only one-third of the global average. The demand and supply of electricity are uneven in the country, and around 240 million people have no access to electricity. With India's trajectory of modernisation and economic growth, the demand for energy is only expected to increase. India is at a crossroads: on the one hand facing increasing demand for energy, and on the other meeting its Paris climate policy commitments, while also struggling to provide energy efficiently. This paper analyses the policies to meet India's need for energy, as per capita energy consumption is likely to double within a 6-7 year period. Simultaneously, India's Paris commitment requires curbing carbon emissions from fossil fuels. There is an increasing need for renewables to be cheaply and efficiently available in the market, and for clean technology for extracting fossil fuels, in order to meet climate policy goals. Fossil fuels are the most significant source of energy in India; with the Paris Agreement, the demand for clean energy technology is increasing. Although the U.S. decided to withdraw from the Paris Agreement, the two countries plan to continue engaging bilaterally on energy issues. U.S.-India energy cooperation under the Trump administration is vital for greater energy security, transfer of technology, and efficiency in energy supply and demand.

Keywords: energy demand, energy cooperation, fossil fuels, technology transfer

Procedia PDF Downloads 248
590 Software Reliability Prediction Model Analysis

Authors: Lela Mirtskhulava, Mariam Khunjgurua, Nino Lomineishvili, Koba Bakuria

Abstract:

Software reliability prediction offers a way to measure the software failure rate at any point during system test, and a software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main cause of software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article, we focus on a software reliability model, assuming that there is time redundancy whose value (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may suffer not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the distribution function (DF) of the transmission time of an instruction sequence consisting of a random number of basic blocks. We consider the system software unreliable, with the time between adjacent failures exponentially distributed.
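As a rough illustration of this setup (a Monte Carlo sketch, not the authors' analytical derivation), the model of exponentially distributed inter-failure times with retransmission of a failed basic block can be simulated as follows; block length, failure rate, and sample counts are illustrative assumptions:

```python
import random

def sequence_time(n_blocks, block_time, failure_rate, rng):
    """Total transmission time for n_blocks when a failure occurring
    mid-block forces that block to be retransmitted (time redundancy).
    Inter-failure times are exponential, i.e., memoryless."""
    total = 0.0
    for _ in range(n_blocks):
        while True:
            t_fail = rng.expovariate(failure_rate)
            if t_fail >= block_time:
                total += block_time   # block goes through cleanly
                break
            total += t_fail           # failure: time lost, retransmit block
    return total

rng = random.Random(42)
samples = [sequence_time(10, 1.0, 0.1, rng) for _ in range(10_000)]
mean_time = sum(samples) / len(samples)
# Analytically, the expected time per block is (e^(lam*tau) - 1)/lam,
# about 1.052 for lam = 0.1, tau = 1, so roughly 10.5 for 10 blocks.
```

The empirical mean of the simulated sequence times can then be checked against the closed-form per-block expectation, which is the kind of distribution-level result the paper derives analytically.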

Keywords: exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability

Procedia PDF Downloads 460
589 Study of Electron Cyclotron Resonance Acceleration by Cylindrical TE₀₁₁ Mode

Authors: Oswaldo Otero, Eduardo A. Orozco, Ana M. Herrera

Abstract:

In this work, we present results from analytical and numerical studies of electron acceleration by a TE₀₁₁ cylindrical microwave mode in a static homogeneous magnetic field under the electron cyclotron resonance (ECR) condition. The stability of the orbits is analyzed using particle orbit theory. In order to better understand the wave-particle interaction, we decompose the azimuthal electric field component into a superposition of right- and left-hand circularly polarized standing waves. The trajectory, energy, and phase shift of the electron are found through a numerical solution of the relativistic Newton-Lorentz equation, discretized by finite differences using the Boris method. It is shown that an electron longitudinally injected with an energy of 7 keV at a radial position r = Rc/2, Rc being the cavity radius, is accelerated up to an energy of 90 keV by an electric field strength of 14 kV/cm at a frequency of 2.45 GHz. This energy can be used to produce X-rays for medical imaging. These results can serve as a starting point for studying the acceleration of electrons in a magnetic field that changes slowly in time (GYRAC), which has important applications such as the electron cyclotron resonance ion proton accelerator (ECR-IPAC) for cancer therapy, and for controlling plasma bunches with relativistic electrons.
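The Boris method mentioned above splits each velocity update into a half electric kick, a magnetic rotation, and another half electric kick; the rotation preserves the speed exactly, which is why the scheme is popular for cyclotron problems. A minimal non-relativistic sketch (the paper uses the relativistic form; the field values below are illustrative, with 0.0875 T being the resonant field for 2.45 GHz):

```python
import numpy as np

def boris_step(v, e_field, b_field, q_over_m, dt):
    """One non-relativistic Boris step: half electric kick,
    exact-magnitude magnetic rotation, half electric kick."""
    v_minus = v + 0.5 * q_over_m * e_field * dt
    t = 0.5 * q_over_m * b_field * dt
    s = 2.0 * t / (1.0 + t @ t)
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * q_over_m * e_field * dt

# Electron gyrating in a uniform axial field with no electric field:
# the speed must stay constant over many steps.
q_over_m = -1.759e11                      # C/kg, electron charge-to-mass ratio
b_field = np.array([0.0, 0.0, 0.0875])    # T, ECR-resonant field at 2.45 GHz
v = np.array([1.0e7, 0.0, 0.0])           # m/s
for _ in range(1000):
    v = boris_step(v, np.zeros(3), b_field, q_over_m, 1e-12)
speed = np.linalg.norm(v)
```

With a nonzero wave field, energy gain or loss then depends on the phase of the electron relative to the rotating field component, which is the phase-shift behaviour the paper tracks.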

Keywords: Boris method, electron cyclotron resonance, finite difference method, particle orbit theory, X-ray

Procedia PDF Downloads 154
588 The Revealed Preference Methods in Economic Valuation of Environmental Goods: A Review

Authors: Sara Sousa

Abstract:

Environmental goods and services have often been neglected in crucial decisions affecting the environment, mainly because of the difficulty of estimating their economic value, since we are dealing with non-market goods that have no associated price. Nevertheless, the absence of prices does not mean these goods have no value. The environment is a key element in today's society, which seeks to be as sustainable as possible, and environmental assets have both use and non-use values. To estimate the use value, researchers may apply revealed preference methods. This paper provides a theoretical review of the main concepts and methodologies in the economic valuation of the environment, with particular emphasis on revealed preference techniques. Based on a detailed literature review, this study concludes that, despite some inherent limitations, the revealed preference methodologies, namely travel cost, hedonic pricing, and averting behaviour, represent essential tools for researchers who accept the challenge of estimating the use value of environmental goods and services based on individuals' actual behaviour. The main purpose of this study is to contribute to increased theoretical information on the economic valuation of environmental assets, allowing researchers and policymakers to improve future decisions regarding the environment.
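To make the travel cost idea concrete (a hypothetical zonal example with invented figures, not data from the paper), one fits a demand curve of visit rates against travel cost and reads consumer surplus off the area under it:

```python
import numpy as np

# Hypothetical zonal travel cost data: cost per trip vs visits per 1000 residents
cost   = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
visits = np.array([60.0, 48.0, 41.0, 28.0, 22.0, 11.0])

# Fit a linear demand curve  visits = a + b * cost  by ordinary least squares
A = np.column_stack([np.ones_like(cost), cost])
(a, b), *_ = np.linalg.lstsq(A, visits, rcond=None)

# For linear demand, per-zone consumer surplus is the triangle above the
# current cost: CS = visits^2 / (2 * |slope|)
cs = visits**2 / (2.0 * abs(b))
```

The negative slope b captures the revealed trade-off visitors make between travel cost and visits, which is what allows a price-like value to be inferred for a non-market site.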

Keywords: economic valuation, environmental goods, revealed preference methods, total economic value

Procedia PDF Downloads 125
587 Verification of Space System Dynamics Using the MATLAB Identification Toolbox in Space Qualification Test

Authors: Yuri V. Kim

Abstract:

This article presents a new approach to the functional testing of Space Systems (SS). It can be considered a generic test, applicable to a wide class of SS that, from the point of view of system dynamics and control, may be described by ordinary differential equations. The suggested methodology is based on a semi-natural experiment: a laboratory stand that does not require complicated, precise, and expensive technological control-verification equipment, yet allows for testing the system as a whole, fully assembled unit during Assembly, Integration and Testing (AIT) activities, involving both system hardware (HW) and software (SW). The test physically activates the system inputs (sensors) and outputs (actuators) and records their outputs in real time. The data are then transferred to a laboratory PC, where they are post-processed by the MATLAB/Simulink Identification Toolbox. This allows the system dynamics, in the form of the system's differential equations, to be estimated experimentally and compared with the expected mathematical model previously verified by simulation during the design process.
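The core identification step can be illustrated with a toy example (a sketch in Python rather than MATLAB, with an invented first-order plant): recorded input/output data determine the model parameters by least squares, exactly the kind of fit an identification toolbox performs for a chosen model structure:

```python
import numpy as np

# Hypothetical discrete-time plant  x[k+1] = a*x[k] + b*u[k],
# "recorded" by simulating it with known true parameters.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
u = rng.standard_normal(200)          # excitation input
x = np.zeros(201)
for k in range(200):
    x[k + 1] = a_true * x[k] + b_true * u[k]

# Least-squares estimate of (a, b) from the input/output records
Phi = np.column_stack([x[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)
a_hat, b_hat = theta
```

With noiseless data the estimate recovers the true parameters; on real stand data the same fit yields an empirical model of the assembled system to compare against the design-phase mathematical model.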

Keywords: system dynamics, space system ground tests and space qualification, system dynamics identification, satellite attitude control, assembling, integration and testing

Procedia PDF Downloads 157
586 Dynamics of India's Nuclear Identity

Authors: Smita Singh

Abstract:

Through the constructivist perspective, this paper explores the transformation of India's nuclear identity from an irresponsible nuclear weapon power to a 'de facto nuclear power' in the emerging international nuclear order. From a nuclear abstainer to a bystander and finally a 'de facto nuclear weapon state', India has put forth its case as a unique and exceptional nuclear power, as opposed to Iran, Iraq, and North Korea, states with similar nuclear ambitions that have been snubbed as 'rogue states' by the international community. This paper investigates the reasons behind the international community's gradual acceptance of India's nuclear weapons capabilities and nuclear identity after the Indo-U.S. Nuclear Deal. The central concept of analysis is the inter-subjective nature of identity in the nuclear arena. India's nuclear behaviour has been discursively constituted through evolving images of the 'self' and the 'other'. India's heightened global status is not solely the consequence of its 1998 nuclear tests but a calibrated projection as a responsible stakeholder in other spheres, such as economic potential, market prospects, and democratic credentials. By examining India's nuclear discourse, this paper contends that India has used its material and discursive power to present a striking image as a responsible nuclear weapon power (though not yet a legal nuclear weapon state under the NPT). By historicising India's nuclear trajectory through an inter-subjective analysis of identities, this paper moves a step further in providing a theoretical interpretation of state actions and nuclear identity construction.

Keywords: nuclear identity, India, constructivism, international stakeholder

Procedia PDF Downloads 433
585 Ultra-High Frequency Passive Radar Coverage for Cars Detection in Semi-Urban Scenarios

Authors: Pedro Gómez-del-Hoyo, Jose-Luis Bárcena-Humanes, Nerea del-Rey-Maestre, María-Pilar Jarabo-Amores, David Mata-Moya

Abstract:

A study of the coverage achievable by passive radar systems in terrestrial traffic monitoring applications is presented. The study includes the estimation of the bistatic radar cross section of different commercial vehicle models, which yields challengingly low values that make detection really difficult. A semi-urban scenario is selected to evaluate the impact of the excess propagation losses generated by an irregular relief. A bistatic passive radar exploiting UHF frequencies radiated by digital video broadcasting transmitters is assumed. A general method of coverage estimation using electromagnetic simulators in combination with the estimated average bistatic radar cross section of cars is applied. In order to reduce the computational cost, a hybrid solution is implemented, assuming free space for the target-receiver path but estimating the excess propagation losses for the transmitter-target path.
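Coverage estimates of this kind rest on the bistatic radar equation, in which received power falls with the product of the squared transmitter-target and target-receiver ranges. A minimal free-space sketch (illustrative parameter values, not the paper's; excess propagation losses would enter through the loss factor):

```python
import math

def bistatic_received_power(pt, gt, gr, wavelength, rcs_b, r_tx, r_rx, loss=1.0):
    """Free-space bistatic radar equation:
    Pr = Pt*Gt*Gr*lambda^2*sigma_b / ((4*pi)^3 * r_tx^2 * r_rx^2 * L),
    where r_tx is the transmitter-target range and r_rx the
    target-receiver range."""
    return (pt * gt * gr * wavelength**2 * rcs_b) / (
        (4.0 * math.pi) ** 3 * r_tx**2 * r_rx**2 * loss)

# Illustrative UHF DVB case: ~500 MHz carrier (0.6 m wavelength),
# a small bistatic RCS, and two transmitter-target ranges.
p_near = bistatic_received_power(1e3, 1.0, 10.0, 0.6, 1.0, 1.0e4, 5.0e3)
p_far  = bistatic_received_power(1e3, 1.0, 10.0, 0.6, 1.0, 2.0e4, 5.0e3)
```

The hybrid approach in the paper amounts to computing the transmitter-target factor with simulated excess losses while keeping the free-space form for the target-receiver path.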

Keywords: bistatic radar cross section, passive radar, propagation losses, radar coverage

Procedia PDF Downloads 327
584 Rationalized Haar Transforms Approach to Design of Observer for Control Systems with Unknown Inputs

Authors: Joon-Hoon Park

Abstract:

The fundamental concept of observability is important from both theoretical and practical points of view in modern control systems. In modern control theory, there are criteria for determining whether a design solution exists for given system parameters and design objectives. The idea of observability relates to the possibility of observing or estimating the state variables from the output variables, which are generally measurable. To design a closed-loop control system, the practical problems of implementing feedback of the state variables must be considered; not all the state variables are available, so it is necessary to design and implement an observer that estimates the state variables from the output parameters. Moreover, in practical cases, unknown inputs are sometimes present in control systems. This paper presents a design method and algorithm for an observer for control systems with unknown input parameters, based on the Rationalized Haar transform. The proposed method is more advantageous than other numerical methods.
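The observer idea itself can be shown with a standard Luenberger sketch (a hypothetical discrete-time plant with a hand-picked gain, not the paper's Haar-transform construction, which additionally handles unknown inputs): the estimate is a model prediction corrected by the measured output error.

```python
import numpy as np

# Hypothetical discrete-time plant  x[k+1] = A x[k] + B u[k],  y[k] = C x[k]
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.8]])   # observer gain, chosen so A - L C is stable

def observer_step(x_hat, u, y):
    """One Luenberger update: model prediction plus output-error correction."""
    return A @ x_hat + B @ u + L @ (y - C @ x_hat)

# The estimate converges to the true state even from a wrong initial guess.
x = np.array([[1.0], [1.0]])
x_hat = np.zeros((2, 1))
u = np.array([[0.0]])
for _ in range(60):
    y = C @ x                                  # only the output is measured
    x, x_hat = A @ x + B @ u, observer_step(x_hat, u, y)
err = float(np.linalg.norm(x - x_hat))
```

The estimation error obeys e[k+1] = (A - LC) e[k], so any gain placing the eigenvalues of A - LC inside the unit circle drives the error to zero; unknown-input observers such as the one proposed here must achieve this while also decoupling the estimate from the unmeasured disturbance.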

Keywords: orthogonal functions, rationalized Haar transforms, control system observer, algebraic method

Procedia PDF Downloads 363
583 The Characteristics of Quantity Operation for 2nd and 3rd Grade Mathematics Slow Learners

Authors: Pi-Hsia Hung

Abstract:

The development of mathematical competency has individual benefits as well as benefits to the wider society. Children who begin school behind their peers in their understanding of number, counting, and simple arithmetic are at high risk of staying behind throughout their schooling. The development of effective strategies for improving the educational trajectory of these individuals will be contingent on identifying the areas of early quantitative knowledge that influence later mathematics achievement. A computer-based quantity assessment was developed in this study to investigate the characteristics of 2nd and 3rd grade slow learners in quantity. The concept of quantification involves understanding measurements, counts, magnitudes, units, indicators, relative size, and numerical trends and patterns. Fifty-five tasks of quantitative reasoning, such as number sense, mental calculation, estimation, and assessment of the reasonableness of results, are included as quantity problem solving. Thus, quantity is defined in this study as applying knowledge of number and number operations in a wide variety of authentic settings. Around 1,000 students were tested and categorized into four performance levels. Students' quantity ability correlated more strongly with their school mathematics grades than with grades in other subjects. Around 20% of students performed below the basic level. The intervention design implications of the preliminary item map constructed are discussed.

Keywords: mathematics assessment, mathematical cognition, quantity, number sense, validity

Procedia PDF Downloads 240