Search results for: arrival time prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19837

19057 An Online Priority-Configuration Algorithm for Obstacle Avoidance of the Unmanned Air Vehicles Swarm

Authors: Lihua Zhu, Jianfeng Du, Yu Wang, Zhiqiang Wu

Abstract:

Collision avoidance problems of a swarm of unmanned air vehicles (UAVs) flying in an obstacle-laden environment are investigated in this paper. Given that the UAV swarm needs to adapt to the obstacle distribution in dynamic operation, a priority configuration is designed to guide the UAVs to pass through the obstacles in turn. Based on the collision cone approach and the prediction of the collision time, a collision evaluation model is established to judge the urgency of the imminent collision of each UAV, and the evaluation result is used to assign each UAV a priority that instructs it to pass through the obstacles in descending order of urgency. Finally, the simulation results validate the efficiency and scalability of the proposed approach.
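The abstract does not spell out the evaluation model, so the sketch below only illustrates the general idea under simplifying assumptions (straight-line motion, a single static spherical obstacle): each UAV's urgency is scored by an estimated time to collision, and priorities follow that score in ascending order. All names and values are hypothetical.

```python
import numpy as np

def time_to_collision(p, v, p_obs, r_obs):
    """Time until a UAV at position p with velocity v first comes within
    r_obs of a static obstacle at p_obs; np.inf if it never does."""
    rel = p_obs - p
    speed2 = v @ v
    if speed2 == 0.0:
        return np.inf
    t_star = max(rel @ v / speed2, 0.0)        # time of closest approach
    miss = np.linalg.norm(rel - v * t_star)    # miss distance at that time
    return t_star if miss <= r_obs else np.inf

# Rank the swarm: smaller time-to-collision = more urgent = higher priority.
uav_states = {                                  # hypothetical 2-D states
    "uav1": (np.array([0.0, 0.0]), np.array([1.0, 0.0])),
    "uav2": (np.array([5.0, 1.0]), np.array([1.0, 0.0])),
}
obstacle, radius = np.array([10.0, 0.0]), 1.5
ttc = {k: time_to_collision(p, v, obstacle, radius) for k, (p, v) in uav_states.items()}
priority = sorted(ttc, key=ttc.get)             # most urgent goes first
print(priority)
```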

Keywords: UAV swarm, collision avoidance, complex environment, online priority design

Procedia PDF Downloads 214
19056 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio

Authors: Urvee B. Trivedi, U. D. Dalal

Abstract:

As wireless communication services grow rapidly, the problem of spectrum utilization becomes increasingly serious. Cognitive radio, an emerging technology, has been proposed to solve today's spectrum scarcity problem. To support the spectrum reuse functionality, secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. A primary user follows one of two traffic patterns: periodic or stochastic ON-OFF. A cognitive radio can learn the patterns in different channels over time. Two classification methods are discussed in this paper: one based on edge detection and one based on the autocorrelation function. The edge detection method has high accuracy but cannot tolerate sensing errors. Autocorrelation-based classification is applicable in a real environment, as it can tolerate some amount of sensing errors.
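Neither method's equations are given in the abstract; the following sketch shows, under assumed parameters, the two building blocks the abstract names: an energy detector that yields a binary ON/OFF occupancy decision, and an autocorrelation test that labels a sensed occupancy sequence periodic or stochastic. The threshold values are hypothetical.

```python
import numpy as np

def energy_detect(samples, threshold):
    """Declare the channel occupied when average signal energy exceeds a
    threshold (in practice set from the target probability of false alarm)."""
    return np.mean(np.abs(samples) ** 2) > threshold

def classify_on_off(occupancy, min_peak=0.5):
    """Label a sensed ON/OFF sequence: strong autocorrelation peaks at
    nonzero lags suggest a periodic PU pattern, otherwise stochastic."""
    x = occupancy - occupancy.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    ac = ac / ac[0]                                     # normalize by lag 0
    return "periodic" if ac[1:].max() > min_peak else "stochastic"

occ = np.tile([1.0, 1.0, 0.0, 0.0], 25)   # clearly periodic ON-OFF history
print(classify_on_off(occ))                # -> "periodic"
```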

Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)

Procedia PDF Downloads 345
19055 Two Day Ahead Short Term Load Forecasting Neural Network Based

Authors: Firas M. Tuaimah

Abstract:

This paper presents an Artificial Neural Network (ANN) based approach for short-term load forecasting, specifically for two days ahead. Two seasons of the Iraqi power system are discussed, namely summer and winter; hourly load demand is the most important input variable for ANN-based load forecasting. The recorded daily load profile with a lead time of 1-48 hours for July and December of the year 2012 was obtained from the operation and control center of the Iraqi Ministry of Electricity. The results of the comparison show that the neural network gives a good two-day-ahead load forecast.
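The paper's network architecture is not given in the abstract; below is a minimal sketch of a two-day-ahead (48-hour horizon) forecaster that maps the previous 48 hourly loads to the load 48 hours after the last observation, trained on a synthetic stand-in for the hourly demand series. All sizes and parameters are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical hourly load series (MW); in the paper this would come from
# the operation and control center of the Iraqi Ministry of Electricity.
rng = np.random.default_rng(0)
hours = np.arange(24 * 60)
load = 4000 + 800 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 50, hours.size)

h = 48                                        # forecast horizon: two days ahead
X = np.array([load[i:i + 48] for i in range(load.size - 48 - h + 1)])
y = load[48 + h - 1:]                         # load 48 h after each input window
model = MLPRegressor(hidden_layer_sizes=(24,), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(X[-1:]))                  # two-day-ahead estimate
```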

Keywords: short-term load forecasting, artificial neural networks, back propagation learning, hourly load demand

Procedia PDF Downloads 464
19054 Applying Semi-Automatic Digital Aerial Survey Technology and Canopy Characters Classification for Surface Vegetation Interpretation of Archaeological Sites

Authors: Yung-Chung Chuang

Abstract:

The cultural layers of archaeological sites are mainly affected by surface land use, land cover, and the root systems of surface vegetation. For this reason, continuous monitoring of land use and land cover change is important for archaeological site protection and management. However, in actual operation, on-site investigation and orthophotograph interpretation require a lot of time and manpower. A good alternative for surface vegetation survey, performed in an automated or semi-automated manner, is therefore necessary. In this study, we applied semi-automatic digital aerial survey technology and canopy character classification with very high-resolution aerial photographs for surface vegetation interpretation of archaeological sites. The main idea is that different landscape or forest types can easily be distinguished by canopy characters (e.g., specific texture distributions, shadow effects and gap characters) extracted by semi-automatic image classification. A novel methodology to classify the shape of canopy characters using landscape indices and multivariate statistics was also proposed. Non-hierarchical cluster analysis was used to assess the optimal number of canopy character clusters, and canonical discriminant analysis was used to generate the discriminant functions for canopy character classification (seven categories). The forest type and vegetation land cover can then be predicted from the corresponding canopy character category. The results showed that the semi-automatic classification could effectively extract the canopy characters of forest and vegetation land cover. For forest type and vegetation type prediction, the average prediction accuracy reached 80.3%-91.7% with different test frame sizes. This demonstrates that the technology is useful for archaeological site surveys and can improve classification efficiency and data update rates.
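As a concrete (and simplified) illustration of this two-step pipeline, the sketch below uses k-means as the non-hierarchical clustering, a silhouette score to pick the cluster count, and scikit-learn's linear discriminant analysis as a stand-in for canonical discriminant analysis. The feature matrix is synthetic and hypothetical; the paper's landscape indices and its seven-category result are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical canopy-character features per image segment
# (e.g., texture contrast, shadow fraction, gap density, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))

# Non-hierarchical clustering: pick the cluster count with the best silhouette.
scores = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X))
          for k in range(2, 10)}
k_best = max(scores, key=scores.get)
labels = KMeans(n_clusters=k_best, n_init=10, random_state=0).fit_predict(X)

# Discriminant functions to assign new segments to a canopy-character category.
lda = LinearDiscriminantAnalysis().fit(X, labels)
print(k_best, lda.predict(X[:5]))
```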

Keywords: digital aerial survey, canopy characters classification, archaeological sites, multivariate statistics

Procedia PDF Downloads 142
19053 Impact of Mucormycosis Infection In Limb Salvage for Trauma Patients

Authors: Katie-Beth Webster

Abstract:

Mucormycosis is a rare opportunistic fungal infection that, if left untreated, can cause large-scale tissue necrosis and death. There are a number of cases in the literature, most commonly in the head and neck region arising from the sinuses, and usually in immunocompromised patient subgroups. This study reviewed a number of cases of mucormycosis in previously fit and healthy young trauma patients to assess predisposing factors for infection and the adequacy of current treatment paradigms. These trauma patients likely contracted the fungal infection from the soil at the site of the incident. Despite early washout and debridement of the wounds at the scene of the injury and on arrival in hospital, both of these patients contracted mucormycosis. It was suspected that inadequate early debridement of soil-contaminated limbs was one of the major factors that can lead to catastrophic tissue necrosis. In both cases, this resulted in the patients having a higher level of amputation than their injuries would initially have required, owing to cutaneous and soft tissue necrosis secondary to fungal infiltration, leading to osteomyelitis and systemic sepsis. In the literature, diagnosis of this condition is often protracted owing to inadequate early treatment and long processing times for fungal cultures. If fungal cultures were sent at the time of first assessment and adequate debridement were performed aggressively and early, these critically unwell trauma patients could receive appropriate antifungal and surgical treatment earlier in their episode of care. This is likely to improve long-term outcomes for these patients.

Keywords: mucormycosis, plastic surgery, osteomyelitis, trauma

Procedia PDF Downloads 208
19052 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits from employing other machine learning techniques as the statistical method in ground motion prediction, such as Artificial Neural Network, Random Forest, and Support Vector Machine. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the exceedance probability of damage for pre-defined limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms like artificial neural network, random forest, and support vector machine are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
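As a minimal sketch of the first step, the snippet below fits a Random Forest ground-motion model on synthetic records (magnitude, distance, Vs30 mapped to ln PGA); the functional form used to generate the data and every coefficient are hypothetical, not taken from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic ground-motion records: magnitude, distance (km), Vs30 (m/s)
rng = np.random.default_rng(0)
M = rng.uniform(4, 8, 500)
R = rng.uniform(1, 200, 500)
vs30 = rng.uniform(150, 800, 500)
# Hypothetical generating form with magnitude scaling and distance decay
ln_pga = 1.2 * M - 1.6 * np.log(R + 10) - 0.3 * np.log(vs30 / 500) + rng.normal(0, 0.5, 500)

X = np.column_stack([M, np.log(R + 10), np.log(vs30)])
gmm = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, ln_pga)
# Predict ln(PGA) for a M6.5 event at 40 km on a 400 m/s site
print(gmm.predict([[6.5, np.log(40 + 10), np.log(400)]]))
```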

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 106
19051 The Ability of Forecasting the Term Structure of Interest Rates Based on Nelson-Siegel and Svensson Model

Authors: Tea Poklepović, Zdravka Aljinović, Branka Marasović

Abstract:

Due to the importance of the yield curve and its estimation, it is essential to have valid methods for yield curve forecasting in cases when there are scarce issues of securities and/or weak trade on a secondary market. Therefore, in this paper, after the estimation of weekly yield curves on the Croatian financial market from October 2011 to August 2012 using the Nelson-Siegel and Svensson models, yield curves are forecasted using a vector autoregressive model and neural networks. In general, it can be concluded that both forecasting methods have good prediction abilities, where forecasting of yield curves based on the Nelson-Siegel estimation model gives better results, in the sense of lower Mean Squared Error, than forecasting based on the Svensson model. Also, in this case neural networks provide slightly better results. Finally, it can be concluded that the most appropriate way of yield curve prediction is neural networks applied to the Nelson-Siegel estimation of yield curves.
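For reference, the Nelson-Siegel curve is y(tau) = b0 + b1*(1-exp(-tau/lam))/(tau/lam) + b2*[(1-exp(-tau/lam))/(tau/lam) - exp(-tau/lam)]. A minimal fitting sketch with hypothetical maturities and yields follows; the fitted beta series per week would then be forecast with a VAR model or a neural network, as the abstract describes.

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, b0, b1, b2, lam):
    """Nelson-Siegel yield at maturity tau: level b0, slope b1, curvature b2."""
    x = tau / lam
    f = (1 - np.exp(-x)) / x
    return b0 + b1 * f + b2 * (f - np.exp(-x))

maturities = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0])   # years, hypothetical
yields = np.array([3.2, 3.4, 3.7, 4.1, 4.6, 4.9])         # %, hypothetical
params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[4.0, -1.0, 1.0, 1.5],
                      bounds=([-10, -10, -10, 0.05], [10, 10, 10, 10]))
print(params)   # b0, b1, b2, lambda for this week's curve
```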

Keywords: Nelson-Siegel Model, neural networks, Svensson Model, vector autoregressive model, yield curve

Procedia PDF Downloads 334
19050 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing

Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn

Abstract:

Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective use of each EEG measure commonly used in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
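Frontal alpha asymmetry is conventionally computed as the difference of log alpha-band power between homologous right and left frontal electrodes (e.g., F4 vs. F3); because alpha power is inversely related to cortical activity, positive values index greater relative left-frontal activity. A minimal sketch with an assumed sampling rate and band limits:

```python
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(left, right, fs=250, band=(8.0, 13.0)):
    """FAA = ln(alpha power, right) - ln(alpha power, left); positive values
    (greater relative left-frontal activity) index approach/positive affect."""
    def alpha_power(x):
        f, pxx = welch(x, fs=fs, nperseg=fs * 2)      # Welch PSD, 2 s windows
        m = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[m], f[m])                  # integrate alpha band
    return np.log(alpha_power(right)) - np.log(alpha_power(left))

rng = np.random.default_rng(0)
f3, f4 = rng.normal(size=5000), rng.normal(size=5000)  # 20 s of left/right EEG
print(frontal_alpha_asymmetry(f3, f4))
```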

Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency

Procedia PDF Downloads 111
19049 Perceptions of Chinese Top-up Students Transitioning through a Regional UK University: A Longitudinal Study Using the U-Curve Model

Authors: Xianghan O'Dea

Abstract:

This article argues that there is an urgent need to better understand the personal experiences of Chinese top-up students studying in the UK, since the number of Chinese students taking year-long top-up programmes in the UK has risen rapidly in recent years. This lack of knowledge could potentially have implications for the reputation of some UK institutions and also for the attractiveness of the UK higher education sector to future international students. This longitudinal study explored the academic and social experiences of twelve Chinese top-up students in a UK institution in depth and revealed that the students felt their experiences were influenced significantly by their surrounding contexts at the macro and meso levels, which, however, have been largely overlooked in existing research. This article suggests the importance of improving communications between the partner institutions in China and the UK, and also of providing sufficient pre-departure and post-arrival support to Chinese top-up students at the institutional level.

Keywords: articulation agreements, Chinese top-up students, top-up programmes, U-curve

Procedia PDF Downloads 170
19048 The Effectiveness of Conflict Management of Factories' Employee in Thailand

Authors: Pacharaporn Lekyan

Abstract:

The purpose of this study is to explore conflict management in the workplace, analyze how well leadership predicts it, and examine the methods used to handle conflict in an organization. This quantitative research developed a questionnaire to collect information from a sample of 200 leaders and managers working in frozen food factories in Thailand. The analysis examines the relationship between conflict management factors, leadership, and conflict in the organization. The leader's emotions are not the only factor affecting conflict management; the emotions of surrounding people matter as well, and this influence can occur at any time. The results show that four out of five interpersonal conflict management factors affect emotional intelligence, and that leadership behaviors have an influence on conflict management.

Keywords: conflict management, emotional intelligence, leadership, factories' employee

Procedia PDF Downloads 365
19047 EMI Radiation Prediction and Final Measurement Process Optimization by Neural Network

Authors: Hussam Elias, Ninovic Perez, Holger Hirsch

Abstract:

EMC regulation worldwide is expanding steadily as the use of electronics in our daily lives increases more than ever. In this paper, we introduce a novel method to perform the final phase of electromagnetic compatibility (EMC) measurement and to reduce the required test time according to the norm EN 55032, using a developed tool and a conventional neural network (CNN). The neural network was trained using real EMC measurements, which were performed in the Semi-Anechoic Chamber (SAC) by CETECOM GmbH in Essen, Germany. To implement our proposed method, we wrote software to perform the radiated electromagnetic interference (EMI) measurements and used the CNN to predict the turntable position that yields the maximum radiation value.

Keywords: conventional neural network, electromagnetic compatibility measurement, mean absolute error, position error

Procedia PDF Downloads 200
19046 Air Dispersion Modeling for Prediction of Accidental Emission in the Atmosphere along Northern Coast of Egypt

Authors: Moustafa Osman

Abstract:

Modeling of air pollutants from accidental releases is performed to quantify the impact of industrial facilities on the ambient air. Mathematical methods are required for predicting the accidental scenario, in terms of the probability of fail-safe operation, and for consequence analysis to quantify the environmental damage to human health. The initial statement of the mitigation plan supports implementation during production and maintenance periods. In a number of mathematical methods, the flow rate at which gaseous and liquid pollutants might be accidentally released is determined for various source types: point, line, and area sources. These emissions are integrated with meteorological conditions, through simplified stability parameters, to compare dispersion coefficients of non-continuous air pollution plumes. The differences are reflected in concentration levels and greenhouse effects as the parcel load is transported in both urban and rural areas. This research reveals that the elevation effect near buildings and other structures is up to five times higher than over open terrain. These results agree with Sutton's suggested dispersion coefficients for different stability classes.
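The abstract does not give its dispersion equations; the standard Gaussian plume model with ground reflection is sketched below as an illustration. The sigma-y/sigma-z power-law forms are Briggs-style open-country approximations for one stability class, and all parameter values are hypothetical.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration (g/m^3) with ground reflection.
    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack
    height (m); sigma_y/sigma_z are evaluated at the downwind distance."""
    return (Q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y ** 2 / (2 * sigma_y ** 2))
            * (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
               + np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2))))

# Briggs-style open-country sigmas for one stability class (placeholder values)
x = 500.0                                      # m downwind
sy = 0.08 * x / np.sqrt(1 + 0.0001 * x)
sz = 0.06 * x / np.sqrt(1 + 0.0015 * x)
print(gaussian_plume(Q=1.0, u=3.0, y=0.0, z=1.5, H=20.0, sigma_y=sy, sigma_z=sz))
```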

Keywords: air pollutants, dispersion modeling, GIS, health effect, urban planning

Procedia PDF Downloads 374
19045 Multi-Faceted Growth in Creative Industries

Authors: Sanja Pfeifer, Nataša Šarlija, Marina Jeger, Ana Bilandžić

Abstract:

The purpose of this study is to explore the different facets of growth among micro, small and medium-sized firms in Croatia and to analyze the differences between models designed for all micro, small and medium-sized firms and those in creative industries. Three growth prediction models were designed and tested using the growth of sales, employment and assets of the company as dependent variables. The key drivers of sales growth are prudent use of cash, industry affiliation and a higher share of intangible assets. Growth of assets depends on retained profits, internal and external sources of financing, as well as industry affiliation. Growth in employment is closely related to sources of financing, in particular debt, and it occurs less frequently than growth in sales and assets. The findings confirm the assumption that growth strategies of small and medium-sized enterprises (SMEs) in creative industries have specific differences in comparison to SMEs in general. Interestingly, only 2.2% of growing enterprises achieve growth in employment, assets and sales simultaneously.

Keywords: creative industries, growth prediction model, growth determinants, growth measures

Procedia PDF Downloads 332
19044 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model time-to-event data in which censored cases exist. The logistic regression model, in contrast, is mostly applicable where the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). Arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is done on secondary data whose source is SPSS exercise data on breast cancer, with a sample size of 1121 women, and the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually (i.e., on lymph node status), and SPSS software was used to analyze the data. This study found an application difference between the two models: the Cox regression model is used to analyze data that include follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio for Cox regression and the odds ratio for logistic regression. A similarity between the two models is that they are both applicable in the prediction of the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. Both models are suitable methods for analyzing data and can be applied in many other studies, but the Cox regression model is the more recommended of the two.
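A minimal sketch of the distinction, using the lifelines and scikit-learn libraries on a tiny hypothetical dataset (not the SPSS breast cancer data): Cox regression consumes follow-up time plus an event flag and reports hazard ratios, while logistic regression consumes only the binary outcome and reports odds ratios.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

# Tiny hypothetical dataset: follow-up time (months), death flag, covariate
df = pd.DataFrame({
    "time":     [5, 12, 8, 20, 3, 15],
    "event":    [1, 0, 1, 0, 1, 1],               # 1 = death observed
    "ln_nodes": [2.1, 0.7, 1.9, 0.0, 2.5, 1.4],   # lymph-node burden
})

# Cox model: uses both follow-up time and the event indicator
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)                 # association as a hazard ratio

# Logistic model: uses only the binary outcome, follow-up time is ignored
logit = LogisticRegression().fit(df[["ln_nodes"]], df["event"])
print(np.exp(logit.coef_))                # association as an odds ratio
```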

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 455
19043 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores

Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi

Abstract:

In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, neural network predictions and the synergy scores of the two drugs with others within their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
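The MCL step lends itself to a compact illustration; below is a minimal numpy implementation of Markov clustering applied to a toy drug graph. MCL requires nonnegative edge weights, so a shift or clip of the (possibly negative) synergy scores is assumed; the toy adjacency matrix and parameter values are hypothetical.

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=50, tol=1e-6):
    """Minimal Markov clustering on a nonnegative weighted adjacency matrix.
    Synergy scores can be negative, so clip to nonnegative edge weights
    (e.g., w = max(score, 0)) before building adj."""
    m = adj + np.eye(len(adj))                  # self-loops aid convergence
    m = m / m.sum(axis=0, keepdims=True)        # make column-stochastic
    for _ in range(iters):
        m = m @ m                               # expansion
        m = m ** inflation                      # inflation
        m = m / m.sum(axis=0, keepdims=True)
    rows = [frozenset(np.flatnonzero(m[i] > tol))
            for i in np.flatnonzero(m.diagonal() > tol)]
    return [sorted(c) for c in set(rows) if c]

# Toy drug graph: two blocks of mutually synergistic drugs
adj = np.zeros((6, 6))
adj[:3, :3] = adj[3:, 3:] = 0.8
print(mcl(adj))   # -> [[0, 1, 2], [3, 4, 5]] (order may vary)
```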

Keywords: drug synergy, clustering, prediction, machine learning, deep learning

Procedia PDF Downloads 79
19042 Economic Implications of the Arrival of Syrian Refugees in Jordan

Authors: Ammar Z. Alwrekiat, Sara Ojeda Gonzalez, Maria Jose Miranda Martel, Antonio Mihi-Ramirez

Abstract:

This paper analyses the economic situation in Jordan, which has been a political asylum destination for Syrians since 2011. We analyze the effects on the Jordanian situation through the following indicators: international aid, gross domestic product, remittances, and unemployment. A correlation analysis has been used to identify the main connections of these parameters with the reception of refugees. Although the economic effects of Syrian refugees in Jordan are uncertain, they pose an important challenge for the development of migration policies. Jordan has a special economic situation and limited capacities, but the country has provided humanitarian assistance to Syrian refugees. In this case, the support of the international community is of particular importance, playing a key role in the negotiation of international agreements on refugees.

Keywords: correlation analysis, economic implications, migration, refugees

Procedia PDF Downloads 252
19041 Using Nonhomogeneous Poisson Process with Compound Distribution to Price Catastrophe Options

Authors: Rong-Tsorng Wang

Abstract:

In this paper, we derive a pricing formula for catastrophe equity put options (or CatEPut) with non-homogeneous loss and approximated compound distributions. We assume that the loss claims arrival process is a nonhomogeneous Poisson process (NHPP) representing the clustered occurrences of loss claims, the sizes of loss claims are a sequence of independent and identically distributed random variables, and the accumulated loss distribution forms a compound distribution, approximated by a heavy-tailed distribution. A numerical example is given to calibrate parameters, and we discuss how the value of the CatEPut is affected by changes of the parameters in the proposed pricing model.
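A Monte Carlo sketch of the model's loss engine under assumed parameters: NHPP claim arrivals simulated by thinning (Lewis-Shedler), Pareto (heavy-tailed) claim sizes, and the resulting compound annual loss distribution that would drive the CatEPut payoff. The intensity function and all numbers are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def nhpp_times(rate_fn, rate_max, T):
    """NHPP arrival times on [0, T] via thinning of a rate_max Poisson process."""
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > T:
            return np.array(times)
        if rng.uniform() < rate_fn(t) / rate_max:   # accept with prob rate/rate_max
            times.append(t)

# Clustered (seasonal) claim intensity, bounded above by 18 claims/year
rate = lambda t: 10.0 + 8.0 * np.sin(2.0 * np.pi * t) ** 2
# Heavy-tailed (Pareto) claim sizes; accumulated loss per simulated year
annual_losses = np.array([
    (2.0 * (rng.pareto(2.5, size=nhpp_times(rate, 18.0, 1.0).size) + 1.0)).sum()
    for _ in range(5000)
])
print(annual_losses.mean(), np.quantile(annual_losses, 0.99))
```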

Keywords: catastrophe equity put options, compound distributions, nonhomogeneous Poisson process, pricing model

Procedia PDF Downloads 167
19040 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry

Authors: Deepika Christopher, Garima Anand

Abstract:

To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs the best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; the top three best-performing models in this article are Logistic Regression, Deep Neural Network, and AdaBoost. The k-means algorithm is applied to establish and analyze four different customer clusters. This study has effectively identified customers at risk of churn, and its findings may be utilized to develop and execute strategies that lower customer attrition.
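The paper's benchmarking protocol is not given beyond the metric names; below is a minimal version of such a comparison on a synthetic, class-imbalanced stand-in for the telecom data, with three of the eleven models. Dataset shape and class balance are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import accuracy_score, f1_score

# Synthetic imbalanced churn data (~27% churners, 19 features)
X, y = make_classification(n_samples=2000, n_features=19,
                           weights=[0.73], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {"LogisticRegression": LogisticRegression(max_iter=1000),
          "RandomForest": RandomForestClassifier(random_state=0),
          "AdaBoost": AdaBoostClassifier(random_state=0)}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f}")
```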

Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications

Procedia PDF Downloads 57
19039 Using Log Files to Improve Work Efficiency

Authors: Salman Hussam

Abstract:

As a monitoring system to manage employees' time and the employer's business, this system (logger) monitors employees at work and notifies them if they spend too much time on social media (even if they are using a proxy, it will catch them). In this way, people will spend less time at work and more time with family.

Keywords: clients, employees, employers, family, monitoring, systems, social media, time

Procedia PDF Downloads 494
19038 Comparison of Different Intraocular Lens Power Calculation Formulas in People With Very High Myopia

Authors: Xia Chen, Yulan Wang

Abstract:

Purpose: To compare the accuracy of the Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, Emmetropia Verifying Optical (EVO) and Kane formulas for intraocular lens power calculation in patients with axial length (AL) ≥ 28 mm. Methods: In this retrospective single-center study, 50 eyes of 41 patients with AL ≥ 28 mm that underwent uneventful cataract surgery were enrolled. The actual postoperative refractive results were compared to the predicted refraction calculated with the different formulas (Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, EVO and Kane). The mean absolute prediction errors (MAE) 1 month postoperatively were compared. Results: The MAE of the different formulas were as follows: Haigis (0.509), SRK/T (0.705), T2 (0.999), Holladay 1 (0.714), Hoffer Q (0.583), Barrett Universal II (0.552), EVO (0.463) and Kane (0.441). No significant difference was found among the different formulas (P = .122). The Kane and EVO formulas achieved the lowest levels of mean prediction error (PE) and median absolute error (MedAE) (p < 0.05). Conclusion: The Kane and EVO formulas had a better success rate than the others in predicting IOL power in highly myopic eyes with AL longer than 28 mm in this study.

Keywords: cataract, power calculation formulas, intraocular lens, long axial length

Procedia PDF Downloads 83
19037 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS) or the Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectra, where the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single degree of freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimations, results are compared with those obtained by the CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
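In the simplest single-factor case, the scaling stage described here reduces to a closed-form least-squares shift in log space; a sketch of that idea on synthetic spectra follows. Because one factor scales the whole set, the record-to-record dispersion is untouched, which is the property the abstract emphasizes. All spectra below are hypothetical.

```python
import numpy as np

def scale_factor(record_spectra, target_median):
    """One scale factor for the whole set: least-squares match (in log space)
    between the scaled median spectrum and the target median. A single factor
    shifts the log-median but leaves record-to-record dispersion intact."""
    log_med = np.log(record_spectra).mean(axis=0)      # log-median of the set
    ln_s = np.mean(np.log(target_median) - log_med)    # least-squares optimum
    return np.exp(ln_s)

# Hypothetical 5%-damped PSA ordinates (g) over the period interval of interest
rng = np.random.default_rng(0)
periods = np.linspace(0.1, 2.0, 20)
records = np.exp(rng.normal(np.log(0.2), 0.6, size=(30, 20)))   # 30 records
target = 0.35 * np.exp(-periods / 2.0)                          # target median
print(f"scale factor = {scale_factor(records, target):.3f}")
```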

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 583
19036 Time Management in the Public Sector in Nigeria

Authors: Sunny Ewankhiwimen Aigbomian

Abstract:

Time is a scarce resource, and in everything we do, time is required to accomplish any given task. The need for this presentation is predicated on the way the majority of Nigerians, especially operators in the public sector, see "time management". Time as a resource cannot be regained if lost or managed badly. As a significant aspect of human life, it should be handled with diligence and utmost seriousness if the public sector is to function as a coordinated entity. In our homes, private lives and offices, we schedule different things to ensure that nothing goes unexpectedly. When it comes to service delivery on the part of government, it ought to be taken even more seriously, because government is all about effective and efficient service delivery, and time is a significant variable necessary for successful accomplishment. The Nigerian government needs to re-examine time management in its public sector with a view to repositioning the sector to compete well with other public sectors in the world. The peculiarity of time management in the public sector in the Nigerian context is examined, and some useful recommendations of immense assistance are proffered.

Keywords: Nigeria, public sector, time management, task

Procedia PDF Downloads 99
19035 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as a lack of adequate upper air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and for different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights via a least squares minimization using a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best model out of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models from the participating member models are selected at each grid point and for each forecast step in the training period. A multi-model superensemble trained using similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods available in the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been experimented with in combination with the above-mentioned approaches. The comparison of these schemes with respect to the observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
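The training-phase regression is the quantitative core here; a simplified sketch follows, computing least-squares weights for member-model anomalies at a single (hypothetical) grid point. The full Krishnamurti-style scheme adds the observed climatology back and repeats this per grid point and lead time; the synthetic data and bias values are assumptions.

```python
import numpy as np

# Superensemble training: regress observed anomalies on member-model
# forecast anomalies by least squares, then reuse the weights to forecast.
rng = np.random.default_rng(2)
truth = rng.normal(size=200)                        # observed anomalies
bias = np.array([0.5, -0.3, 0.1])                   # per-model systematic bias
noise = rng.normal(0, [0.4, 0.8, 1.5], size=(200, 3))
members = truth[:, None] + bias + noise             # 3 member-model forecasts

train = slice(0, 150)                               # training phase
A = members[train] - members[train].mean(axis=0)    # member anomalies
w, *_ = np.linalg.lstsq(A, truth[train], rcond=None)

# Forecast phase: weighted member anomalies (climatology term omitted here)
pred = (members[150:] - members[train].mean(axis=0)) @ w
print("weights:", w)
```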

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 139
19034 Diagnostic and Prognostic Use of Kinetics of Microrna and Cardiac Biomarker in Acute Myocardial Infarction

Authors: V. Kuzhandai Velu, R. Ramesh

Abstract:

Background and objectives: Acute myocardial infarction (AMI) is the most common cause of mortality and morbidity. Over the last decade, microRNAs (miRs) have emerged as potential markers for detecting AMI. The current study evaluates the kinetics and importance of miRs in the differential diagnosis of ST-segment elevated MI (STEMI) and non-STEMI (NSTEMI), their correlation with conventional biomarkers, and their ability to predict the immediate outcome of AMI in terms of arrhythmias and left ventricular (LV) dysfunction. Materials and Method: A total of 100 AMI patients were recruited for the study. Routine cardiac biomarker and miRNA levels were measured at diagnosis and serially at admission, 6, 12, 24, and 72 hrs. The baseline biochemical parameters were analyzed. The expression of miRs was compared between STEMI and NSTEMI at different time intervals. The diagnostic utility of miR-1, miR-133, miR-208, and miR-499 levels was analyzed using RT-PCR and various diagnostic statistical tools such as ROC curves, odds ratios, and likelihood ratios. Results: miR-1, miR-133, and miR-499 showed peak concentrations at 6 hours, whereas miR-208 showed highly significant differences at all time intervals. miR-133 demonstrated the maximum area under the curve at different time intervals in the differential diagnosis of STEMI and NSTEMI, followed by miR-499 and miR-208. Evaluation of miRs for predicting arrhythmia and LV dysfunction using the admission sample demonstrated that miR-1 (OR = 8.64; LR = 1.76) and miR-208 (OR = 26.25; LR = 5.96) showed the maximum odds ratio and likelihood ratio, respectively. Conclusion: Circulating miRNAs showed highly significant differences between STEMI and NSTEMI in AMI patients. The peak was much earlier than for the conventional biomarkers. miR-133, miR-208, and miR-499 can be used in the differential diagnosis of STEMI and NSTEMI, whereas miR-1 and miR-208 could be used in the prediction of arrhythmia and LV dysfunction, respectively.

Keywords: myocardial infarction, cardiac biomarkers, microRNA, arrhythmia, left ventricular dysfunction

Procedia PDF Downloads 128
19033 Prediction of Critical Flow Rate in Tubular Heat Exchangers for the Onset of Damaging Flow-Induced Vibrations

Authors: Y. Khulief, S. Bashmal, S. Said, D. Al-Otaibi, K. Mansour

Abstract:

The prediction of the flow rates at which vibration-induced instability takes place in tubular heat exchangers due to cross-flow is of major importance to the performance and service life of such equipment. In this paper, the semi-analytical model for square tube arrays was extended and utilized to study triangular tube patterns. A laboratory test rig with an instrumented test section is used to measure the fluidelastic coefficients used for tuning the mathematical model. The test section can be made of any bundle pattern. In this study, two test sections were constructed, one for the normal triangular and one for the rotated triangular tube array. The developed scheme is utilized in predicting the onset of flow-induced instability in the two triangular tube arrays. The results are compared to those obtained for two other bundle configurations. The results of the four different tube patterns are viewed in the light of TEMA predictions. The comparison demonstrated that TEMA guidelines are more conservative in all configurations considered.
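The semi-analytical model itself is not reproduced in the abstract; as background, the onset of fluidelastic instability is often estimated with a Connors-type criterion, Vc = K * fn * d * sqrt(m * delta / (rho * d^2)), which TEMA-style screening also builds on. A sketch with hypothetical tube-bundle values (the constant K in particular depends on the array pattern):

```python
import numpy as np

def connors_critical_velocity(K, fn, d, m, delta, rho):
    """Connors-type estimate of the critical cross-flow velocity (m/s).
    K: array-dependent constant, fn: tube natural frequency (Hz),
    d: tube OD (m), m: tube mass per unit length (kg/m),
    delta: log decrement of damping, rho: fluid density (kg/m^3)."""
    return K * fn * d * np.sqrt(m * delta / (rho * d ** 2))

# Hypothetical values: 50 Hz tube, 19 mm OD, water cross-flow
print(connors_critical_velocity(K=3.0, fn=50.0, d=0.019,
                                m=0.6, delta=0.03, rho=998.0))
```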

Keywords: fluid-structure interaction, cross-flow, heat exchangers

Procedia PDF Downloads 277
19032 Crowdsensing Project in the Brazilian Municipality of Florianópolis for the Number of Visitors Measurement

Authors: Carlos Roberto De Rolt, Julio da Silva Dias, Rafael Tezza, Luca Foschini, Matteo Mura

Abstract:

The seasonal population fluctuation presents a challenge to touristic cities, since the number of inhabitants can double according to the season. The aim of this work is to develop a model that correlates the waste collected with the population of the city and also allows cooperation between the inhabitants and the local government. The model allows public managers to evaluate the impact of the seasonal population fluctuation on waste generation and also to improve the planning of resource utilization throughout the year. The study uses data from the company that collects the garbage in Florianópolis, a Brazilian city with the profile of a city that attracts tourists due to numerous beaches and warm weather. The fluctuations are caused by the number of people that come to the city throughout the year for holidays, summer vacations or business events. Crowdsensing will be accomplished through smartphones with access to an app for data collection, with voluntary participation of the population. Crowdsensing participants can access the information collected through this portal. Crowdsensing represents an innovative and participatory approach which involves the population in gathering information to improve the quality of life. The management of crowdsensing solutions plays an essential role, given the complexity of fostering collaboration, establishing available sensors, and collecting and processing the collected data. Practical implications of the tool described in this paper refer, for example, to the management of seasonal tourism in a large municipality whose public services are impacted by the floating population. Crowdsensing and big data support managers in predicting the arrival, permanence, and movement of people in a given urban area. Also, by linking crowdsourced data to databases from other public service providers - e.g., water, garbage collection, electricity, public transport, telecommunications - it is possible to estimate the floating population of an urban area affected by seasonal tourism. This approach supports the municipality in increasing the effectiveness of resource allocation while, at the same time, increasing the quality of the service as perceived by citizens and tourists.

Keywords: big data, dashboards, floating population, smart city, urban management solutions

Procedia PDF Downloads 287
19031 Slugging Frequency Correlation for High Viscosity Oil-Gas Flow in Horizontal Pipeline

Authors: B. Y. Danjuma, A. Archibong-Eso, Aliyu M. Aliyu, H. Yeung

Abstract:

In this experimental investigation, new data for slugging frequency in high viscosity oil-gas flow are reported. Scale experiments were carried out using a mixture of air and mineral oil as the liquid phase in a 17 m long horizontal pipe with a 0.0762 m internal diameter. The data set was acquired using two high-speed gamma densitometers at a data acquisition frequency of 250 Hz over a time interval of 30 seconds. For the range of flow conditions investigated, an increase in liquid oil viscosity was observed to strongly influence the slug frequency. A comparison of the present data with prediction models available in the literature revealed large discrepancies. A new correlation incorporating the effect of viscosity on slug frequency has been proposed for horizontal flow, which represents the main contribution of this work.

Keywords: gamma densitometer, flow pattern, pressure gradient, slug frequency

Procedia PDF Downloads 412
19030 Iraqi Short Term Electrical Load Forecasting Based on Interval Type-2 Fuzzy Logic

Authors: Firas M. Tuaimah, Huda M. Abdul Abbas

Abstract:

Accurate Short Term Load Forecasting (STLF) is essential for a variety of decision-making processes. However, forecasting accuracy can drop due to the presence of uncertainty in the operation of energy systems or unexpected behavior of exogenous variables. The Interval Type-2 Fuzzy Logic System (IT2 FLS), with additional degrees of freedom, provides an excellent tool for handling uncertainties and improves the prediction accuracy. The training data used in this study cover the period from January 1, 2012 to February 1, 2012 for the winter season and the period from July 1, 2012 to August 1, 2012 for the summer season. The actual load forecasting period runs from January 22 to 28, 2012 for the winter model and from July 22 to 28, 2012 for the summer model. The real data are for the Iraqi power system, which belongs to the Ministry of Electricity.

Keywords: short term load forecasting, prediction interval, type 2 fuzzy logic systems, electric, computer systems engineering

Procedia PDF Downloads 397
19029 Prediction of Gully Erosion with Stochastic Modeling by using Geographic Information System and Remote Sensing Data in North of Iran

Authors: Reza Zakerinejad

Abstract:

Gully erosion is a serious problem threatening the sustainability of agricultural areas, rangeland, and water resources in a large part of Iran. This type of water erosion is the main source of sedimentation in many catchment areas in the north of Iran. Since many national assessment approaches have applied only qualitative models, the aim of this study is to predict the spatial distribution of gully erosion processes by means of detailed terrain analysis and GIS-based logistic regression in the loess deposits of a case study area in Golestan Province. In this study, a DEM with 25 m resolution derived from ASTER data was used. Landsat ETM data were used to map land use. The TreeNet model, a stochastic modeling approach, was applied to predict the areas susceptible to gully erosion. For model evaluation based on ROC analysis, 80% of the data were set aside for learning and 20% for testing. GIS and satellite image analysis techniques were thus applied to derive the input information for the stochastic model. The result of this study is a highly accurate map of gully erosion potential.
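TreeNet is a commercial implementation of stochastic gradient boosting; a minimal open-source stand-in using scikit-learn's GradientBoostingClassifier on synthetic terrain attributes is sketched below. The feature set, the label-generating rule, and the 80/20 split are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical per-grid-cell terrain attributes: slope, plan curvature,
# stream power index, land-use code (all synthetic)
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, 1000) > 1).astype(int)  # gully?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)       # TreeNet-style model
prob = model.predict_proba(X_te)[:, 1]                      # susceptibility map values
print("AUC:", roc_auc_score(y_te, prob))                    # ROC-based evaluation
```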

Keywords: TreeNet model, terrain analysis, Golestan Province, Iran

Procedia PDF Downloads 535
19028 Time "And" Dimension(s) - Visualizing the 4th and 4+ Dimensions

Authors: Siddharth Rana

Abstract:

As we know so far, there are 3 dimensions that we are capable of interpreting and perceiving, and there is a 4th dimension, called time, about which we don't know much yet. We, as humans, live in the 4th dimension, not the 3rd. We travel 3 dimensionally but cannot yet travel 4 dimensionally; perhaps if we could, then visiting the past and the future would be like climbing a mountain or going down a road. So far, we humans are not even capable of imagining any higher dimensions than the three dimensions in which we can travel. We are the beings of the 4th dimension; we are the beings of time; that is why we can travel 3 dimensionally. However, if, say, there were beings of the 5th dimension, then they would easily be able to travel 4 dimensionally, i.e., they could travel in the 4th dimension as well. Beings of the 5th dimension can easily time travel, but beings of the 4th dimension, like us, cannot, because we live in a 4-D world, traveling 3 dimensionally. That means that to ever time travel, we would need to reach a higher dimension and not only perceive it but also be able to travel in it. However, traveling to the past is far less plausible than traveling to the future, and even if traveling to the past were possible, it would be very unlikely that an event in the past could be changed. In this paper, some approaches are provided to define time, our movement in time toward the future, some aspects of time travel using dimensions, and how we can perceive a higher dimension.

Keywords: time, dimensions, String theory, relativity

Procedia PDF Downloads 107