Search results for: location quotients model

17172 Interoperability Maturity Models for Consideration When Using School Management Systems in South Africa: A Scoping Review

Authors: Keneilwe Maremi, Marlien Herselman, Adele Botha

Abstract:

The main purpose of this paper is to determine which Interoperability Maturity Models should be considered when using School Management Systems (SMS), thereby helping schools identify the Interoperability Maturity Model best suited to their SMS. To address this purpose, a scoping review was applied covering papers published from 2012 to 2019, and the different types of Interoperability Maturity Models are compared in detail, including their background, levels of interoperability, and the areas each Maturity Model addresses. The literature was obtained from the IEEE Xplore and Scopus databases and from the Harzing's and Google Scholar search engines. The topic of the paper was used as a search term and 'Interoperability Maturity Models' was used as a keyword. The data were analyzed in terms of the definition of interoperability, Interoperability Maturity Models, and levels of interoperability. The results provide a table showing the focus area of concern for each Maturity Model, based on the 24 papers found to be most relevant out of the 740 publications initially identified in the field. The most frequently discussed Interoperability Maturity Models for consideration were the Information Systems Interoperability Maturity Model (ISIMM) and the Organizational Interoperability Maturity Model for C2 (OIM).

Keywords: interoperability, interoperability maturity model, school management system, scoping review

Procedia PDF Downloads 209
17171 Stability Bound of Ruin Probability in a Reduced Two-Dimensional Risk Model

Authors: Zina Benouaret, Djamil Aissani

Abstract:

In this work, we introduce the qualitative and quantitative concepts of the strong stability method for a risk process modeling two lines of business of the same insurance company, or an insurance company and a reinsurance company that share both claims and premiums in a given proportion. The proposed approach is based on identifying the ruin probability associated with the considered model with the stationary distribution of a Markov random process called the reversed process. Our objective, after clarifying the stability condition and the perturbation domain of the parameters, is to obtain a stability inequality for the ruin probability, which is then used to estimate the approximation error made when the considered model replaces a model with perturbed parameters. In the stability bound obtained, all constants are written explicitly.

Keywords: Markov chain, risk models, ruin probabilities, strong stability analysis

Procedia PDF Downloads 250
17170 Co-integration for Soft Commodities with Non-Constant Volatility

Authors: E. Channol, O. Collet, N. Kostyuchyk, T. Mesbah, Quoc Hoang Long Nguyen

Abstract:

In this paper, a pricing model is proposed for co-integrated commodities, extending the Larsson model. Futures formulae have been derived and tests have been performed with non-constant volatility. The model has been applied to energy commodities (gas, CO2, energy) and soft commodities (corn, wheat). Results show that non-constant volatility leads to more accurate short-term prices, which provides a better evaluation of value-at-risk and, more generally, improves risk management.
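
As an illustration of the kind of co-integration check that underlies such a model, the sketch below runs an Engle-Granger test on two simulated commodity price series with statsmodels; the series, data, and parameters are hypothetical, and the authors' extended Larsson pricing model itself is not reproduced here.

```python
# A minimal sketch of an Engle-Granger cointegration test between two
# simulated "soft commodity" price series (hypothetical data, not the
# authors' dataset or their extended Larsson pricing model).
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)

# Build two co-integrated series: wheat follows a random walk and corn
# tracks it with stationary noise, so a linear combination is stationary.
n = 500
wheat = np.cumsum(rng.normal(0, 1, n)) + 100.0
corn = 0.8 * wheat + rng.normal(0, 1, n) + 20.0

# Engle-Granger two-step test: regress one series on the other and test
# the residuals for a unit root (null hypothesis: no cointegration).
t_stat, p_value, crit_values = coint(corn, wheat)
print(f"test statistic = {t_stat:.3f}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: the two series appear co-integrated.")
```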

Keywords: co-integration, soft commodities, risk management, value-at-risk

Procedia PDF Downloads 548
17169 Modeling Sustainable Truck Rental Operations Using Closed-Loop Supply Chain Network

Authors: Khaled S. Abdallah, Abdel-Aziz M. Mohamed

Abstract:

The moving industry consumes numerous resources and disposes of large quantities of used packaging materials. Proper sorting, recycling, and disposal of these packaging materials are necessary to avoid severe pollution. This paper presents a conceptual model for sustainable truck rental operations as an alternative to conventional ones. An optimization model was developed to select the locations of truck rental centers, collection sites, and maintenance and repair sites, and to identify the rental fees to be charged on all routes so as to maximize total closed-loop supply chain profit. The model considers the fixed costs of vehicle purchasing, the costs of constructing collection and repair centers, and the fixed costs paid to use disposal and recycling centers. Operating costs include truck maintenance and repair, the cost of recycling and disposing of the packing materials, and the cost of relocating trucks. A mixed-integer model is developed, followed by a simulation model to examine the factors affecting its operation.
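
A minimal sketch of the kind of mixed-integer formulation described above is given below using PuLP; the site names, costs, and profits are hypothetical placeholders, and the rental-fee pricing and simulation components of the authors' model are omitted.

```python
# Minimal facility-location sketch in the spirit of the closed-loop model:
# choose which rental/collection centers to open and how to assign demand
# zones to them, trading off fixed opening costs against service profit.
# All data below are hypothetical placeholders.
import pulp

centers = ["C1", "C2", "C3"]          # candidate rental/collection centers
zones = ["Z1", "Z2", "Z3", "Z4"]      # demand zones
fixed_cost = {"C1": 500, "C2": 650, "C3": 400}
profit = {  # net profit of serving a zone from a center (revenue - operating cost)
    ("C1", "Z1"): 120, ("C1", "Z2"): 90,  ("C1", "Z3"): 60,  ("C1", "Z4"): 40,
    ("C2", "Z1"): 80,  ("C2", "Z2"): 130, ("C2", "Z3"): 110, ("C2", "Z4"): 70,
    ("C3", "Z1"): 50,  ("C3", "Z2"): 70,  ("C3", "Z3"): 95,  ("C3", "Z4"): 125,
}

model = pulp.LpProblem("truck_rental_clsc_sketch", pulp.LpMaximize)
open_c = pulp.LpVariable.dicts("open", centers, cat="Binary")
assign = pulp.LpVariable.dicts("assign", list(profit.keys()), cat="Binary")

# Objective: assignment profit minus fixed costs of opened centers.
model += (
    pulp.lpSum(profit[k] * assign[k] for k in profit)
    - pulp.lpSum(fixed_cost[c] * open_c[c] for c in centers)
)

# Each zone is served by exactly one center, and only by an opened center.
for z in zones:
    model += pulp.lpSum(assign[(c, z)] for c in centers) == 1
for (c, z) in profit:
    model += assign[(c, z)] <= open_c[c]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened centers:", [c for c in centers if open_c[c].value() == 1])
```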

Keywords: modeling, truck rental, supply chain management

Procedia PDF Downloads 228
17168 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field has created a need for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline and developed a framework that uses different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, 'CombinedScore,' as the evaluation criterion. This score is formed by combining labeled source information with the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through an equality-of-opportunity violation measurement, and model performance was further improved through calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
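
The sketch below illustrates one plausible reading of such a combined criterion: average predictive entropy on unlabeled OOD data is mapped to a confidence-style score and blended with labeled source accuracy via a harmonic mean. The exact construction of the authors' CombinedScore is not specified in the abstract, so this is an assumption-laden illustration only.

```python
# Hypothetical sketch of an entropy-plus-source-accuracy selection score.
# The harmonic-mean combination mirrors the abstract's description, but the
# precise definition of the authors' "CombinedScore" is assumed, not given.
import numpy as np


def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of each row of predicted class probabilities."""
    eps = 1e-12
    return -np.sum(probs * np.log(probs + eps), axis=1)


def combined_score(ood_probs: np.ndarray, source_accuracy: float) -> float:
    """Blend OOD confidence (1 - normalized entropy) with source accuracy."""
    n_classes = ood_probs.shape[1]
    max_entropy = np.log(n_classes)                      # entropy of a uniform prediction
    confidence = 1.0 - predictive_entropy(ood_probs).mean() / max_entropy
    # Harmonic mean penalizes models that are strong on only one of the two terms.
    return 2.0 * confidence * source_accuracy / (confidence + source_accuracy + 1e-12)


# Toy comparison of two candidate models (random predictions for illustration).
rng = np.random.default_rng(0)
model_a = rng.dirichlet(alpha=[5, 1, 1], size=200)       # confident predictions
model_b = rng.dirichlet(alpha=[1, 1, 1], size=200)       # diffuse predictions
print("model A:", round(combined_score(model_a, source_accuracy=0.85), 3))
print("model B:", round(combined_score(model_b, source_accuracy=0.88), 3))
```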

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 125
17167 Evaluation of Biochemical Oxygen Demand and Dissolved Oxygen for Thames River by Using Stream Water Quality Model

Authors: Ghassan Al-Dulaimi

Abstract:

This paper studies the biochemical oxygen demand (BOD5) and dissolved oxygen (DO) of the Thames River (Ontario, Canada). Water samples were collected from the Thames River at different points between Chatham and Woodstock and analysed for various water quality parameters during the low-flow season (April). The study involves the application of the stream water quality model QUAL2K to simulate and predict the dissolved oxygen (DO) and biochemical oxygen demand (BOD5) profiles of the Thames River over a stretch of 251 kilometers. The model output showed that DO in the entire river was within the limit of not less than 4 mg/L. For carbonaceous biochemical oxygen demand (CBOD), the river may be divided into two main reaches: the first extends from Chatham (0 km) to London (150 km) and has a CBOD concentration of 2 mg/L, while the second, which begins at London and extends to near Woodstock (73 km), has a CBOD range of 2–4 mg/L.
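
For context, the classical Streeter-Phelps oxygen-sag relation that underlies river DO/BOD models of this kind can be sketched as below; the deoxygenation and reaeration rates and the initial conditions are hypothetical values, not the calibrated QUAL2K parameters used in the study.

```python
# Classical Streeter-Phelps DO sag sketch (not the full QUAL2K model).
# D(t) = kd*L0/(ka-kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)
# where D is the DO deficit, L0 the initial CBOD, kd/ka the deoxygenation
# and reaeration rates. All parameter values below are hypothetical.
import numpy as np

kd, ka = 0.35, 0.70        # deoxygenation / reaeration rates (1/day)
L0, D0 = 4.0, 1.0          # initial CBOD and DO deficit (mg/L)
DO_sat = 9.0               # saturation DO (mg/L)
velocity_km_per_day = 20.0 # stream velocity, converts travel time to distance

t = np.linspace(0.0, 12.0, 121)   # travel time (days)
deficit = (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)
DO = DO_sat - deficit
distance = velocity_km_per_day * t

# Location and magnitude of the minimum (critical) dissolved oxygen.
i_min = int(np.argmin(DO))
print(f"critical DO = {DO[i_min]:.2f} mg/L at ~{distance[i_min]:.0f} km downstream")
```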

Keywords: biochemical oxygen demand, dissolved oxygen, Thames river, QUAL2K model

Procedia PDF Downloads 95
17166 Drain Current for Various Values of Mobility in the GaAs MESFET

Authors: S. Belhour, A. K. Ferouani, C. Azizi

Abstract:

In recent years, considerable effort (experiments, numerical simulation, and theoretical prediction models) has been devoted to GaAs MESFETs, which are characterised by high efficiency and low cost. An improved analytical physics-based model for simulating the performance of GaAs MESFETs is therefore proposed, intended for use in high-frequency device design. The model is based on mathematical analysis, and a new approach to the standard model is proposed. This approach allows an applicable model to be conceived for MESFETs operating in the turn-on or pinch-off region, valid for both short-channel and long-channel MESFETs, in which the two-dimensional potential distribution contributed by the depletion layer under the gate is obtained by a conventional approximation. Moreover, comparisons between the analytical models for different values of mobility are presented, and good agreement is obtained.
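
As a generic illustration of how an analytical MESFET drain-current expression responds to a transconductance (and hence mobility-related) parameter, the sketch below evaluates the widely used Curtice quadratic model; this is a textbook expression with hypothetical parameter values, not the improved model proposed by the authors.

```python
# Curtice quadratic MESFET model (textbook form, not the authors' model):
#   Ids = beta * (Vgs - Vto)^2 * (1 + lambda*Vds) * tanh(alpha*Vds)
# The beta parameter scales with carrier mobility, so sweeping it mimics
# mobility variation. All parameter values are hypothetical.
import numpy as np

def drain_current(vgs, vds, beta, vto=-2.0, lam=0.05, alpha=2.0):
    vgs_eff = np.maximum(vgs - vto, 0.0)          # device is off below threshold
    return beta * vgs_eff**2 * (1.0 + lam * vds) * np.tanh(alpha * vds)

vds = np.linspace(0.0, 5.0, 6)
for beta in (0.02, 0.03, 0.04):                   # A/V^2, stands in for mobility change
    ids_ma = 1e3 * drain_current(vgs=-0.5, vds=vds, beta=beta)
    print(f"beta={beta:.2f}:", np.round(ids_ma, 2), "mA")
```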

Keywords: analytical, gallium arsenide, MESFET, mobility, models

Procedia PDF Downloads 76
17165 Diabetes Diagnosis Model Using Rough Set and K-Nearest Neighbor Classifier

Authors: Usiobaifo Agharese Rosemary, Osaseri Roseline Oghogho

Abstract:

Diabetes is a complex group of diseases with a variety of causes; it is a disorder of the body's metabolism in the digestion of carbohydrate foods. The application of machine learning in medical diagnosis has been the focus of many researchers, and the use of recognition and classification models as decision support tools has helped medical experts in the diagnosis of diseases. Considering the large volume of medical data, which requires special techniques, experience, and high diagnostic skill, an artificial intelligence system that assists medical personnel and enhances their efficiency and accuracy in diagnosis will be an invaluable tool. This study proposes a diabetes diagnosis model using rough sets and a K-nearest neighbor classifier. The system consists of two modules, a feature extraction module and a predictor module: rough set theory is used to preprocess the attributes, while the K-nearest neighbor classifier is used to classify the given data. The dataset used for this model was taken from the University of Benin Teaching Hospital (UBTH) database. Half of the data was used for training and the other half for testing the system. The proposed model achieved over 80% accuracy.
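
A minimal sketch of the classification stage is shown below using scikit-learn's K-nearest neighbor classifier on synthetic data with a 50/50 train/test split, mirroring the evaluation described above; the rough-set attribute reduction step and the UBTH data are not reproduced, so the feature filter shown is only a stand-in.

```python
# Minimal KNN classification sketch (synthetic data, not the UBTH dataset;
# a simple variance-based filter stands in for the rough-set reduct step).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=10, n_informative=5,
                           random_state=0)

# Stand-in attribute reduction (the paper uses rough set theory instead).
X_reduced = VarianceThreshold(threshold=0.1).fit_transform(X)

# 50/50 split, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X_reduced, y, test_size=0.5, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```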

Keywords: classifier algorithm, diabetes, diagnostic model, machine learning

Procedia PDF Downloads 337
17164 Identification of Accumulated Hydrocarbon Based on Heat Propagation Analysis in Order to Develop Mature Field: Case Study in South Sumatra Basin, Indonesia

Authors: Kukuh Suprayogi, Muhamad Natsir, Olif Kurniawan, Hot Parulian, Bayu Fitriana, Fery Mustofa

Abstract:

This new approach utilizes heat propagation analysis, studying and evaluating the effect of the presence of hydrocarbons on the heat flow that travels from the subsurface to the surface. Heat propagation is governed by the thermal conductivity of rocks. The thermal conductivity of a rock is a quantity that describes its ability to conduct heat and depends on the constituent lithology, the porosity, and the pore-filling fluid. The higher the thermal conductivity of a rock, the more easily heat flows through it. By the same token, heat flows more easily through rock when the pores are filled with water than when they are filled with hydrocarbons, given that hydrocarbons act more as thermal insulators. The main objective of this research is to model the heat propagation calculation, in degrees Celsius, from the subsurface to the surface, and then to compare it with the surface temperature measured directly at the location. To calculate heat propagation, we first determine the thermal conductivity of the rocks; because the rocks at the calculation point are not homogeneous but consist of strata, we determine the mineral constituents and porosity of each stratum. For the pore-filling fluid, we assume that all pores are filled with water. Once a thermal conductivity value is obtained for each rock unit, we model the heat propagation profile from the bottom to the surface. The initial temperature comes from bottom-hole temperature (BHT) data obtained from drilling. The calculated temperatures at each depth are plotted as temperature-versus-depth profiles describing heat propagation from the bottom of the well to the surface, noting that the pore fluid is assumed to be water. In practice, we can identify the magnitude of the effect of hydrocarbons in reducing the heat that reaches the surface by comparing the calculated heat propagation at a given point with the surface temperature measured at that point, assuming that the measured surface temperature originates from the asthenosphere. This publication shows that hydrocarbon accumulations can be identified by analysing heat propagation profiles, which could serve as a method for identifying the presence of hydrocarbons.
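
The sketch below illustrates the underlying calculation in its simplest steady-state form: a one-dimensional temperature profile through a stack of strata, each with its own thermal conductivity, computed from a bottom-hole temperature and an assumed constant heat flux. The layer properties, heat flux, and BHT are hypothetical values, not the study's data.

```python
# Steady-state 1D conduction through layered strata (hypothetical values).
# With a constant vertical heat flux q, the temperature drop across a layer
# of thickness d and conductivity k is q*d/k, so water-filled (higher-k)
# layers transmit heat more easily than hydrocarbon-filled (lower-k) ones.
q = 0.065  # heat flux, W/m^2

# (layer name, thickness in m, effective conductivity in W/(m*K)), top to bottom.
strata = [
    ("shale",             800.0, 1.5),
    ("water-filled sand", 400.0, 2.8),
    ("limestone",         600.0, 2.2),
]

bht = 85.0  # bottom-hole temperature in degrees Celsius (from drilling data)

# Walk upward from the bottom of the well, removing each layer's temperature drop.
temperature = bht
print(f"bottom of well: {temperature:.1f} C")
for name, thickness, conductivity in reversed(strata):
    temperature -= q * thickness / conductivity
    print(f"top of {name:<17s}: {temperature:.1f} C")
print(f"predicted surface temperature: {temperature:.1f} C")
```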

Keywords: thermal conductivity, rock, pore fluid, heat propagation

Procedia PDF Downloads 109
17163 Validation Study of Radial Aircraft Engine Model

Authors: Lukasz Grabowski, Tytus Tulwin, Michal Geca, P. Karpinski

Abstract:

This paper presents a radial aircraft engine model created in the AVL Boost software. It is a one-dimensional physical model of the engine, which enables us to investigate the impact of the ignition system design on engine performance (power, torque, fuel consumption). In addition, the model allows research under variable environmental conditions reflecting varied flight conditions (altitude, humidity, cruising speed). Before the simulation research, parameter identification and model validation were carried out. In order to verify the take-off power of the gasoline radial aircraft engine model, a validation study was performed. The first stage of identification was completed with reference to the technical documentation provided by the engine manufacturer and to experiments on a test stand with the real engine. The second stage involved a comparison of simulation results with the results of engine stand tests performed on a WSK 'PZL-Kalisz', where the engine was loaded by a propeller in a special test bench. Identifying the model parameters involved comparing the test results with the simulation in terms of pressure behind the throttles, pressure in the inlet pipe, the time course of pressure in the first inlet pipe, power, and specific fuel consumption. Accordingly, the required coefficients and the error of the simulation calculation relative to the real-object experiments were determined. The obtained pressure time course and values are compatible with the experimental results, and the engine power and specific fuel consumption agree closely with the bench tests. The mapping error does not exceed 1.5%, which positively verifies the combustion model and allows us to predict engine performance if the combustion process is modified. Subsequent tests verified the model completely: the maximum mapping error for the pressure behind the throttles and the inlet pipe pressure is 4%, which confirms that the model of the inlet duct in the engine with the charging compressor is correct.

Keywords: 1D-model, aircraft engine, performance, validation

Procedia PDF Downloads 338
17162 Evaluation of Low-Reducible Sinter in Blast Furnace Technology by Mathematical Model Developed at Centre ENET, VSB-Technical University of Ostrava

Authors: S. Jursová, P. Pustějovská, S. Brožová, J. Bilík

Abstract:

The paper deals with possibilities for interpreting iron ore reducibility tests. It presents a mathematical model developed at Centre ENET, VŠB–Technical University of Ostrava, Czech Republic, for evaluating metallurgical blast furnace feedstock such as iron ore, sinter, or pellets. Based on the test data, the model predicts the material's usage in blast furnace technology and its effects on the production parameters of the shaft aggregate. The paper first summarizes the general concepts and experience in the mathematical modelling of iron ore reduction, and presents the basic equations for the calculation and the main parts of the developed model. The experimental part gives an example of the model's usage, describing how the data are used for predictive calculations and presenting the material and the method of the iron ore reducibility test. The effects of the material used on carbon consumption, the rate of direct reduction, and the overall reduction process are then interpreted graphically.

Keywords: blast furnace technology, iron ore reduction, mathematical model, prediction of iron ore reduction

Procedia PDF Downloads 675
17161 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). However, due to non-line-of-sight conditions, multipath, and weather, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. These numerical results show that, in comparison to traditional methods, the proposed SRCLoc method can significantly improve positioning performance and reduce radio map construction costs.
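
As an illustration of the fingerprint feature extraction step, the sketch below embeds synthetic received-signal-strength fingerprints with scikit-learn's t-SNE; the fingerprint dimensions and reference-point labels are hypothetical, and the S-DCGAN radio map construction itself is not shown.

```python
# t-SNE embedding of synthetic RSS fingerprints (hypothetical data; the
# paper's S-DCGAN radio-map construction stage is not reproduced here).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_reference_points, n_samples_each, n_aps = 5, 40, 20

# Each reference point gets its own mean RSS vector plus measurement noise.
means = rng.uniform(-90, -40, size=(n_reference_points, n_aps))
fingerprints = np.vstack([
    means[i] + rng.normal(0, 3, size=(n_samples_each, n_aps))
    for i in range(n_reference_points)
])
labels = np.repeat(np.arange(n_reference_points), n_samples_each)

# Project the high-dimensional fingerprints onto 2D dominant features.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print("embedded shape:", embedding.shape)       # (200, 2)
print("first reference point centroid:", embedding[labels == 0].mean(axis=0))
```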

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 77
17160 A Model for Operating Rooms Scheduling

Authors: Jose Francisco Ferreira Ribeiro, Alexandre Bevilacqua Leoneti, Andre Lucirton Costa

Abstract:

This paper presents a mathematical model in binary (0/1) variables for assigning surgical procedures to the operating rooms of a hospital. The proposed model is based on the generalized assignment problem: it maximizes the sum of doctors' preferences for the use of the operating rooms while respecting the time available in each room. The corresponding program was written in Visual Basic for Microsoft Excel and tested by scheduling surgeries at St. Lydia Hospital in Ribeirao Preto, Brazil.
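
A minimal sketch of the generalized assignment formulation described above is given below using PuLP; the preference scores, procedure durations, and room capacities are hypothetical placeholders rather than the hospital's data, and the original implementation is in Visual Basic for Excel, not Python.

```python
# Generalized-assignment sketch: assign each surgery to one operating room,
# maximizing preference scores subject to each room's available time.
# All numbers are hypothetical placeholders.
import pulp

surgeries = ["S1", "S2", "S3", "S4", "S5"]
rooms = ["OR1", "OR2"]
duration = {"S1": 120, "S2": 90, "S3": 180, "S4": 60, "S5": 150}   # minutes
capacity = {"OR1": 300, "OR2": 330}                                # minutes available
preference = {  # doctor's preference for performing surgery s in room r
    ("S1", "OR1"): 8, ("S1", "OR2"): 5, ("S2", "OR1"): 6, ("S2", "OR2"): 7,
    ("S3", "OR1"): 9, ("S3", "OR2"): 4, ("S4", "OR1"): 3, ("S4", "OR2"): 8,
    ("S5", "OR1"): 5, ("S5", "OR2"): 9,
}

model = pulp.LpProblem("operating_room_assignment", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", list(preference.keys()), cat="Binary")

model += pulp.lpSum(preference[k] * x[k] for k in preference)      # total preference
for s in surgeries:                                                # one room per surgery
    model += pulp.lpSum(x[(s, r)] for r in rooms) == 1
for r in rooms:                                                    # room time limit
    model += pulp.lpSum(duration[s] * x[(s, r)] for s in surgeries) <= capacity[r]

model.solve(pulp.PULP_CBC_CMD(msg=False))
for (s, r), var in x.items():
    if var.value() == 1:
        print(f"{s} -> {r}")
```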

Keywords: generalized assignment problem, logistics, optimization, scheduling

Procedia PDF Downloads 294
17159 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people's lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot be received indoors, since GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices and limits IoT applications. Consequently, precise, seamless, and ubiquitous positioning, navigation, and timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization, and we employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 44
17158 Improving the Run Times of Existing and Historical Demand Models Using Simple Python Scripting

Authors: Abhijeet Ostawal, Parmjit Lall

Abstract:

The run times of a large strategic model that we were managing had become too long, leading to delays in project delivery, increased costs, and loss of productivity. Software developers continuously work towards more efficient tools by changing their algorithms and processes. The issue faced by our team was how to apply the latest technologies to validated existing models that are based on much older versions of the software and lack the latest capabilities. The multi-modal transport model we had could only be run with assignments in sequential order. Recent upgrades to the software now allow the assignment to be run in parallel, a concept called parallelisation; here, it is implemented through a Python script that works only within the latest version of the software. A full model transfer to the latest version was not possible due to time, budget, and potential changes in trip assignment. This article shows how to adapt and update the Python script so that it can be used with older software versions by calling the latest version for the assignment and then returning to the old version, without affecting the results. Through a process of trial and error, run-time savings of up to 30-40% have been achieved. Assignment results were maintained within the older version, and through this learning process we have applied the methodology to other, even older versions of the software, resulting in large time savings and greater productivity and efficiency for both client and consultant.

Keywords: model run time, demand model, parallelisation, python scripting

Procedia PDF Downloads 121
17157 Detection of Change Points in Earthquakes Data: A Bayesian Approach

Authors: F. A. Al-Awadhi, D. Al-Hulail

Abstract:

In this study, we applied a Bayesian hierarchical model to detect single and multiple change points in daily earthquake body wave magnitude. Change point analysis is used in both backward (off-line) and forward (on-line) statistical research; in this study, the backward approach is used. Different types of change parameters are considered (mean, variance, or both). The posterior model and the conditional distributions for single and multiple change points are derived and implemented using BUGS software. The model is applicable to any set of data, and its sensitivity is tested using different prior and likelihood functions. Using Mb data, we concluded that between January 2002 and December 2003, three changes occurred in the mean magnitude of Mb in Kuwait and its vicinity.
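
The sketch below illustrates the simplest version of this idea: the posterior over a single change point in the mean of a Gaussian sequence, computed by enumeration with a uniform prior and known variance. It is a toy stand-in for the hierarchical BUGS model used in the study, with simulated rather than Mb data.

```python
# Toy Bayesian single change-point detection in the mean of a Gaussian
# series (uniform prior over the change location, known variance). This is a
# simplified stand-in for the hierarchical BUGS model; the data are simulated.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.3
magnitudes = np.concatenate([rng.normal(4.2, sigma, 60),    # before the change
                             rng.normal(4.7, sigma, 40)])   # after the change
n = len(magnitudes)

log_post = np.full(n, -np.inf)
for tau in range(1, n - 1):                 # candidate change point locations
    before, after = magnitudes[:tau], magnitudes[tau:]
    # Log-likelihood with each segment's mean set to its MLE (profile likelihood),
    # plus a flat prior over tau: a simple approximation to the full posterior.
    ll = (-np.sum((before - before.mean()) ** 2)
          - np.sum((after - after.mean()) ** 2)) / (2 * sigma**2)
    log_post[tau] = ll

posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()
tau_hat = int(np.argmax(posterior))
print("most probable change point index:", tau_hat)
print("posterior mass within +/-3 of it:",
      round(posterior[max(0, tau_hat - 3):tau_hat + 4].sum(), 3))
```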

Keywords: multiple change points, Markov Chain Monte Carlo, earthquake magnitude, hierarchical Bayesian model

Procedia PDF Downloads 459
17156 Green IT-Outsourcing Assurance Model for IT-Outsourcing Vendors

Authors: Siffat Ullah Khan, Rahmat Ullah Khan, Rafiq Ahmad Khan, Habibullah Khan

Abstract:

Green IT, or green computing, has emerged as a fast-growing business paradigm in recent years, aimed at developing energy-efficient software and peripheral devices. With the constant evolution of technology and the world's critical environmental status, all private and public information technology (IT) businesses are moving towards sustainability. Through a systematic literature review and a questionnaire survey, we identified a total of 9 motivators faced by vendors in IT-outsourcing relationships, of which 7 were ranked as critical. We also identified a total of 21 practices for addressing these critical motivators. Based on these inputs, we developed the Green IT-Outsourcing Assurance Model (GITAM) for IT-outsourcing vendors. The model comprises four levels, i.e. Initial, White, Green, and Grey, each comprising different critical motivators and their relevant practices. We conclude that our model, GITAM, will assist IT-outsourcing vendors in gauging their level so they can manage IT-outsourcing activities in a green and sustainable fashion, helping the environment and reducing carbon emissions. The model will assist vendors in improving their current level by suggesting various practices, and it will contribute to the body of knowledge in the field of Green IT.

Keywords: Green IT-outsourcing assurance model (GITAM), systematic literature review, empirical study, case study

Procedia PDF Downloads 255
17155 The Investigation of Oil Price Shocks by Using a Dynamic Stochastic General Equilibrium: The Case of Iran

Authors: Bahram Fathi, Karim Alizadeh, Azam Mohammadbagheri

Abstract:

The aim of this paper is to investigate the role of oil price shocks in explaining business cycles in Iran using a dynamic stochastic general equilibrium (DSGE) approach. To achieve this objective, we derive a DSGE model that allows for both world oil price and productivity shocks and calibrate it to the Iranian economy. We then compare the moments from the theoretical model, with single and with multiple shocks, with those obtained from the actual data to see the extent to which business cycles in Iran can be explained by the total oil revenue shock, and we use impulse response functions (IRF) to evaluate the role of world oil price shocks. The results indicate that productivity shocks are relatively more important to business cycles than oil shocks. The model with two shocks produces different values for volatility, but these values have the same ranking as the actual data for most variables, and the actual data are close to the ratio of standard deviations to output obtained from the two-shock model. The model with only a productivity shock produces volatility magnitudes most similar to the actual data. The IRF shows no effect of an oil shock on the capital stock or on labor hours, which is a feature of the model: when the log-linearized system of equations is solved numerically, investment and labor hours are not functions of the oil shock. This research therefore recommends using different techniques to check the model's robustness, for example by inducing stationarity in the model differently so that all decision variables become functions of the oil shock, or by imposing a bond adjustment cost. Finally, implications of the findings are presented and interpreted in accordance with economic theory.

Keywords: oil price, shocks, dynamic stochastic general equilibrium, Iran

Procedia PDF Downloads 440
17154 [Keynote Talk]: The Challenges and Solutions for Developing Mobile Apps in a Small University

Authors: Greg Turner, Bin Lu, Cheer-Sun Yang

Abstract:

As computing technology advances, smartphone applications can assist student learning in a pervasive way. For example, using a mobile app for the PA Common Trees, Pests, and Pathogens in the field as a reference tool allows middle school students to learn about trees and associated pests/pathogens without carrying a textbook. Past research has studied the Mobile Application Software Development Life Cycle (MADLC), including traditional models such as the waterfall model and more recent Agile methods, while other work has examined issues related to the software development process. Very little research addresses the development of three heterogeneous mobile systems simultaneously in a small university where the availability of developers is an issue. In this paper, we propose a hybrid of the waterfall model and the Agile model, known in practice as the Relay Race Methodology (RRM), to reflect the concept of racing and relaying for scheduling. Based on the development project, we observe that the modeling of the transition between any two phases arises naturally. Thus, we claim that the RRM model can provide a de facto, rather than a de jure, basis for the core concept in the MADLC. The paper first introduces the background of the project, then points out the challenges, followed by our solutions. Finally, the lessons learned and future work are presented.

Keywords: agile methods, mobile apps, software process model, waterfall model

Procedia PDF Downloads 409
17153 Hemispheric Locus and Gender Predict the Delay between the Moment of Stroke and Hospitalization

Authors: D. Anderlini, G. Wallis

Abstract:

Background: The number of people experiencing stroke is steadily increasing due to changes in diet and lifestyle, longer life expectancy resulting in an older population, and higher survival rates as a consequence of improvements in care during the acute phase. This study considers what risk factors might contribute to delayed entry to hospital for treatment. Methods: We analyzed data from 2472 patients admitted to the Stroke Unit of the Royal Brisbane Women's Hospital, Australia, between 2002 and 2011. Results: Previous studies have reported that factors contributing to delay include the patient's age, the time of day, physical location, visiting a GP instead of going to the emergency department, means of transport, severity of symptoms, and type of stroke. Contrary to the findings of other studies, we found a strong correlation between side of lesion and delay in admission: patients with right hemisphere lesions had an average delay of 3.78 days, while patients with left hemisphere lesions had an average delay of 1.49 days. Damage to the right hemisphere generally results in motor impairment of the non-dominant hand and no speech impediment. In contrast, left hemisphere lesions can result in deficits of dominant-hand function and in aphasia, which will be noticed even if their impact on performance is relatively minor. A finding that goes against many previous studies is that women get to the hospital much sooner than men, with an average delay of 0.92 days in women vs. 3.36 days in men. Conclusion: Acute surgical and pharmacological therapies are most effective if applied immediately after stroke, so delays in admission can be crucial to the degree of recovery. The tendency of patients to overlook symptoms of right hemisphere lesions should be the target of information campaigns both for the general public and for GPs. Why do men go to hospital so late? We don't know yet! Nevertheless, an awareness plan specifically directed at the male population should be on the agenda of health departments.

Keywords: gender, admission delay, stroke location, bioinformatics, biomedicine

Procedia PDF Downloads 231
17152 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models

Authors: Navid Mirzaei Varzeghani, Mahmoud Saffarzadeh, Ali Naderan, Amirhossein Taheri

Abstract:

Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities available for reaching it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can deliver passengers to a destination such as an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, ages greater than 55, and other factors are important for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) clearly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified: the leading district in Tehran was District 2, with 23 visits, and the most popular mode of transportation from that location was an online taxi, with 12 trips. Then, the variables significant for the mode split and travel behavior in accessing the airport were investigated for all systems. Here, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference. It was also demonstrated that improving public transportation travel times reduces the market share of private transportation, including taxicabs. Based on the responses of personal and semi-public vehicle users, passengers' willingness to access the airport via public transportation was explored in order to enhance present services and develop new strategies for providing the most efficient modes of transportation. Using the binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change.
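
A minimal sketch of the kind of binary mode-choice model referred to above is shown below, fitting a logit of "switches to public transport" against travel time, trip purpose, and luggage with statsmodels on synthetic data; the variables and coefficients are hypothetical, not the survey estimates.

```python
# Binary logit sketch for airport access mode choice (synthetic data; the
# explanatory variables and effect sizes are hypothetical, not the survey's).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 356  # same sample size as the survey, but the data are simulated

transit_time = rng.normal(55, 15, n)       # minutes to airport by public transport
business_trip = rng.integers(0, 2, n)      # 1 = business traveler
extra_luggage = rng.integers(0, 2, n)      # 1 = more than one suitcase per person

# Simulated propensity: longer transit times, business purpose, and extra
# luggage all make switching to public transport less likely.
utility = 2.5 - 0.05 * transit_time - 0.8 * business_trip - 0.6 * extra_luggage
switch = (rng.uniform(size=n) < 1 / (1 + np.exp(-utility))).astype(int)

X = sm.add_constant(np.column_stack([transit_time, business_trip, extra_luggage]))
result = sm.Logit(switch, X).fit(disp=False)
print(result.summary(xname=["const", "transit_time", "business_trip", "extra_luggage"]))
```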

Keywords: multimodal transportation, demand modeling, travel behavior, statistical models

Procedia PDF Downloads 175
17151 Physical Theory for One-Dimensional Correlated Electron Systems

Authors: Nelson Nenuwe

Abstract:

The behavior of interacting electrons in one dimension was studied by calculating correlation functions and critical exponents in zero and finite external magnetic fields for arbitrary band filling. The technique employed in this study is based on conformal field theory (CFT). The charge and spin degrees of freedom are separated and described by two independent conformal theories. A detailed comparison of the t-J model with the repulsive Hubbard model was then undertaken, with emphasis on their Tomonaga-Luttinger (TL) liquid properties. Near half-filling, the exponents of the t-J model take the values of the strong-correlation limit of the Hubbard model, and in the low-density limit the exponents are those of a non-interacting system. The critical exponents obtained in this study belong to the repulsive TL liquid (conducting phase) and the attractive TL liquid (superconducting phase). The theoretical results from this study find applications in one-dimensional organic conductors (TTF-TCNQ), organic superconductors (Bechgaard salts), and carbon nanotubes (SWCNTs, DWCNTs, and MWCNTs). For instance, the critical exponent obtained in this study is consistent with the experimental result from optical and photoemission evidence of TL liquid behavior in the one-dimensional metallic Bechgaard salt (TMTSF)2PF6.

Keywords: critical exponents, conformal field theory, Hubbard model, t-J model

Procedia PDF Downloads 345
17150 Modal Analysis of Small Frames using High Order Timoshenko Beams

Authors: Chadi Azoury, Assad Kallassy, Pierre Rahme

Abstract:

In this paper, we consider the modal analysis of small frames. First, we construct a 3D model using H8 elements and find the natural frequencies of the frame, focusing our attention on the modes in the XY plane. Second, we construct a 2D (plane stress) model using Q4 elements; we conclude that the results of the two models are very close to each other. We then formulate the stiffness and mass matrices of the 3-noded Timoshenko beam, which is well suited for thick and short beams such as those in our case. Finally, we model the corners where the horizontal and vertical bars meet with a special matrix. The results of our new model (3-noded Timoshenko beams for the horizontal and vertical bars and a special corner element based on the Q4 elements) are very satisfactory when performing the modal analysis.
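
Regardless of how the element matrices are formed, the modal analysis step reduces to the generalized eigenvalue problem K phi = omega^2 M phi. The sketch below solves it with SciPy for a small spring-mass system standing in for the assembled frame matrices; the K and M values are hypothetical, not the Timoshenko-beam matrices derived in the paper.

```python
# Modal analysis as a generalized eigenvalue problem K*phi = w^2 * M*phi.
# K and M below describe a toy 3-DOF spring-mass system, standing in for the
# assembled frame matrices (not the paper's 3-noded Timoshenko beam elements).
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 400.0, -200.0,    0.0],
              [-200.0,  400.0, -200.0],
              [   0.0, -200.0,  200.0]])   # stiffness (N/m)
M = np.diag([2.0, 2.0, 1.0])               # lumped mass (kg)

# eigh solves the symmetric generalized problem; the eigenvalues are w^2.
eigvals, eigvecs = eigh(K, M)
natural_freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
for i, f in enumerate(natural_freqs_hz, start=1):
    print(f"mode {i}: {f:.2f} Hz, shape {np.round(eigvecs[:, i - 1], 3)}")
```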

Keywords: corner element, high-order Timoshenko beam, Guyan reduction, modal analysis of frames, rigid link, shear locking, and short beams

Procedia PDF Downloads 321
17149 Assessment of Soil Erosion Risk Using Soil and Water Assessment Tool Model: Case of Siliana Watershed, Northwest Tunisia

Authors: Sana Dridi, Jalel Aouissi, Rafla Attia, Taoufik Hermassi, Thouraya Sahli

Abstract:

Soil erosion is an increasing issue in Mediterranean countries. In Tunisia, the capacity of dam reservoirs continues to decrease as a consequence of soil erosion. This study aims to predict sediment yield, in order to improve soil management practices, using the Soil and Water Assessment Tool (SWAT) model in the Siliana watershed (1041.6 km²), located in the northwest of Tunisia. A database was constructed using remote sensing and a Geographical Information System, and climatic and flow data were collected from water resources directorates in Tunisia. The SWAT model was built to simulate hydrological processes and sediment transport, and a sensitivity analysis, calibration, and validation were performed using the SWAT-CUP software. The calibration of the streamflow simulations shows good performance, with NSE and R² values of 0.77 and 0.79, respectively; the validation shows very good performance, with NSE and R² values of 0.80 and 0.88, respectively. After calibration and validation of the streamflow simulation, the model was used to simulate soil erosion and sediment load transport. The spatial distribution of the soil loss rate, used to determine the critical sediment source areas, shows that 63% of the study area has a low soil loss rate of less than 7 t ha⁻¹y⁻¹. The annual average soil loss rate simulated with the SWAT model in the Siliana watershed is 4.62 t ha⁻¹y⁻¹.
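
For reference, the two goodness-of-fit measures quoted above can be computed as in the sketch below; the observed and simulated flow values are made-up numbers used only to show the formulas.

```python
# Nash-Sutcliffe efficiency (NSE) and coefficient of determination (R^2),
# the two calibration metrics quoted in the abstract (made-up flow values).
import numpy as np

observed = np.array([12.0, 18.5, 25.0, 40.2, 31.0, 22.4, 15.8])   # m^3/s
simulated = np.array([11.2, 20.1, 23.5, 37.8, 33.4, 21.0, 17.1])  # m^3/s

# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
nse = 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# R^2 here is the squared Pearson correlation between observed and simulated series.
r2 = np.corrcoef(observed, simulated)[0, 1] ** 2

print(f"NSE = {nse:.2f}, R^2 = {r2:.2f}")
```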

Keywords: water erosion, SWAT model, streamflow, SWAT-CUP, sediment yield

Procedia PDF Downloads 104
17148 The Effect of Culture and Managerial Practices on Organizational Leadership Towards Performance

Authors: Anyia Nduka, Aslan Bin Amad Senin, Ayu Azrin Bte Abdul Aziz

Abstract:

A management practice characterised by a value chain and a relatively flexible culture is replacing the old bureaucratic model of organisational practice that was built on dominance. Using a management practice fruition paradigm, the study delves into the implications of organisational culture and leadership, and develops a theory of leadership called the 'cultural model' of organisational leadership by explaining how the shift from bureaucracy to management practices altered the roles and interactions of leaders. The model is well grounded in leadership theory, considering the concept's adaptability to different leadership ideologies. It applies to organisations where operational procedures and boundaries are not clearly defined, hierarchies are flattened, and work collaborations are sometimes based on contracts rather than employment. This cultural model of organisational leadership is intended to be a useful tool for predicting how effectively a leader will perform.

Keywords: leadership, organizational culture, management practices, efficiency

Procedia PDF Downloads 86
17147 'Call Drop': A Problem for Handover Minimizing the Call Drop Probability Using Analytical and Statistical Method

Authors: Anshul Gupta, T. Shankar

Abstract:

In this paper, we analyze call drops in order to provide good quality of service to users. By optimizing it, we can increase the coverage area and reduce the interference and congestion created in a network. A handover is the transfer of a call from one cell site to another during the call. We analyzed the whole network by two methods: a statistical model and an analytical model. In the statistical model, we collected all the data of the network during the busy hour and over a normal 24 hours, while in the analytical model we derived the equation from which the call drop probability is found. By avoiding unnecessary handovers, we can increase the number of calls per hour. The most important parameter is the coefficient of variation, on which the whole paper is focused.
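
Since the coefficient of variation is the quantity the analysis turns on, a tiny sketch of its computation over call-drop counts is shown below; the hourly counts are invented for illustration.

```python
# Coefficient of variation (std / mean) of hourly dropped-call counts
# (invented counts, shown only to illustrate the statistic).
import numpy as np

dropped_calls_per_hour = np.array([12, 15, 9, 22, 18, 14, 11, 25])

mean = dropped_calls_per_hour.mean()
std = dropped_calls_per_hour.std(ddof=1)   # sample standard deviation
cv = std / mean

print(f"mean = {mean:.1f}, std = {std:.1f}, coefficient of variation = {cv:.2f}")
```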

Keywords: coefficient of variation, mean, standard deviation, call drop probability, handover

Procedia PDF Downloads 491
17146 Serious Game for Learning: A Model for Efficient Game Development

Authors: Zahara Abdulhussan Al-Awadai

Abstract:

In recent years, serious games have started to gain increasing interest as a tool to support learning across different educational and training fields, and they have begun to serve as a powerful educational tool for improving learning outcomes. In this research, we discuss the potential of virtual experiences and games research outside of the games industry and explore the multifaceted impact of serious games and related technologies on various aspects of our lives. We highlight the use of serious games as a tool to improve education and other applications with a purpose beyond entertainment. One of the main contributions of this research is a proposed model that facilitates the design and development of serious games in a flexible and easy-to-use way. This is achieved by exploring different requirements in order to develop a model that describes a serious game structure with a focus on both aspects of serious games (the educational and the entertainment aspect).

Keywords: game development, requirements, serious games, serious game model

Procedia PDF Downloads 62
17145 Computational Models for Accurate Estimation of Joint Forces

Authors: Ibrahim Elnour Abdelrahman Eltayeb

Abstract:

Computational modelling is a method used to investigate joint forces during movement. It can achieve high accuracy in the estimated joint forces via subject-specific models; however, the construction of subject-specific models remains time-consuming and expensive. The purpose of this paper was to identify what alterations can be made to generic computational models to obtain a better estimation of the joint forces, and to appraise the impact of these alterations on the accuracy of the estimated forces. Different alteration strategies were found: the joint model, the muscle model, and the optimisation problem. All of these alterations affected joint contact force accuracy, showing the potential for improving model predictions without involving costly and time-consuming medical images.

Keywords: joint force, joint model, optimisation problem, validation

Procedia PDF Downloads 171
17144 Investigating the Effects of Thermal and Surface Energy on the Two-Dimensional Flow Characteristics of Oil in Water Mixture between Two Parallel Plates: A Lattice Boltzmann Method Study

Authors: W. Hasan, H. Farhat

Abstract:

A hybrid quasi-steady thermal lattice Boltzmann model was used to study the combined effects of temperature and contact angle on the movement of slugs and droplets of oil in water (O/W) systems flowing between two parallel plates. The static contact angle predicted by the model for an O/W droplet deposited on a flat surface with simulated hydrophilic characteristics, at different fluid temperatures, matched the proposed theoretical calculation very well. Furthermore, the model was used to simulate the dynamic behavior of droplets and slugs deposited on the domain's upper and lower surfaces while subjected to parabolic flow conditions. The model accurately reproduced the contact angle hysteresis in the dynamic droplet cases. It was also shown that at elevated temperatures the power required to transport the mixture diminished remarkably.

Keywords: lattice Boltzmann method, Gunstensen model, thermal, contact angle, high viscosity ratio

Procedia PDF Downloads 373
17143 Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications

Authors: Eric Huang, Coleman DeLude, Justin Romberg, Saibal Mukhopadhyay, Madhavan Swaminathan

Abstract:

High-performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the radar cross section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from electromagnetic (EM) simulations is often cumbersome, leading to large storage requirements. This paper proposes a spherical harmonic based anisotropic point scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least squares problem with a special sparsity constraint, which this paper solves using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets.
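
As an illustration of the sparse recovery step, the sketch below applies scikit-learn's standard Orthogonal Matching Pursuit to a synthetic sparse linear problem; it shows the vanilla algorithm only, not the authors' modified version or their spherical harmonic dictionary.

```python
# Vanilla Orthogonal Matching Pursuit on a synthetic sparse recovery problem
# (standard sklearn OMP; the paper's modified OMP and spherical-harmonic
# dictionary are not reproduced here).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_measurements, n_atoms, n_active = 80, 200, 5

# Random dictionary and a sparse coefficient vector (few "scatterers" active).
dictionary = rng.normal(size=(n_measurements, n_atoms))
true_coef = np.zeros(n_atoms)
active = rng.choice(n_atoms, size=n_active, replace=False)
true_coef[active] = rng.normal(scale=3.0, size=n_active)

measurements = dictionary @ true_coef + 0.01 * rng.normal(size=n_measurements)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_active).fit(dictionary, measurements)
recovered = np.flatnonzero(omp.coef_)
print("true active atoms:     ", np.sort(active))
print("recovered active atoms:", recovered)
```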

Keywords: RADAR, RCS, high performance computing, point scatterer model

Procedia PDF Downloads 192