Search results for: match outcome forecasting
1846 The Strategy for Increasing the Competitiveness of Georgia
Authors: G. Erkomaishvili
Abstract:
The paper discusses the economic policy of Georgia aimed at increasing national competitiveness, as well as the tools and means which will help to improve the competitiveness of the country. The sectors of the economy in which the country can achieve a competitive advantage are studied. It is noted that the country's economic policy plays an important role in obtaining and maintaining competitive advantage: the authorities should take measures to ensure a high level of education; scientific and research activities should be funded by the state; foreign direct investment should be attracted mainly into science-intensive industries; and the country should adapt to the latest scientific achievements of the modern world and deepen scientific and technical cooperation. A stable business environment and an export-oriented strategy are the basis for the country's economic growth. As the outcome of the research, the paper suggests a strategy for improving the competitiveness of Georgia; recommendations are provided based on the relevant conclusions.
Keywords: competitive advantage, competitiveness, competitiveness improvement strategy, competitiveness of Georgia
Procedia PDF Downloads 413
1845 Performance and Emissions Analysis of Diesel Engine with Bio-Diesel of Waste Cooking Oils
Authors: Mukesh Kumar, Onkar Singh, Naveen Kumar, Amar Deep
Abstract:
Waste cooking oil is taken as the feedstock for biodiesel production. For this research, waste cooking oil was collected from many hotels and restaurants, and biodiesel was then prepared for experimentation. The prepared biodiesel was mixed with mineral diesel in proportions of 10%, 20%, and 30% to perform tests on a diesel engine. The experimental analysis was carried out at different load conditions to analyze the impact of the blending ratio on the performance and emission parameters. When the blending proportion of biodiesel is increased, the peak pressure decreases due to the fall in the calorific value of the blended mixture. The experimental analysis shows a promising decrease in nitrogen oxides (NOx). A blend of 20% biodiesel with mineral diesel is the best compromise mixing ratio; beyond that, a remarkable deterioration in performance is observed.
Keywords: alternative sources, diesel engine, emissions, performance
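The fall in calorific value with blending ratio noted above can be illustrated with a simple mass-weighted estimate of the blend's lower heating value (LHV). The LHV figures below are typical literature values for mineral diesel and waste-cooking-oil biodiesel, not measurements from this study; treat the sketch as an assumption-laden illustration.

```python
# Mass-weighted lower heating value (LHV) of a biodiesel/diesel blend.
# The two LHV constants are typical literature values (illustrative assumptions),
# not data from the study.

LHV_DIESEL = 42.5     # MJ/kg, mineral diesel (assumed)
LHV_BIODIESEL = 37.5  # MJ/kg, waste-cooking-oil biodiesel (assumed)

def blend_lhv(biodiesel_fraction: float) -> float:
    """Mass-weighted LHV of a blend; fraction is the biodiesel share (0-1)."""
    if not 0.0 <= biodiesel_fraction <= 1.0:
        raise ValueError("fraction must be in [0, 1]")
    return (biodiesel_fraction * LHV_BIODIESEL
            + (1.0 - biodiesel_fraction) * LHV_DIESEL)

for pct in (10, 20, 30):
    print(f"B{pct}: {blend_lhv(pct / 100):.2f} MJ/kg")
```

Because the blend's heating value falls monotonically with biodiesel share, a lower peak cylinder pressure at higher blending ratios is the expected trend.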
Procedia PDF Downloads 179
1844 Software Development for Both Small Wind Performance Optimization and Structural Compliance Analysis with International Safety Regulations
Authors: K. M. Yoo, M. H. Kang
Abstract:
Conventional commercial wind turbine design software is limited to large wind turbines because it does not incorporate the low Reynolds number aerodynamic characteristics typical of small wind turbines. To extract the maximum annual energy product from an intermediately designed small wind turbine using measured wind data, numerous simulations are needed to find a best-fitting planform design with a proper airfoil configuration, since the optimal planform changes with the wind distribution and average wind speed. Though theoretically straightforward, finalizing the conceptual layout of a desired small wind turbine is an inconveniently time-consuming procedure. Thus, to make the simulations easier and faster, a GUI software is developed that conveniently iterates and changes airfoil types, wind data, and geometric blade data. With a magnetic generator torque curve, peak power tracking simulation is also available to better match the blade design with the magnetic generator. A small wind turbine often lacks starting torque due to blade optimization, so this simulation is also embedded, along with yaw design. The software provides various blade cross-section details at the user's convenience, such as skin thickness control with a fiber direction option, spar shape, and their material properties. Since small wind turbines fall under international safety regulations, covering fatigue damage during normal operation and safety load analyses with ultimate excessive loads, load analyses are provided for each category mandated in the safety regulations.
Keywords: GUI software, low Reynolds number aerodynamics, peak power tracking, safety regulations, wind turbine performance optimization
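The annual-energy-product iteration described above can be sketched as a sum of rotor power over a binned wind distribution. The rotor radius, power coefficient, and wind histogram below are illustrative assumptions, not values from the software or the paper.

```python
# Annual energy product (AEP) estimate from binned wind data, of the kind a
# planform-design iteration would repeat for each candidate geometry.
# All numbers here (radius, Cp, wind histogram) are illustrative assumptions.
import math

RHO = 1.225    # air density, kg/m^3
RADIUS = 1.5   # small-turbine rotor radius, m (assumed)
CP = 0.40      # power coefficient at the design tip-speed ratio (assumed)
AREA = math.pi * RADIUS ** 2

def rotor_power(v: float) -> float:
    """Aerodynamic power (W) at wind speed v: P = 0.5 * rho * A * Cp * v^3."""
    return 0.5 * RHO * AREA * CP * v ** 3

# Measured wind distribution: hours per year in each wind-speed bin (assumed).
wind_hours = {3.0: 1200, 5.0: 2500, 7.0: 1800, 9.0: 900, 11.0: 300}

aep_kwh = sum(rotor_power(v) * h for v, h in wind_hours.items()) / 1000.0
print(f"Estimated AEP: {aep_kwh:.0f} kWh/year")
```

Because power scales with the cube of wind speed, shifting the design point toward the site's most energetic bins is what drives the planform change with wind distribution.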
Procedia PDF Downloads 304
1843 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured experimentally and the signal obtained from the undamaged finite element model. This error function is minimised with a suitable algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. natural frequencies and the modal assurance criterion (MAC). The MAC is derived from the eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by a reduction of the stiffness of a structural member. The 'experimental' data are simulated by finite element modelling. The error due to experimental measurement is introduced into the synthetic 'experimental' data by adding random noise that follows a Gaussian distribution. The efficiency and robustness of this method are demonstrated through three examples: a truss, a beam, and a frame problem. The results show that the TLBO algorithm efficiently detects the damage location as well as the severity of damage using modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
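The MAC term of the error function above has a standard closed form, MAC(φa, φb) = |φaᵀφb|² / ((φaᵀφa)(φbᵀφb)), equal to 1 for identical mode shapes. A minimal sketch follows; the two mode-shape vectors are synthetic examples, not data from the paper.

```python
# Modal assurance criterion (MAC) between a measured mode shape and the mode
# shape of the finite element model -- the eigenvector-based term of the error
# function described above. The vectors below are synthetic illustrations.

def mac(phi_a, phi_b):
    """MAC = |phi_a . phi_b|^2 / ((phi_a . phi_a)(phi_b . phi_b)); 1 = identical shapes."""
    dot = lambda x, y: sum(xi * yi for xi, yi in zip(x, y))
    return dot(phi_a, phi_b) ** 2 / (dot(phi_a, phi_a) * dot(phi_b, phi_b))

# First bending mode of a cantilever: 'measured' vs. undamaged model (illustrative)
measured = [0.00, 0.31, 0.59, 0.81, 0.95, 1.00]
model    = [0.00, 0.30, 0.58, 0.80, 0.95, 1.00]
print(f"MAC = {mac(measured, model):.4f}")
```

In a model-updating loop, 1 − MAC (together with frequency residuals) is the quantity the optimizer drives toward zero.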
Procedia PDF Downloads 215
1842 Hospital Malnutrition and its Impact on 30-day Mortality in Hospitalized General Medicine Patients in a Tertiary Hospital in South India
Authors: Vineet Agrawal, Deepanjali S., Medha R., Subitha L.
Abstract:
Background. Hospital malnutrition is a highly prevalent issue and is known to increase morbidity, mortality, length of hospital stay, and cost of care. In India, studies on hospital malnutrition have been restricted to ICU, post-surgical, and cancer patients. We designed this study to assess the impact of hospital malnutrition on 30-day post-discharge and in-hospital mortality in patients admitted to the general medicine department, irrespective of diagnosis. Methodology. All patients aged above 18 years admitted to the medicine wards, excluding medico-legal cases, were enrolled in the study. Nutritional assessment was done within 72 h of admission using the Subjective Global Assessment (SGA), which classifies patients into three categories: severely malnourished, mildly/moderately malnourished, and normal/well-nourished. Anthropometric measurements like Body Mass Index (BMI), triceps skin-fold thickness (TSF), and mid-upper arm circumference (MUAC) were also taken. Patients were followed up during the hospital stay and 30 days after discharge through telephonic interviews, and their final diagnosis, comorbidities, and cause of death were noted. Multivariate logistic regression and Cox regression models were used to determine whether nutritional status at admission independently impacted mortality at one month. Results. The prevalence of malnourishment by SGA in our study was 67.3% among 395 hospitalized patients, of whom 155 patients (39.2%) were moderately malnourished and 111 (28.1%) were severely malnourished. Of the 395 patients, 61 (15.4%) expired, of whom 30 died in the hospital and 31 within 1 month of discharge. On univariate analysis, malnourished patients had significantly higher mortality (24.3% in 111 Cat C patients) than well-nourished patients (10.1% in 129 Cat A patients), with OR 9.17, p-value 0.007.
On multivariate logistic regression, age and a higher Charlson Comorbidity Index (CCI) were independently associated with mortality. A higher CCI indicates a higher burden of comorbidities on admission, and the CCI in the expired patient group (mean = 4.38) was significantly higher than that of the surviving cohort (mean = 2.85). Though malnutrition contributed significantly to higher mortality on univariate analysis, it was not an independent predictor of outcome on multivariate logistic regression. Length of hospitalisation was also longer in the malnourished group (mean = 9.4 d) compared to the well-nourished group (mean = 8.03 d), with a trend towards significance (p = 0.061). None of the anthropometric measurements (BMI, MUAC, or TSF) showed any association with mortality or length of hospitalisation. Inference. The results of our study highlight the issue of hospital malnutrition in medicine wards and reiterate that malnutrition contributes significantly to patient outcomes. We found that SGA performs better than anthropometric measurements in assessing under-nutrition. We are of the opinion that the heterogeneity of the study population by diagnosis was probably the primary reason why malnutrition by SGA was not found to be an independent risk factor for mortality. Strategies to identify high-risk patients at admission and to treat malnutrition in the hospital and post-discharge are needed.
Keywords: hospitalization outcome, length of hospital stay, mortality, malnutrition, subjective global assessment (SGA)
Procedia PDF Downloads 149
1841 Depression among Housewives and Professional Women in Karachi: A Comparative Study
Authors: Naheed Khan
Abstract:
A non-experimental study was conducted to evaluate the prevalence of anxiety and depression in middle-class women in Karachi, a metropolitan city of Pakistan. The Aga Khan University Anxiety and Depression Scale (AKUADS) was self-administered by a sample of 50 housewives and 50 professional women between the ages of 24 and 54 years. All the participants were at least graduates, married, had children, and were living in joint family systems. Results showed a 48% prevalence of anxiety and depression in housewives as compared to 34% in professional women. The difference in mean total AKUADS scores was significant, with a calculated t-value of 1.957 at df = 98 and α = 0.05. Two variables, profession and a higher level of education, were significantly related to the outcome. Hence, acquiring higher education and taking up a job, even a part-time one, may alleviate the symptoms of anxiety and depression in housewives. Other factors responsible for the relief of such symptoms, such as the quality of the relationship with the husband, may be investigated for both categories of women.
Keywords: anxiety, depression, housewives, professional women
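The reported statistic corresponds to an independent two-sample t-test with pooled variance, where df = n1 + n2 − 2 (two groups of 50 give the df = 98 quoted above). A minimal sketch follows; the score lists are synthetic stand-ins, not the study's AKUADS data.

```python
# Independent two-sample t-test (pooled variance), the test form implied by
# the reported df = 98 for two groups of 50. Sample scores are synthetic.
import math

def pooled_t(sample1, sample2):
    """Return (t statistic, degrees of freedom) for two independent samples."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)              # pooled variance
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

group_a = [34, 40, 29, 45, 38]  # e.g. housewives' totals (synthetic)
group_b = [28, 31, 25, 36, 30]  # e.g. professional women's totals (synthetic)
t, df = pooled_t(group_a, group_b)
print(f"t = {t:.3f}, df = {df}")
```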
Procedia PDF Downloads 416
1840 Vertebral Transverse Open Wedge Osteotomy in Correction of Thoracolumbar Kyphosis Resulting from Ankylosing Spondylitis
Authors: S. AliReza Mirghasemi, Amin Mohamadi, Zameer Hussain, Narges Rahimi Gabaran, Mir Mostafa Sadat, Shervin Rashidinia
Abstract:
In progressive cases of ankylosing spondylitis, patients develop high degrees of kyphosis leading to severe disability. Several operative techniques have been used at this stage, but little knowledge exists on the indications for and outcomes of these methods. In this study, we examined the efficacy of monosegmental transverse open wedge osteotomy of L3 in 11 patients with progressive spinal kyphosis. The average correction was 36° (20° to 42°), with no loss of correction after the operation. The average operating time was 120 minutes (100 to 130), and the mean blood loss was 1500 ml (1100 to 2000). The osteotomy corrected all patients sufficiently to allow them to see ahead, and their posture was improved. There were no fatal complications, but one patient had paraplegia after the operation.
Keywords: ankylosing spondylitis, thoracolumbar kyphosis, open wedge osteotomy, L3 transverse open wedge osteotomy
Procedia PDF Downloads 393
1839 Numerical Simulation of the Flowing of Ice Slurry in Seawater Pipe of Polar Ships
Authors: Li Xu, Huanbao Jiang, Zhenfei Huang, Lailai Zhang
Abstract:
In recent years, with global warming, the sea-ice extent of the North Arctic has decreased markedly, and the Arctic channel has attracted the attention of the shipping industry. Ice crystals in the seawater of the Arctic channel enter the seawater system of the ship with the seawater and have been found to block the seawater pipes. In serious cases, cooler failure, auxiliary machine error, and even paralysis of the ship's power system may occur. To reduce the effect of high temperature on auxiliary equipment, the seawater system uses external ice-water in the cooling cycle, so the distribution of ice crystals in the seawater pipe must be determined. As an ice slurry is a solid-liquid two-phase system, the flow process of the ice-water mixture is very complex and diverse. In this paper, the flow of ice slurry in a seawater pipe is simulated with fluid dynamics simulation software based on the k-ε turbulence model. As the ice packing fraction is a key factor affecting the distribution of ice crystals, its influence on the flow of the ice slurry is analyzed. The simulation results show that when the ice packing fraction is relatively large, the distribution of ice crystals in the flowing seawater is uneven, which increases the possibility of blockage. This provides a scientific forecasting method for the formation of ice blockages in seawater piping systems and is significant for the operating reliability of polar ships in the future.
Keywords: ice slurry, seawater pipe, ice packing fraction, numerical simulation
Procedia PDF Downloads 367
1838 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Nets (GANs) have proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use the Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN is able to learn different types of normal and heavy-tailed distributions, as well as the dependence structures of different time series, and to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GANs and of neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied to economic time series modeling and forecasting; in this regard, an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN is included at the end of the paper.
Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
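Whether the scenario set comes from Historical Simulation or from a CGAN sampler, VaR and ES are computed the same way from the loss tail. A minimal sketch follows; the scenario list is a synthetic illustration, not the paper's backtesting data.

```python
# VaR and Expected Shortfall from a set of P&L scenarios -- the risk measures
# compared above for Historical Simulation and CGAN-generated paths.
# The scenario set is synthetic, for illustration only.

def var_es(pnl, level=0.99):
    """VaR and ES at the given confidence level; losses are negative P&L values."""
    losses = sorted((-x for x in pnl), reverse=True)   # largest loss first
    k = max(1, round((1 - level) * len(losses)))       # number of tail scenarios
    var = losses[k - 1]                                # loss at the quantile
    es = sum(losses[:k]) / k                           # average loss beyond VaR
    return var, es

scenarios = [-(i + 1) for i in range(100)]             # losses of 1..100 units
print(var_es(scenarios, 0.95))                         # -> (96, 98.0)
```

The advantage claimed for CGAN lies upstream of this calculation: it can generate a richer, conditional scenario set than the fixed historical window HS relies on.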
Procedia PDF Downloads 143
1837 A Comprehensive Study of Spread Models of Wildland Fires
Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors such as weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research examines the approaches, assumptions, and findings derived from various models. Using a comparative approach, a critical analysis is provided, identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights provided by synthesizing established information.
Fire spread models provide insights into potential fire behavior, helping authorities make informed decisions about evacuation activities, allocate resources for firefighting efforts, and plan preventive actions. Wildfire spread models are also useful in post-wildfire mitigation, as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of modeling approaches customized to particular circumstances and advances our understanding of how forest fires spread. Well-known models in this field include Rothermel's wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, and cellular automata models, among others. The key characteristics these models consider include weather (factors such as wind speed and direction), topography (factors such as landscape elevation), and fuel availability (factors such as vegetation type), among others. The models discussed are physics-based, data-driven, or hybrid, some also utilizing ML techniques such as attention-based neural networks to enhance model performance. To lessen the destructive effects of forest fires, this survey aims to promote the development of more precise prediction tools and effective management techniques, and it extends its scope to address the practical needs of numerous stakeholders: access to enhanced early warning systems enables decision-makers to take prompt action, and emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling
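Of the model families surveyed above, the cellular-automata approach is the simplest to sketch: fuel cells ignite when a neighbouring cell is burning. The toy grid below ignores wind, slope, and fuel type, the very factors the real models weight, so it is an illustration of the mechanism only.

```python
# Minimal cellular-automata fire-spread sketch in the spirit of the CA models
# surveyed above. A FUEL cell ignites when any 4-neighbour is BURNING; a
# BURNING cell becomes BURNT after one step. No wind/slope/fuel weighting.

FUEL, BURNING, BURNT = 0, 1, 2

def step(grid):
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                nxt[r][c] = BURNT
            elif grid[r][c] == FUEL:
                neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                if any(0 <= i < rows and 0 <= j < cols and grid[i][j] == BURNING
                       for i, j in neighbours):
                    nxt[r][c] = BURNING
    return nxt

grid = [[FUEL] * 5 for _ in range(5)]
grid[2][2] = BURNING                      # single ignition point
for _ in range(2):
    grid = step(grid)
print(sum(cell == BURNING for row in grid for cell in row), "cells burning")
```

Real CA fire models replace the deterministic ignition rule with an ignition probability conditioned on wind direction, slope, and fuel class per cell.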
Procedia PDF Downloads 81
1836 Energy Performance Gaps in Residences: An Analysis of the Variables That Cause Energy Gaps and Their Impact
Authors: Amrutha Kishor
Abstract:
Today, with rising global warming and the depletion of resources, every industry is moving toward sustainability and energy efficiency. As part of this movement, it is nowadays obligatory for architects to play their part by creating energy predictions for their designs. But in many cases, these predictions do not reflect the real quantities of energy used by newly built buildings in operation. These discrepancies can be described as 'energy performance gaps'. This study aims to determine the underlying reasons for these gaps. Seven houses designed by Allan Joyce Architects, UK, from 1998 until 2019 were considered for this study. The data from the residents' energy bills were cross-referenced with the predictions made with the software SefairaPro and with energy reports. Results indicated that the predictions did not match the actual energy usage. An account of how energy was used in these seven houses was made by means of personal interviews. The main factors considered in the study were occupancy patterns, heating systems and usage, lighting profile and usage, and appliance profile and usage. The study found that the main reasons for energy gaps were the discrepancies between predicted and actual occupant usage and patterns of energy consumption. This study is particularly useful for energy-conscious architectural firms to fine-tune their approach to designing houses and analysing their energy performance. As the findings reveal that energy usage in homes varies based on the way residents use the space, it helps deduce the most efficient technological combinations. This information can be used to set guidelines for future policies and regulations related to energy consumption in homes.
This study can also be used by the developers of simulation software to understand how architects use their product and to drive improvements in future versions.
Keywords: architectural simulation, energy efficient design, energy performance gaps, environmental design
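The performance gap discussed above is commonly expressed as the relative difference between billed and predicted consumption. The house names and kWh figures below are illustrative assumptions, not the seven houses' actual billed data.

```python
# Energy performance gap as commonly defined: (actual - predicted) / predicted.
# All figures are illustrative; a positive gap means the house used more
# energy in operation than the design-stage prediction.

def performance_gap(predicted_kwh: float, actual_kwh: float) -> float:
    """Relative gap between billed and predicted annual consumption."""
    return (actual_kwh - predicted_kwh) / predicted_kwh

houses = {"House A": (4200, 5600), "House B": (3800, 3900), "House C": (5100, 7400)}
for name, (pred, actual) in houses.items():
    print(f"{name}: {performance_gap(pred, actual):+.1%}")
```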
Procedia PDF Downloads 118
1835 A Literature Review on the Role of Local Potential for Creative Industries
Authors: Maya Irjayanti
Abstract:
The utilization of local creativity has been a strategic investment to be expanded into a creative industry due to its significant contribution to the national gross domestic product. Many developed and developing countries look toward the creative industries as an agenda for economic growth. This study aims to identify the role of local potential for creative industries from various empirical studies. The method involves a review of peer-reviewed journal articles and conference papers addressing local potential and creative industries. The literature review analysis includes several steps: material collection, descriptive analysis, category selection, and material evaluation. Finally, the expected outcome is a clustering of creative industries based on the local potential of various nations. In addition, the findings of this study will be used as a reference for future research exploring particular areas with well-known aspects of local potential for creative industry products.
Keywords: business, creativity, local potential, local wisdom
Procedia PDF Downloads 385
1834 Effectiveness of an Unorthodox Intervention for Work-Family Interaction: A Field Experiment
Authors: Hassan Rasool
Abstract:
There is limited research in the intervention domain of work-family interaction. We identified that meditation could be effective in coping with work-family conflict and in nurturing work-family facilitation across domains. We conducted a pretest-posttest control group field experiment on a sample of sixty employees to test the effectiveness of meditation in a financial sector organization. Empirical evidence confirms that the intervention was effective in coping with work-family conflict and in nurturing facilitation across the work and home domains. The intervention also positively affected a known outcome (i.e., satisfaction at work and home) of work-family interaction. Future research perspectives on the use of unorthodox interventions in the domain of work-family interaction are also discussed.
Keywords: work family interaction, meditation, satisfaction, experiment
Procedia PDF Downloads 457
1833 Information Extraction for Short-Answer Question for the University of the Cordilleras
Authors: Thelma Palaoag, Melanie Basa, Jezreel Mark Panilo
Abstract:
Checking short-answer questions and essays, whether on paper or in electronic form, is a tiring and tedious task for teachers. Evaluating a student's output requires knowledge across a wide array of domains, and scoring the work is often a critical task. Several attempts have been made in the past few years to create automated writing assessment software, but these have received negative responses from teachers and students alike due to unreliable scoring, lack of feedback, and other shortcomings. This study aims to create an application that can check short-answer questions by incorporating information extraction. Information extraction is a subfield of Natural Language Processing (NLP) in which a chunk of text (technically known as unstructured text) is broken down to gather necessary bits of data and/or keywords (structured text) to be further analyzed or utilized by query tools. The proposed system shall extract keywords or phrases from an individual's answers and match them against a corpus of words (as defined by the instructor), which shall be the basis of evaluation of the answer. The proposed system shall also enable the teacher to provide feedback and re-evaluate the student's output for writing elements which the computer cannot fully evaluate, such as creativity and logic. Teachers can formulate, design, and check short-answer questions efficiently by defining keywords or phrases as parameters and assigning weights for checking answers. With the proposed system, the teacher's time spent checking and evaluating students' output shall be lessened, making the teacher more productive.
Keywords: information extraction, short-answer question, natural language processing, application
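The weighted keyword-matching scheme described above can be sketched in a few lines. The rubric, weights, and sample answer below are hypothetical, and a real system would add the NLP normalisation (stemming, synonym handling) that a plain substring match lacks.

```python
# Toy version of the weighted keyword-matching described above: the instructor
# defines keywords/phrases with weights, and the score is the matched share of
# the total weight. Rubric and answer are hypothetical examples.

def score_answer(answer: str, keywords: dict) -> float:
    """Return a 0-1 score: matched weight / total weight (case-insensitive match)."""
    text = answer.lower()
    total = sum(keywords.values())
    matched = sum(w for kw, w in keywords.items() if kw.lower() in text)
    return matched / total

rubric = {"natural language processing": 3, "unstructured text": 2, "keywords": 1}
ans = "NLP breaks unstructured text into keywords for analysis."
print(f"score = {score_answer(ans, rubric):.2f}")
```

Note how the abbreviation "NLP" misses the full-phrase keyword here; that is exactly the gap the proposed system's NLP layer and teacher re-evaluation step are meant to close.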
Procedia PDF Downloads 428
1832 Patient-Specific Design Optimization of Cardiovascular Grafts
Authors: Pegah Ebrahimi, Farshad Oveissi, Iman Manavi-Tehrani, Sina Naficy, David F. Fletcher, Fariba Dehghani, David S. Winlaw
Abstract:
Despite advances in modern surgery, congenital heart disease remains a medical challenge and a major cause of infant mortality. Cardiovascular prostheses are routinely used in surgical procedures to address congenital malformations, for example, establishing a pathway from the right ventricle to the pulmonary arteries in pulmonary valvar atresia. Current off-the-shelf options, including human and adult products, have limited biocompatibility and durability, and their fixed size necessitates multiple subsequent operations to upsize the conduit to match the patient's growth over their lifetime. Non-physiological blood flow is another major problem, reducing the longevity of these prostheses. These limitations call for better designs that take into account the hemodynamic and anatomical characteristics of different patients. We have integrated tissue engineering techniques with modern medical imaging and image processing tools, along with mathematical modeling, to optimize the design of cardiovascular grafts in a patient-specific manner. Computational Fluid Dynamics (CFD) analysis is performed on models constructed from each individual patient's data, allowing for improved geometrical design and better hemodynamic performance. Tissue engineering strives to provide a material that grows with the patient and mimics the durability and elasticity of the native tissue. Simulations also give insight into the performance of the tissues produced in our lab and reduce the need for costly and time-consuming methods of evaluating the grafts. We are also developing a methodology for the fabrication of the optimized designs.
Keywords: computational fluid dynamics, cardiovascular grafts, design optimization, tissue engineering
Procedia PDF Downloads 242
1831 Supply Chain Optimisation through Geographical Network Modeling
Authors: Cyrillus Prabandana
Abstract:
Supply chain optimisation requires multiple factors as considerations or constraints. These factors include, but are not limited to, demand forecasting, raw material fulfilment, production capacity, inventory level, facility locations, transportation means, and manpower availability. By knowing all the manageable factors involved and modelling the uncertainty with pre-defined percentage factors, an integrated supply chain model can be developed to manage various business scenarios. This paper analyses the use of a geographical point of view to develop an integrated supply chain network model that optimises the distribution of finished product according to forecasted demand and available supply. The supply chain optimisation model shows that a small change in one supply chain constraint can have a large impact on other constraints, and the new information from the model should support the decision-making process. The model focuses on three areas: raw material fulfilment, production capacity, and finished product transportation. To validate the model's suitability, it was implemented in a project aimed at optimising the concrete supply chain at a mining location. The high level of operational complexity and the involvement of multiple stakeholders in the concrete supply chain are believed to be sufficient to illustrate the larger scope. The implementation of this geographical supply chain network modeling resulted in an optimised concrete supply chain from raw material fulfilment to finished product distribution to each customer, as indicated by a lower percentage of missed concrete order fulfilments.
Keywords: decision making, geographical supply chain modeling, supply chain optimisation, supply chain
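The transportation sub-problem inside such a geographical network model, matching plant supply to site demand over lanes whose costs derive from distances, can be sketched with a greedy least-cost heuristic. Plant names, supplies, demands, and per-unit costs below are hypothetical, and the paper's full model also handles capacity, inventory, and uncertainty factors that this sketch omits; a production solver would use linear programming rather than this heuristic.

```python
# Least-cost heuristic for a transportation sub-problem: allocate plant supply
# to site demand, filling the cheapest lanes first. All data are hypothetical.

def least_cost_allocation(supply, demand, cost):
    """Greedy allocation {(plant, site): units}, cheapest cost-per-unit lane first."""
    supply, demand = dict(supply), dict(demand)
    plan = {}
    for (plant, site), _ in sorted(cost.items(), key=lambda kv: kv[1]):
        units = min(supply[plant], demand[site])
        if units > 0:
            plan[(plant, site)] = units
            supply[plant] -= units
            demand[site] -= units
    return plan

supply = {"Plant1": 120, "Plant2": 80}
demand = {"SiteA": 90, "SiteB": 110}
cost = {("Plant1", "SiteA"): 4, ("Plant1", "SiteB"): 6,
        ("Plant2", "SiteA"): 3, ("Plant2", "SiteB"): 5}
plan = least_cost_allocation(supply, demand, cost)
total = sum(units * cost[lane] for lane, units in plan.items())
print(plan, "total cost:", total)
```

Rerunning the allocation after perturbing one constraint (say, halving Plant2's supply) is the kind of what-if step that exposes how a small change ripples through the other constraints.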
Procedia PDF Downloads 346
1830 An Audit of Climate Change and Sustainability Teaching in Medical School
Authors: Karolina Wieczorek, Zofia Przypaśniak
Abstract:
Climate change is a rapidly growing threat to global health, and part of the responsibility to combat it lies within the healthcare sector itself, including adequate education of future medical professionals. To mitigate the consequences, the General Medical Council (GMC) has equipped medical schools with a list of outcomes regarding sustainability teaching. Students are expected to be able to analyze the impact of the healthcare sector's emissions on climate change. The delivery of the related teaching content is, however, often inadequate, and insufficient time is devoted to exploring the topics; teaching curricula lack in-depth exploration of the learning objectives. This study aims to assess the extent and characteristics of climate change and sustainability teaching in the curriculum of a chosen UK medical school (Barts and The London School of Medicine and Dentistry). It compares the data to the national average scores from the Climate Change and Sustainability Teaching (C.A.S.T.) in Medical Education Audit to draw conclusions about teaching at a regional level. This is a single-center audit of the timetabled teaching sessions in the medical course. The study looked at the academic year 2020/2021 and included a review of all non-elective, core curriculum teaching materials, including tutorials, lectures, written resources, and assignments, in all five years of the undergraduate and graduate degrees, focusing only on mandatory teaching attended by all students (excluding elective modules). The topics covered were cross-checked with the GMC "Educating for Sustainable Healthcare – Priority Learning Outcomes" as the gold standard to assess coverage of the outcomes and gaps in teaching. Quantitative data was collected in the form of time allocated for teaching, as a proxy for time spent per individual outcome.
The data was collected independently by two students (KW and ZP) who had received prior training and assessed two separate data sets to increase interrater reliability. In terms of coverage of learning outcomes, 12 out of 13 were taught (against a national average of 9.7). The school ranked sixth in the UK for time spent per topic and second for overall coverage, meaning the school teaches a broad range of topics, with some explored in more detail than others. For the first outcome, 4 out of 4 objectives were covered (average 3.5), with 47 minutes spent per outcome (average 84 minutes); for the second outcome, 5 out of 5 were covered (average 3.5), with 46 minutes spent (average 20 minutes); for the third, 3 out of 4 (average 2.5), with 10 minutes spent (average 19 minutes). A disproportionately large amount of time is spent delivering teaching regarding air pollution (respiratory illnesses), which resulted in the topic of sustainability in other specialties (musculoskeletal, ophthalmology, pediatrics, renal) being excluded from teaching. Conclusions: Currently, there is no coherent national strategy for teaching climate change topics, and as a result, an unstandardized amount of time spent on teaching and uneven coverage of objectives can be observed.Keywords: audit, climate change, sustainability, education
Procedia PDF Downloads 861829 Forecasting Impacts on Vulnerable Shorelines: Vulnerability Assessment Along the Coastal Zone of Messologi Area - Western Greece
Authors: Evangelos Tsakalos, Maria Kazantzaki, Eleni Filippaki, Yannis Bassiakos
Abstract:
The coastal areas of the Mediterranean have been extensively affected by the transgressive event that followed the Last Glacial Maximum, and many studies have been conducted regarding the stratigraphic configuration of coastal sediments around the Mediterranean. The coastal zone of the Messologi area, western Greece, consists of low-relief beaches containing low cliffs and eroded dunes, which, in combination with the rising sea level and tectonic subsidence of the area, has led to substantial coastal erosion. Coastal vulnerability assessment is a useful means of identifying areas of coastline that are vulnerable to the impacts of climate change and coastal processes, highlighting potential problem areas. Commonly, coastal vulnerability assessment takes the form of an ‘index’ that quantifies the relative vulnerability along a coastline. Here we make use of the coastal vulnerability index (CVI) methodology of Thieler and Hammar-Klose, considering geological features, coastal slope, relative sea-level change, shoreline erosion/accretion rates, mean significant wave height, and mean tide range to assess the present-day vulnerability of the coastal zone of the Messologi area. In light of this, an impact assessment is performed under three different sea level rise scenarios, and adaptation measures to control climate change events are proposed. This study contributes toward coastal zone management practices in low-lying areas with limited data availability, assisting decision-makers in adopting the best adaptation options to overcome sea level rise impacts on vulnerable areas similar to the coastal zone of Messologi.Keywords: coastal vulnerability index, coastal erosion, sea level rise, GIS
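The Thieler and Hammar-Klose CVI combines the six ranked variables as the square root of their product divided by the number of variables. A minimal sketch of this calculation (the shoreline segment ranks below are hypothetical, for illustration only):

```python
import math

def cvi(ranks):
    """Coastal Vulnerability Index (Thieler & Hammar-Klose): the square
    root of the product of the ranked variables divided by the number of
    variables. Each variable (geology, coastal slope, relative sea-level
    change, shoreline erosion/accretion rate, mean tide range, mean
    significant wave height) is ranked from 1 (very low vulnerability)
    to 5 (very high vulnerability)."""
    product = 1
    for r in ranks:
        product *= r
    return math.sqrt(product / len(ranks))

# Hypothetical shoreline segment ranked on the six CVI variables
segment_ranks = [4, 5, 3, 4, 2, 3]
print(round(cvi(segment_ranks), 2))  # sqrt(1440 / 6) ≈ 15.49
```

Segments are then typically binned into vulnerability classes (e.g. quartiles of the CVI distribution along the coast) to produce the final vulnerability map.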
Procedia PDF Downloads 1761828 A Top-down vs a Bottom-up Approach on Lower Extremity Motor Recovery and Balance Following Acute Stroke: A Randomized Clinical Trial
Authors: Vijaya Kumar, Vidayasagar Pagilla, Abraham Joshua, Rakshith Kedambadi, Prasanna Mithra
Abstract:
Background: Post-stroke rehabilitation aims to accelerate optimal sensorimotor recovery and functional gain and to reduce long-term dependency. Intensive physical therapy interventions can enhance this recovery, as experience-dependent neural plastic changes act either directly at cortical neural networks or at the distal peripheral level (muscular components). Neuromuscular electrical stimulation (NMES), a traditional bottom-up approach, and mirror therapy (MT), a relatively new top-down approach, have been found to be effective adjuvant treatment methods for lower extremity motor and functional recovery in stroke rehabilitation. However, there is a scarcity of evidence comparing their therapeutic gain in stroke recovery. Aim: To compare the efficacy of neuromuscular electrical stimulation (NMES) and mirror therapy (MT), addressed to lower extremity motor recovery and balance, in the very early phase of post-stroke rehabilitation. Design: Observer-blinded randomized clinical trial. Setting: Neurorehabilitation Unit, Department of Physical Therapy, tertiary care hospitals. Subjects: 32 acute stroke subjects with a first episode of unilateral stroke with hemiparesis, referred for rehabilitation (onset < 3 weeks), with Brunnstrom lower extremity recovery stage ≥3 and MMSE score above 24, were randomized into two groups [Group A-NMES and Group B-MT]. Interventions: Both groups received an eclectic approach to remediate lower extremity recovery, which included treatment components of the Rood, Bobath and motor learning approaches, for 30 minutes a day for 6 days. Following this, Group A (N=16) received 30 minutes of surface NMES training for six major paretic muscle groups (gluteus maximus and medius, quadriceps, hamstrings, tibialis anterior and gastrocnemius). Group B (N=16) was administered 30 minutes of mirror therapy sessions to facilitate lower extremity motor recovery. 
Outcome measures: Lower extremity motor recovery, balance and activities of daily living (ADLs) were measured by the Fugl-Meyer Assessment (FMA-LE), Berg Balance Scale (BBS) and Barthel Index (BI) before and after the intervention. Results: Pre-post analysis across time revealed statistically significant improvement (p < 0.001) in all the outcome variables for either group. All parameters of the NMES group had greater change scores compared to the MT group, as follows: FMA-LE (25.12±3.01 vs. 23.31±2.38), BBS (35.12±4.61 vs. 34.68±5.42) and BI (40.00±10.32 vs. 37.18±7.73). Between-group comparison of pre-post values showed no significance for FMA-LE (p=0.09), BBS (p=0.80) and BI (p=0.39), respectively. Conclusion: Though both groups showed significant improvement (pre to post intervention), neither was superior to the other in lower extremity motor recovery and balance among acute stroke subjects. We conclude that an eclectic approach is an effective treatment irrespective of whether NMES or MT is used as an adjunct.Keywords: balance, motor recovery, mirror therapy, neuromuscular electrical stimulation, stroke
Procedia PDF Downloads 2811827 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana
Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet
Abstract:
The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to the integration of such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to ensure more reliable and profitable engagements related to their future power delivery. Even though a diversity of probabilistic approaches has been proposed in the literature with promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and probabilistic forecasts generated with two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the utility of probabilistic methods and their capability to model progressively increasing uncertainty. Even though probabilistic approaches are increasingly developed in the recent literature, the results obtained through a case study show that deterministic forecasts still provide the best performance if accurate, ensuring a gain of 14% on final profits compared to the average performance of probabilistic models conditioned on the same forecasts. As the accuracy of deterministic forecasts progressively decreases, probabilistic approaches become competitive options, until they completely outperform deterministic forecasts when these are very inaccurate, generating 73% more profits in the case considered compared to the deterministic approach.Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems
Procedia PDF Downloads 871826 Detecting Financial Bubbles Using Gap between Common Stocks and Preferred Stocks
Authors: Changju Lee, Seungmo Ku, Sondo Kim, Woojin Chang
Abstract:
How can financial bubbles be detected? Addressing this simple question has been the focus of a vast amount of empirical research spanning almost half a century. However, financial bubbles are hard to observe and vary over time, so more research is needed in this area. In this paper, we used the abnormal difference between common stock prices and the corresponding preferred stock prices to explain financial bubbles. First, we proposed the ‘W-index’, which indicates the spread between common stocks and the corresponding preferred stocks in the stock market. Second, to prove that this ‘W-index’ is valid for measuring financial bubbles, we showed that there is an inverse relationship between the ‘W-index’ and the S&P 500 rate of return. Specifically, our hypothesis is that when the ‘W-index’ is comparably higher than in other periods, financial bubbles have built up in the stock market, and vice versa; according to our hypothesis, if investors made long-term investments when the ‘W-index’ was high, they would have a negative rate of return; however, if investors made long-term investments when the ‘W-index’ was low, they would have a positive rate of return. By comparing correlation values and adjusted R-squared values between the W-index and S&P 500 return, the VIX index and S&P 500 return, and the TED index and S&P 500 return, we showed that only the W-index has a significant relationship with the S&P 500 rate of return. In addition, we determined how long investors should hold their investment positions with regard to the effect of financial bubbles. Using this W-index, investors can measure financial bubbles in the market and invest with low risk.Keywords: financial bubble detection, future return, forecasting, pairs trading, preferred stocks
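The abstract does not give the exact construction of the W-index, but its core ingredients — a common/preferred price spread and its correlation with subsequent market returns — can be sketched as follows (the spread definition, prices and returns below are illustrative assumptions, not the paper's data):

```python
def w_index(common_prices, preferred_prices):
    """Illustrative spread index between common and preferred share
    prices of the same issuers: the average relative premium of the
    common share over the preferred share. (The paper's exact
    construction is not specified in the abstract.)"""
    spreads = [(c - p) / p for c, p in zip(common_prices, preferred_prices)]
    return sum(spreads) / len(spreads)

def pearson_corr(xs, ys):
    """Pearson correlation, used to test the hypothesized inverse
    relationship between the index and subsequent market returns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical monthly observations: high W-index preceding low returns
w = [0.42, 0.55, 0.61, 0.30, 0.25]
ret = [-0.02, -0.05, -0.08, 0.03, 0.04]
print(round(pearson_corr(w, ret), 2))  # strongly negative, as hypothesized
```

On real data the same correlation would be computed against forward S&P 500 returns over the intended holding horizon.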
Procedia PDF Downloads 3681825 Auto Calibration and Optimization of Large-Scale Water Resources Systems
Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari
Abstract:
Water resource systems modelling has constantly been a challenge throughout history. As methodological innovation evolves alongside computer science, researchers are likely to confront ever larger and more complex water resources systems due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme has been applied to Gilan’s large-scale water resource model using mathematical programming. The water resource model’s calibration was developed in order to attune unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with model results. The calibration results are reasonable, presenting a rational insight into the system. Subsequently, the optimized, previously unknown parameters were used in a basin-scale linear optimization model with the ability to evaluate the system’s performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage in the reduced-inflow scenario.Keywords: auto-calibration, Gilan, large-scale water resources, simulation
Procedia PDF Downloads 3351824 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group
Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb
Abstract:
Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is the area of mobile and fixed broadband services. This study is of dual importance, both academically and practically. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers’ service experience for a new area, rather than depending on the traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth and peace of mind. In addition, it gives a scientific explanation for this relationship; this research thus fills a gap, as no previous work has correlated or explained these relations using such an integrated model, and this is the first time such a modified and integrated model has been applied in the telecom field. Practically, this research gives insights to marketers and practitioners to improve customer loyalty by evolving the experience quality of broadband customers, which is interpreted into suggested outcomes: purchase, commitment, repeat purchase and word-of-mouth; this approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation occurring in loyalty is explained by the model. The researcher also measured the net promoter score and gave an explanation of the results. 
Furthermore, customers’ priorities for broadband services were assessed. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group, and that they be applied in the same industry, especially in developing countries with similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, making marketing more reliable, as managers can relate investments in service experience directly to the performance closest to income, for instance repurchase behavior, positive word of mouth and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.Keywords: broadband services, customer experience quality, loyalty, net promoter score
Procedia PDF Downloads 2661823 Patient-Specific Modeling Algorithm for Medical Data Based on AUC
Authors: Guilherme Ribeiro, Alexandre Oliveira, Antonio Ferreira, Shyam Visweswaran, Gregory Cooper
Abstract:
Patient-specific models are instance-based learning algorithms that take advantage of the particular features of the patient case at hand to predict an outcome. We introduce two patient-specific algorithms based on the decision tree paradigm that use the AUC as a metric to select an attribute. We apply the patient-specific algorithms to predict outcomes in several datasets, including medical datasets. Compared to the entropy-based patient-specific decision path (PSDP) and CART methods, the AUC-based patient-specific decision path models performed equivalently on area under the ROC curve (AUC). Our results provide support for patient-specific methods being a promising approach for making clinical predictions.Keywords: instance-based approach, area under the ROC curve, patient-specific decision path, clinical predictions
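The AUC-as-split-metric idea the abstract describes can be illustrated with a small sketch: the AUC is computed from the Mann-Whitney statistic, and the attribute whose values best rank the outcome is selected (the patient records and outcomes below are hypothetical, and the full decision-path construction is omitted):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive case receives a higher
    score than a randomly chosen negative case (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_attribute(records, outcomes):
    """Select the attribute with the highest AUC against the outcome --
    the selection criterion used here in place of entropy."""
    n_attrs = len(records[0])
    return max(range(n_attrs),
               key=lambda j: auc([row[j] for row in records], outcomes))

# Hypothetical patient records: attribute 1 ranks the outcome perfectly
records = [(1, 9), (2, 8), (3, 2), (4, 1)]
outcomes = [1, 1, 0, 0]
print(best_attribute(records, outcomes))  # 1
```

In a patient-specific model, this selection would be repeated along the decision path, conditioned on the attribute values of the patient case at hand.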
Procedia PDF Downloads 4781822 Modeling of Virtual Power Plant
Authors: Muhammad Fanseem E. M., Rama Satya Satish Kumar, Indrajeet Bhausaheb Bhavar, Deepak M.
Abstract:
Keeping the right balance of electricity between the supply and demand sides of the grid is one of the most important objectives of electrical grid operation. Power generation and demand forecasting are the core of power management and generation scheduling. Large, centralized generating units were used in the construction of conventional power systems in the past. A certain level of balance was possible since generation kept up with power demand. However, integrating renewable energy sources into power networks has proven to be a difficult challenge due to their intermittent nature. The power imbalance caused by rising demand and peak loads is negatively affecting power quality and dependability. Demand-side management and demand response were among the proposed solutions, keeping generation the same but altering, rescheduling, or completely shedding the load or demand. However, shedding or rescheduling the load is not an efficient way. Here lies the significance of virtual power plants. The virtual power plant organically integrates distributed generation, dispatchable load, and distributed energy storage by using complementary control approaches and communication technologies. This would eventually increase the utilization rate and financial advantages of distributed energy resources. Most of the writing on virtual power plant models has ignored technical limitations, with modeling done from a financial or commercial viewpoint. Therefore, this paper aims to address the modeling intricacies of VPPs and their technical limitations, shedding light on a holistic understanding of this innovative power management approach.Keywords: cost optimization, distributed energy resources, dynamic modeling, model quality tests, power system modeling
Procedia PDF Downloads 621821 Automatic Detection of Traffic Stop Locations Using GPS Data
Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell
Abstract:
Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. Delaunay triangulation is used to perform this assessment in the clustering phase. While most existing clustering algorithms require assumptions about the data distribution, the effectiveness of Delaunay triangulation relies on triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier is then applied to distinguish highway congestion from signalized stop points, using the length of the cluster to identify congestion. The proposed framework shows high accuracy, identifying the stop positions and congestion points in around 99.2% of trials. We show that it is possible, using limited GPS data, to make this distinction with high accuracy.Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data
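The stoppage-point step of the framework — flagging waypoints whose traveled distance from the previous fix falls below a threshold — can be sketched as follows. The track and the 5 m threshold are illustrative assumptions, and the subsequent Delaunay-based clustering and classification stages are omitted:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def stop_points(track, threshold_m=5.0):
    """Return indices of waypoints whose traveled distance from the
    previous fix is below the threshold -- the 'stoppage point' step
    used before clustering. The threshold value is an assumption."""
    return [i for i in range(1, len(track))
            if haversine_m(track[i - 1], track[i]) < threshold_m]

# Hypothetical 1 Hz track: the vehicle halts around the third fix
track = [(42.33, -83.04), (42.3303, -83.04),
         (42.3303, -83.04), (42.33031, -83.04)]
print(stop_points(track))  # [2, 3]
```

The flagged indices would then be clustered, and cluster length used to separate long congestion queues from compact signalized-intersection stops.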
Procedia PDF Downloads 2751820 Outcome Evaluation of a Blended-Learning Mental Health Training Course in South African Public Health Facilities
Authors: F. Slaven, M. Uys, Y. Erasmus
Abstract:
The South African National Mental Health Education Programme (SANMHEP) was a National Department of Health (NDoH) initiative to strengthen mental health services in South Africa in collaboration with the Foundation for Professional Development (FPD), SANOFI and the various provincial departments of health. The programme was implemented against the backdrop of a number of challenges in the management of mental health in the country related to staff shortages and infrastructure, the intersection of mental health with the growing burden of non-communicable diseases and various forms of violence, and challenges around substance abuse and its relationship with mental health. The Mental Health Care Act (No. 17 of 2002) prescribes that mental health should be integrated into general health services including primary, secondary and tertiary levels to improve access to services and reduce stigma associated with mental illness. In order for the provisions of the Act to become a reality, and for the journey of mental health patients through the system to improve, sufficient and skilled health care providers are critical. SANMHEP specifically targeted Medical Doctors and Professional Nurses working within the facilities that are listed to conduct 72-hour assessments, as well as District Hospitals. The aim of the programme was to improve the clinical diagnosis and management of mental disorders/conditions and the understanding of and compliance with the Mental Health Care Act and related Regulations and Guidelines in the care, treatment and rehabilitation of mental health care users. The course used a blended-learning approach and trained 1 120 health care providers through 36 workshops between February and November 2019. Of those trained, 689 (61.52%) were Professional Nurses, 337 (30.09%) were Medical Doctors, and 91 (8.13%) indicated their occupation as ‘other’ (of these more than half were psychologists). 
The pre- and post-evaluation of the face-to-face training sessions indicated a marked improvement in knowledge and confidence level scores (both clinical and legislative) in the care, treatment and rehabilitation of mental health care users by participants in all the training sessions. There was a marked improvement in the knowledge and confidence of participants in performing certain mental health activities (on average the ratings increased by 2.72, or 27%) and in managing certain mental health conditions (on average the ratings increased by 2.55, or 25%). The course also required that participants obtain 70% or higher in their formal assessments as part of the online component. The 337 participants who completed and passed the course scored 90% on average. This illustrates that when participants attempted and completed the course, they did very well. To further assess the effect of the course on the knowledge and behaviour of the trained mental health care practitioners, a mixed-method outcome evaluation is currently underway, consisting of a survey with participants three months after completion, follow-up interviews with participants, and key informant interviews with department of health officials and course facilitators. This will enable a more detailed assessment of the impact of the training on participants' perceived ability to manage and treat mental health patients.Keywords: mental health, public health facilities, South Africa, training
Procedia PDF Downloads 1191819 Hourly Solar Radiations Predictions for Anticipatory Control of Electrically Heated Floor: Use of Online Weather Conditions Forecast
Authors: Helene Thieblemont, Fariborz Haghighat
Abstract:
Energy storage systems play a crucial role in decreasing building energy consumption during peak periods and expanding the use of renewable energies in buildings. To provide high building thermal performance, the energy storage system has to be properly controlled to ensure good energy performance while maintaining satisfactory thermal comfort for the building’s occupants. In the case of passive discharge storage, the required amount of energy must be defined in advance to avoid overheating the building. Consequently, anticipatory supervisory control strategies have been developed, forecasting future energy demand and production in order to coordinate systems. Anticipatory supervisory control strategies are based on predictions, mainly of the weather forecast. However, while the forecast hourly outdoor temperature can be found online with high accuracy, solar radiation predictions are most of the time not available online. To estimate them, this paper proposes an advanced approach based on the forecast of weather conditions. Several methods of correlating hourly weather condition forecasts with real hourly solar radiation are compared. Results show that using weather condition forecasts allows next-day solar radiation to be estimated with acceptable accuracy. Moreover, this technique yields hourly data that may be used in building models. As a result, this solar radiation prediction model may help to implement model-based controllers such as Model Predictive Control.Keywords: anticipatory control, model predictive control, solar radiation forecast, thermal storage
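One simple way to correlate a forecast weather variable with measured radiation is a least-squares fit that is then used to predict next-day hourly radiation. A minimal sketch, assuming cloud cover fraction as the forecast variable and using hypothetical training pairs (the paper compares several such methods; this illustrates only the simplest):

```python
def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x, here relating a forecast
    weather variable (cloud cover fraction) to measured hourly global
    solar radiation (W/m^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical training pairs: forecast cloud cover vs. measured radiation
cloud = [0.0, 0.25, 0.5, 0.75, 1.0]
radiation = [800.0, 650.0, 500.0, 350.0, 200.0]
a, b = fit_linear(cloud, radiation)

# Predict tomorrow's hourly radiation for a forecast cloud cover of 0.4
print(a + b * 0.4)  # ≈ 560.0 on this perfectly linear toy data
```

In practice one regression would be fitted per hour of day (or per weather condition category), since the cloud-radiation relationship varies with solar elevation.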
Procedia PDF Downloads 2711818 Applications of Out-of-Sequence Thrust Movement for Earthquake Mitigation: A Review
Authors: Rajkumar Ghosh
Abstract:
The study presents an overview of the many uses of and approaches to estimating out-of-sequence thrust movement in earthquake mitigation. It investigates how understanding and forecasting thrust movement during seismic occurrences can contribute to effective earthquake mitigation measures. The review begins by discussing out-of-sequence thrust movement and its importance in earthquake mitigation strategies. It explores how typical techniques for estimating thrust movement may not capture the full complexity of seismic occurrences and emphasizes the benefits of including out-of-sequence data in the analysis. The paper provides a thorough review of existing research and studies on out-of-sequence thrust movement estimates for earthquake mitigation, demonstrating how to estimate out-of-sequence thrust movement using multiple data sources such as GPS measurements, satellite imagery, and seismic recordings. The study also examines the use of out-of-sequence thrust movement estimates in earthquake mitigation measures, investigating how precise calculation of thrust movement may help improve structural design, analyse infrastructure risk, and develop early warning systems, and highlighting the potential advantages of using out-of-sequence data in these applications to improve the efficiency of earthquake mitigation techniques. Finally, it considers the difficulties and limits of estimating out-of-sequence thrust movement for earthquake mitigation, addressing data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and increase the accuracy and reliability of out-of-sequence thrust movement estimates, the authors recommend topics for additional study and improvement. 
The study is a helpful resource for seismic monitoring and earthquake risk assessment researchers, engineers, and policymakers, supporting innovations in earthquake mitigation measures based on a better understanding of thrust movement dynamics.Keywords: earthquake mitigation, out-of-sequence thrust, satellite imagery, seismic recordings, GPS measurements
Procedia PDF Downloads 841817 Exchange Rate Variation and Balance of Payments: The Nigerian Experience (1970-2012)
Authors: Vitus Onyebuchim Onyemailu, Olive Obianuju Okalibe
Abstract:
The study examined the relationship between exchange rate variations and the balance of payments in Nigeria from 1970 to 2012. Using time-series econometric measures such as Granger causality and ordinary least squares (OLS), the study found that exchange rate movements, especially the depreciation of the naira, did not contribute significantly to the balance of payments over the years studied. The Granger results conform to the Marshall-Lerner short- and long-run propositions that exchange rate devaluation enhances the balance of payments. On disaggregation, the exchange rate Granger-causes the current and capital account balances given the Nigerian data from 1970 to 2012. Overall, in the long-run OLS regression analysis, with the exchange rate in semi-log functional form, exchange rate variation did not record a significant effect in the balance of payments equation. The same held for the current (trade) account balance, which does not match the Marshall-Lerner proposition. The capital account balance, in contrast, showed a significant impact of exchange rate variability. Finally, in the exchange rate determination equation, where many fundamentals were considered, including the lagged exchange rate, the lagged exchange rate recorded a positive and significant influence on the present exchange rate. This means that players in the financial markets usually outplay the authorities’ policy stances through their speculative tendencies. The work therefore recommends that the authorities provide an enabling environment in which the production of goods and services can flourish, by providing infrastructure and developing science and technology, so as to take advantage of the steady devaluation of the currency. When this is done, Nigeria would be able to compete with the rest of the world.Keywords: exchange rate variation, balance of payments, current account, capital account, Marshall-Lerner hypothesis
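The intuition behind the Granger causality test used in such studies — does adding the lagged exchange rate improve a prediction of the balance of payments beyond the balance's own lag? — can be sketched by comparing restricted and unrestricted one-lag regressions. The series below are synthetic, and this is a rough diagnostic, not the formal F-test the paper applies:

```python
def rss_simple(y, x):
    """Residual sum of squares of the OLS fit y = a + b*x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))

def rss_two(y, x, z):
    """RSS of y = a + b*x + c*z, solved after demeaning via the
    2x2 normal-equation system."""
    n = len(y)
    mx, my, mz = sum(x) / n, sum(y) / n, sum(z) / n
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    dz = [zi - mz for zi in z]
    sxx = sum(u * u for u in dx)
    szz = sum(u * u for u in dz)
    sxz = sum(u * v for u, v in zip(dx, dz))
    sxy = sum(u * v for u, v in zip(dx, dy))
    szy = sum(u * v for u, v in zip(dz, dy))
    det = sxx * szz - sxz ** 2
    b = (szz * sxy - sxz * szy) / det
    c = (sxx * szy - sxz * sxy) / det
    return sum((yi - b * xi - c * zi) ** 2
               for xi, yi, zi in zip(dx, dy, dz))

def granger_gain(bop, fx):
    """Fraction by which the lagged exchange rate reduces the RSS of a
    one-lag prediction of the balance of payments -- a Granger-style
    diagnostic (the formal test uses an F-statistic on this reduction)."""
    y, x, z = bop[1:], bop[:-1], fx[:-1]
    return (rss_simple(y, x) - rss_two(y, x, z)) / rss_simple(y, x)

# Synthetic series where bop depends exactly on its own lag and the
# lagged exchange rate, so the gain is numerically close to 1
fx = [1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0]
bop = [10.0]
for t in range(1, len(fx)):
    bop.append(0.5 * bop[t - 1] - 2.0 * fx[t - 1])
print(round(granger_gain(bop, fx), 3))  # close to 1.0: fx Granger-causes bop
```

A gain near zero would indicate that the lagged exchange rate carries no predictive content for the balance of payments beyond its own history.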
Procedia PDF Downloads 397