Search results for: sequential confidence estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3397

187 Survey of Indoor Radon/Thoron Concentrations in High Lung Cancer Incidence Area in India

Authors: Zoliana Bawitlung, P. C. Rohmingliana, L. Z. Chhangte, Remlal Siama, Hming Chungnunga, Vanram Lawma, L. Hnamte, B. K. Sahoo, B. K. Sapra, J. Malsawma

Abstract:

Mizoram state has the highest lung cancer incidence rate in India due to its high level of consumption of tobacco and tobacco products, compounded by local food habits. While smoking is mainly responsible for this incidence, the effect of inhaling indoor radon gas cannot be discounted, as the hazardous nature of this radioactive gas and its progeny for human populations has been well established worldwide; radiation damage to bronchial cells makes radon the second leading cause of lung cancer after smoking. It is also known that the effect of radiation, however small the concentration, cannot be neglected, as it can increase the risk of cancer incidence. Hence, estimation of indoor radon concentration is important to provide a useful reference against radiation effects, to establish safety measures, and to create a baseline for further case-control studies. Indoor radon/thoron concentrations in Mizoram were measured in 41 dwellings selected on the basis of spot gamma background radiation and the construction type of the houses during 2015-2016. The dwellings were monitored for one year, in cycles of 4 months to capture seasonal variations, for the indoor concentration of radon gas and its progeny, outdoor gamma dose, and indoor gamma dose. A time-integrated method using Solid State Nuclear Track Detector (SSNTD) based single-entry pin-hole dosimeters was used for measurement of indoor radon/thoron concentration. Gamma dose measurements, indoor as well as outdoor, were carried out using Geiger-Muller survey meters. Seasonal variation of indoor radon/thoron concentration was monitored. The results show that the annual average radon concentration varied from 54.07 – 144.72 Bq/m³ with an average of 90.20 Bq/m³, and the annual average thoron concentration varied from 17.39 – 54.19 Bq/m³ with an average of 35.91 Bq/m³, both below the permissible limit. The spot survey of gamma background radiation levels varied between 9 and 24 µR/h inside and outside the dwellings throughout Mizoram, all within acceptable limits. From the above results, there is no direct indication that radon/thoron is responsible for the high lung cancer incidence in the area. In order to find epidemiological evidence linking natural radiation to the high cancer incidence in the area, one would need to conduct a case-control study, which is beyond the scope of this work. However, the measured data will provide a baseline for further studies.

Keywords: background gamma radiation, indoor radon/thoron, lung cancer, seasonal variation

Procedia PDF Downloads 143
186 Digitization and Economic Growth in Africa: The Role of Financial Sector Development

Authors: Abdul Ganiyu Iddrisu, Bei Chen

Abstract:

Digitization is the process of transforming analog material into digital form, especially for storage and use in a computer. Significant development of information and communication technology (ICT) over the past years has encouraged many researchers to investigate its contribution to promoting economic growth and reducing poverty. Yet the compelling empirical evidence on the effects of digitization on economic growth remains weak, particularly in Africa. This is because extant studies that explicitly evaluate the digitization and economic growth nexus are mostly reports and desk reviews. This points to an empirical knowledge gap in the literature. Hypothetically, digitization influences financial sector development, which in turn influences economic growth. Digitization has changed the financial sector and its operating environment. Obstacles to accessing finance, for instance physical distance, minimum balance requirements, and low income flows, among others, can be circumvented. Savings have increased, micro-savers have opened bank accounts, and banks are now able to price short-term loans. This has the potential to develop the financial sector. However, empirical evidence on the digitization-financial development nexus is scarce. On the other hand, a number of studies have maintained that financial sector development greatly influences the growth of economies. We therefore argue that financial sector development is one of the transmission mechanisms through which digitization affects economic growth. Employing macro-country-level data from African countries and using fixed effects, random effects and Hausman-Taylor estimation approaches, this paper contributes to the literature by analysing economic growth in Africa, focusing on the role of digitization and financial sector development. First, we assess how digitization influences financial sector development in Africa. From an economic policy perspective, it is important to identify the digitization determinants of financial sector development so that action can be taken to reduce the economic shocks associated with financial sector distortions. This nexus is rarely examined empirically in the literature. Secondly, we examine the effect of financial sector development, proxied by domestic credit to the private sector and stock market capitalization as a percentage of GDP, on economic growth. Digitization is represented by the volume of digital/ICT equipment imported, and GDP growth is used to proxy economic growth. Finally, we examine the effect of digitization on economic growth in the light of financial sector development. The following key results were found: first, digitalization propels financial sector development in Africa. Second, financial sector development enhances economic growth. Finally, contrary to our expectation, the results also indicate that digitalization conditioned on financial sector development tends to reduce economic growth in Africa. However, results for the net effects suggest that digitalization, overall, improves economic growth in Africa. We therefore conclude that digitalization in Africa not only develops the financial sector but also contributes unconditionally to the growth of the continent's economies.
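
As an illustration of the panel estimation strategy described above, the sketch below sets up fixed-effects and random-intercept regressions with statsmodels. The file name and column names (country, gdp_growth, digital_imports, credit_gdp, stockmkt_gdp) are assumptions, the Hausman-Taylor estimator is not shown, and this is a minimal sketch rather than the authors' actual specification.

```python
# Minimal panel-regression sketch with hypothetical column names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("africa_panel.csv")  # hypothetical long-format country-year panel

# Fixed effects via country dummies (within-style estimator), clustered SEs
fe = smf.ols(
    "gdp_growth ~ digital_imports + credit_gdp + stockmkt_gdp + C(country)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

# Random effects approximated with a random intercept per country
re = smf.mixedlm(
    "gdp_growth ~ digital_imports + credit_gdp + stockmkt_gdp",
    data=df,
    groups=df["country"],
).fit()

# Interaction term: digitization conditioned on financial sector development
inter = smf.ols(
    "gdp_growth ~ digital_imports * credit_gdp + stockmkt_gdp + C(country)",
    data=df,
).fit()

print(fe.params, re.params, inter.params, sep="\n")
```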

Keywords: digitalization, financial sector development, Africa, economic growth

Procedia PDF Downloads 140
185 The Sense of Recognition of Muslim Women in Western Academia

Authors: Naima Mohammadi

Abstract:

The present paper critically reports on the experience of Iranian international students at a large public university in Italy. Although the most sizeable diaspora of Iranians dates back to the 1979 revolution, a large wave of Iranian female students travelled abroad after the Iranian Green Movement (2009) due to the intensification of gender discrimination and Islamization. To explore the experience of Iranian female students at an Italian public university, two complementary methods were adopted: a focus group and individual interviews. Focus groups yield detailed collective conversations and provide researchers with an opportunity to observe the interaction between participants, rather than between participant and researcher, which generates data. Semi-structured interviews allow participants to share their stories in their own words and speak about personal experiences and opinions. Research participants were invited to participate through a public call in a Telegram group of Iranian students. Theoretical and purposive sampling was applied to select participants. All participants were assured that full anonymity would be ensured, and they consented to take part in the research. A two-hour focus group was held in English, with some participants attending in person and others online. They were asked to share their motivations for studying in Italy and talk about their experiences both within and outside the university context. The individual interviews lasted from 45 to 60 minutes each and were mostly carried out online and in Farsi. The focus group consisted of 8 Iranian female post-graduate students. In analyzing the data, a blended approach was adopted, with a combination of deductive and inductive coding. According to the research findings, although 9/11 was the beginning of the West's challenges against Muslims, the nuclear threats of the Islamic regime prompted the toughest international sanctions against Iranians as a nation across the world. Accordingly, carrying an Iranian identity contributes to social, political, and economic exclusion. The findings show that geopolitical factors such as international sanctions and Islamophobia, and a lack of reciprocity in terms of recognition, have created a sense of stigmatization for veiled and unveiled Iranian female students, who form the largest group of 'non-European Muslim international students' enrolled in Italian universities. Participants addressed how their nationality has devalued their public image and negatively impacted their self-confidence and self-realization in academia. They highlighted the experience of an unwelcoming atmosphere created by different groups of people and institutions, such as receiving marked student badges, rejected bank account requests, failed visa processes, secondary security screening selection, and the hyper-visibility of veiled students. This study corroborates the need for institutions to pay attention to geopolitical factors and religious diversity in student recruitment and to provide support mechanisms and access to basic rights. Accordingly, it is suggested that Higher Education Institutions (HEIs) have a social and moral responsibility to address the discrimination and the social and academic exclusion of Iranian students.

Keywords: Iranian diaspora, female students, recognition theory, inclusive university

Procedia PDF Downloads 73
184 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus group studies often cannot reveal important features of how the information read in text messages is interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text. The mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during text reading can form the basis for this objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye movement parameters to estimate visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during text reading. We revealed reliable interrelations between perception of the information and the dynamics of psychophysiological parameters during reading of the text in commercials. Eye movement parameters reflected the difficulties arising in respondents while perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and type of commercial. EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model to estimate comprehension of the commercial text on a percentage scale based on all of the identified markers.
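
A minimal sketch of the correlation analysis described above is given below, assuming a hypothetical per-respondent feature table (fixation duration and regressions from eye tracking, alpha-band power from EEG, SGR amplitude, and a comprehension score); it is not the authors' actual pipeline.

```python
# Correlate multimodal psychophysiological features with comprehension scores.
import pandas as pd
from scipy import stats

features = pd.read_csv("reading_features.csv")  # hypothetical per-respondent table

for col in ["fixation_ms", "regressions", "alpha_power", "sgr_amplitude"]:
    r, p = stats.pearsonr(features[col], features["comprehension"])
    print(f"{col}: r={r:.2f}, p={p:.3f}")

# Simple contrast between low and high comprehension groups (median split)
median = features["comprehension"].median()
groups = features.groupby(features["comprehension"] > median)
print(groups[["fixation_ms", "alpha_power", "sgr_amplitude"]].mean())
```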

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 166
183 Anaerobic Co-Digestion of Pressmud with Bagasse and Animal Waste for Biogas Production Potential

Authors: Samita Sondhi, Sachin Kumar, Chirag Chopra

Abstract:

The increase in population has resulted in excessive feedstock production, which has in turn led to the accumulation of a large amount of waste from different sources such as crop residues, industrial waste and municipal solid waste. This situation has raised the problem of waste disposal. A parallel problem, the depletion of natural fossil fuel resources, has led to the search for alternative sources of energy from the waste of different industries, so that the two issues can be resolved concurrently. Biogas is a carbon-neutral fuel with applications in transportation, heating and power generation. India is a nation with an agriculture-based economy, and agro-residues are a significant source of organic waste. The sugarcane industry, the second largest agro-based industry, produces a large quantity of sugar along with waste by-products such as bagasse, press mud, vinasse and wastewater. Currently, no efficient disposal methods for these wastes have been adopted at large scale. From a sustainability perspective, anaerobic digestion can be considered as a method to treat such organic wastes. Press mud is a lignocellulosic biomass and is not well suited to mono-digestion because of its complexity. Prior investigations indicated that it has potential for biogas production, but because of its biological and elemental complexity, mono-digestion was not successful. Owing to the imbalance in its C/N ratio and the presence of wax, it should be utilized together with another fibrous material so that it can be digested properly under suitable conditions. In the first batch experiment, biogas production from mono-digestion of press mud was low. Co-digestion of press mud with bagasse, which has the desired C/N ratio, will therefore be performed to optimize the mixing ratio for maximum biogas production from press mud. In addition, with respect to sustainability, the main considerations are the monetary value of the products and environmental concerns. The work is designed such that the waste from the sugar industry will be digested for maximum biogas generation, and the digestate remaining after digestion will be characterized for its use as a bio-fertilizer for soil conditioning. Given the effectiveness demonstrated by the studied mono-digestion and co-digestion setups, this approach can be considered a viable alternative for lignocellulosic waste disposal and for agricultural applications. The biogas produced from press mud can be used either for power generation or for transportation. In addition, the work initiated towards developing waste disposal routes for energy production will demonstrate the economic sustainability of the process.
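
As a back-of-envelope illustration of balancing the C/N ratio in press mud and bagasse co-digestion, the sketch below computes the blend C/N from assumed carbon and nitrogen contents; the values are illustrative, not measurements from this study.

```python
# Illustrative only: C/N balancing for press mud + bagasse co-digestion.
def blend_cn(c_a, n_a, c_b, n_b, frac_a):
    """C/N ratio of a two-substrate mix (dry basis), given carbon and nitrogen
    mass fractions and the mass fraction of substrate A in the blend."""
    carbon = frac_a * c_a + (1 - frac_a) * c_b
    nitrogen = frac_a * n_a + (1 - frac_a) * n_b
    return carbon / nitrogen

# Assumed compositions: press mud is nitrogen-rich, bagasse is carbon-rich.
PM_C, PM_N = 0.35, 0.025      # press mud: C/N ~ 14
BG_C, BG_N = 0.45, 0.004      # bagasse:  C/N ~ 112
TARGET = 30.0                 # commonly cited optimum range is roughly 20-30

# Find the press mud fraction whose blend C/N is closest to the target.
fractions = [i / 100 for i in range(101)]
best = min(fractions, key=lambda f: abs(blend_cn(PM_C, PM_N, BG_C, BG_N, f) - TARGET))
print(f"press mud fraction {best:.2f} -> C/N {blend_cn(PM_C, PM_N, BG_C, BG_N, best):.1f}")
```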

Keywords: anaerobic digestion, carbon neutral fuel, press mud, lignocellulosic biomass

Procedia PDF Downloads 169
182 A Randomized, Controlled Trial to Test Behavior Change Techniques to Improve Low Intensity Physical Activity in Older Adults

Authors: Ciaran Friel, Jerry Suls, Mark Butler, Patrick Robles, Samantha Gordon, Frank Vicari, Karina W. Davidson

Abstract:

Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low. This is despite the fact that scientific evidence supports that any increase in physical activity is positively correlated with health benefits. Behavior change techniques (BCTs) have demonstrated effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a Personalized Trials (N-of-1) design to evaluate the efficacy of using four BCTs to promote an increase in low-intensity physical activity (2,000 steps of walking per day) in adults aged 45-75 years old. The 4 BCTs tested were goal setting, action planning, feedback, and self-monitoring. BCTs were tested in random order and delivered by text message prompts requiring participant engagement. The study recruited health system employees in the target age range, without mobility restrictions and demonstrating interest in increasing their daily activity by a minimum of 2,000 steps per day for a minimum of five days per week. Participants were sent a Fitbit® fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. In the 8-week intervention phase of the study, participants received each of the four BCTs, in random order, for a two-week period. Text message prompts were delivered daily each morning at a consistent time. All prompts required participant engagement to acknowledge receipt of the BCT message. Engagement is dependent upon the BCT message and may have included recording that a detailed plan for walking has been made or confirmed a daily step goal (action planning, goal setting). Additionally, participants may have been directed to a study dashboard to view their step counts or compare themselves to their baseline average step count (self-monitoring, feedback). At the end of each two-week testing interval, participants were asked to complete the Self-Efficacy for Walking Scale (SEW_Dur), a validated measure that assesses the participant’s confidence in walking incremental distances, and a survey measuring their satisfaction with the individual BCT that they tested. At the end of their trial, participants received a personalized summary of their step data in response to each individual BCT. The analysis will examine the novel individual-level heterogeneity of treatment effect made possible by N-of-1 design and pool results across participants to efficiently estimate the overall efficacy of the selected behavioral change techniques in increasing low-intensity walking by 2,000 steps, five days per week. Self-efficacy will be explored as the likely mechanism of action prompting behavior change. This study will inform the providers and demonstrate the feasibility of an N-of-1 study design to effectively promote physical activity as a component of healthy aging.
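
One way to pool personalized (N-of-1) trials of this kind is a mixed model with a random intercept per participant; the sketch below assumes a hypothetical long-format table of daily step counts and BCT condition labels and is not the study's pre-specified analysis.

```python
# Individual and pooled estimates of BCT effects on daily steps (illustrative).
import pandas as pd
import statsmodels.formula.api as smf

steps = pd.read_csv("fitbit_daily_steps.csv")  # hypothetical columns: participant, bct, daily_steps

# Individual-level effects: one OLS per participant, baseline period as reference
per_person = {
    pid: smf.ols("daily_steps ~ C(bct, Treatment('baseline'))", data=d).fit().params
    for pid, d in steps.groupby("participant")
}

# Pooled estimate across participants: random intercept per participant
pooled = smf.mixedlm(
    "daily_steps ~ C(bct, Treatment('baseline'))",
    data=steps,
    groups=steps["participant"],
).fit()
print(pooled.summary())
```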

Keywords: aging, exercise, habit, walking

Procedia PDF Downloads 92
181 Improving Preconception Health and Lifestyle Behaviours through Digital Health Intervention: The OptimalMe Program

Authors: Bonnie R. Brammall, Rhonda M. Garad, Helena J. Teede, Cheryce L. Harrison

Abstract:

Introduction: Reproductive aged women are at high-risk for accelerated weight gain and obesity development, with pregnancy recognised as a critical contributory life phase. Healthy lifestyle interventions during the preconception and antenatal period improve maternal and infant health outcomes. Yet, interventions from preconception through to postpartum and translation and implementation into real-world healthcare settings remain limited. OptimalMe is a randomised, hybrid implementation effectiveness study of evidence-based healthy lifestyle intervention. Here, we report engagement, acceptability of the intervention during preconception, and self-reported behaviour change outcomes as a result of the preconception phase of the intervention. Methods: Reproductive aged women who upgraded their private health insurance to include pregnancy and birth cover, signalling a pregnancy intention, were invited to participate. Women received access to an online portal with preconception health and lifestyle modules, goal-setting and behaviour change tools, monthly SMS messages, and two coaching sessions (randomised to video or phone) prior to pregnancy. Results: Overall n=527 expressed interest in participating. Of these, n=33 did not meet inclusion criteria, n=8 were not contactable for eligibility screening, and n=177 failed to engage after the screening, leaving n=309 who were enrolled in OptimalMe and randomised to intervention delivery method. Engagement with coaching sessions dropped by 25% for session two, with no difference between intervention groups. Women had a mean (SD) age of 31.7 (4.3) years and, at baseline, a self-reported mean BMI of 25.7 (6.1) kg/m², with 55.8% (n=172) of a healthy BMI. Behaviour was sub-optimal with infrequent self-weighing (38.1%), alcohol consumption prevalent (57.1%), sub-optimal pre-pregnancy supplementation (61.5%), and incomplete medical screening. Post-intervention 73.2% of women reported engagement with a GP for preconception care and improved lifestyle behaviour (85.5%), since starting OptimalMe. Direct pre-and-post comparison of individual participant data showed that of 322 points of potential change (up-to-date cervical screening, elimination of high-risk behaviours [alcohol, drugs, smoking], uptake of preconception supplements and improved weighing habits) 158 (49.1%) points of change were achieved. Health coaching sessions were found to improve accountability and confidence, yet further personalisation and support were desired. Engagement with video and phone sessions was comparable, having similar impacts on behaviour change, and both methods were well accepted and increased women's accountability. Conclusion: A low-intensity digital health and lifestyle program with embedded health coaching can improve the uptake of preconception care and lead to self-reported behaviour change. This is the first program of its kind to reach an otherwise healthy population of women planning a pregnancy. Women who were otherwise healthy showed divergence from preconception health and lifestyle objectives and benefited from the intervention. OptimalMe shows promising results for population-based behaviour change interventions that can improve preconception lifestyle habits and increase engagement with clinical health care for pregnancy preparation.

Keywords: preconception, pregnancy, preventative health, weight gain prevention, self-management, behaviour change, digital health, telehealth, intervention, women's health

Procedia PDF Downloads 91
180 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary alternative given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) training a seed ASR model with a DNN using a set of audios and their respective transcriptions; the DNN was initialized with one hidden layer, the number of hidden layers was increased during training to five, and a refinement of the weight matrices and bias terms with Stochastic Gradient Descent (SGD) training was also performed, with the cross-entropy criterion as the objective function; (b) decoding/testing a set of unlabeled data with the obtained seed model; (c) selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three lattice-based confidence scores (the graph cost, the acoustic cost and a combination of both) were used as the selection technique. The performance of the ASR system is evaluated by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result achieved a relative WER improvement of 6%. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
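
Step (c) above amounts to keeping only those automatically transcribed utterances whose lattice-based confidence exceeds a threshold before retraining. The sketch below illustrates that selection step with assumed field names and an assumed combination weight; it is not the authors' implementation.

```python
# Confidence-based selection of pseudo-labeled utterances for retraining.
from dataclasses import dataclass

@dataclass
class DecodedUtterance:
    audio_id: str
    hypothesis: str
    graph_score: float     # confidence derived from the lattice graph cost, mapped to [0, 1]
    acoustic_score: float  # confidence derived from the lattice acoustic cost, mapped to [0, 1]

def confidence(utt: DecodedUtterance, alpha: float = 0.5) -> float:
    """Combined confidence: weighted mix of graph- and acoustic-based scores."""
    return alpha * utt.graph_score + (1.0 - alpha) * utt.acoustic_score

def select_for_retraining(decoded, threshold=0.8, alpha=0.5):
    """Return (audio_id, hypothesis) pairs confident enough to be added to the
    labeled training set for the next DNN training round."""
    return [(u.audio_id, u.hypothesis)
            for u in decoded if confidence(u, alpha) >= threshold]

# Example: two decoded utterances, only the first passes the threshold
batch = [DecodedUtterance("utt1", "hola mundo", 0.9, 0.85),
         DecodedUtterance("utt2", "buenos dias", 0.4, 0.5)]
print(select_for_retraining(batch))
```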

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 339
179 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recent developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand of acquiring higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation for areal rainfall and for flood modelling and prediction. In a certain study, even using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistics method (variance-reduction method) that is combined with simulated annealing was used as an algorithm of optimization in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on the station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor is optimized and redesigned by using rainfall, humidity, solar radiation, temperature and wind speed data during monsoon season (November – February) for the period of 1975 – 2008. Three different semivariogram models which are Spherical, Gaussian and Exponential were used and their performances were also compared in this study. Cross validation technique was applied to compute the errors and the result showed that exponential model is the best semivariogram. It was found that the proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance and 20 of the existing ones were removed and relocated. An existing network may consist of redundant stations that may make little or no contribution to the network performance for providing quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations that were optimally relocated into new locations to investigate their influence in the calculated estimated variance and the second case explored the possibility to relocate all 84 existing stations into new locations to determine the optimal position. The relocations of the stations in both cases have shown that the new optimal locations have managed to reduce the estimated variance and it has proven that locations played an important role in determining the optimal network.
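
The optimization described above can be sketched as simulated annealing over subsets of candidate gauge sites. In the sketch below, a placeholder objective rewards spatial spread so the code runs end to end; in the actual method the objective would be the kriging (variance-reduction) estimation variance computed from the fitted exponential semivariogram.

```python
# Simulated annealing over gauge subsets (placeholder objective).
import math
import random

def estimation_variance(subset, coords):
    """Placeholder objective. In the actual method this would be the kriging
    estimation variance from the fitted semivariogram; here we simply penalize
    clustered stations so that the sketch runs end to end."""
    pts = [coords[i] for i in subset]
    spread = sum(math.dist(a, b) for a in pts for b in pts)
    return -spread  # lower objective = better spatial spread

def anneal(coords, k, iters=2000, t0=1.0, cooling=0.995):
    current = random.sample(range(len(coords)), k)
    cur_val = estimation_variance(current, coords)
    best, best_val, t = list(current), cur_val, t0
    for _ in range(iters):
        cand = list(current)
        cand[random.randrange(k)] = random.choice(
            [i for i in range(len(coords)) if i not in current])
        cand_val = estimation_variance(cand, coords)
        # Accept improvements always, worse moves with a temperature-dependent chance
        if cand_val < cur_val or random.random() < math.exp((cur_val - cand_val) / t):
            current, cur_val = cand, cand_val
            if cur_val < best_val:
                best, best_val = list(current), cur_val
        t *= cooling
    return best

# 84 candidate sites, keep the best 64 (mirroring the numbers in the abstract)
coords = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(84)]
print(sorted(anneal(coords, k=64)))
```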

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 302
178 Estimation of Rock Strength from Diamond Drilling

Authors: Hing Hao Chan, Thomas Richard, Masood Mostofi

Abstract:

The mining industry relies on an estimate of rock strength at several stages of a mine life cycle: mining (excavating, blasting, tunnelling) and processing (crushing and grinding), both very energy-intensive activities. An effective comminution design that can yield significant dividends often requires a reliable estimate of the material rock strength. Common laboratory tests such as rod, ball mill, and uniaxial compressive strength share common shortcomings such as time, sample preparation, bias in plug selection cost, repeatability, and sample amount to ensure reliable estimates. In this paper, the authors present a methodology to derive an estimate of the rock strength from drilling data recorded while coring with a diamond core head. The work presented in this paper builds on a phenomenological model of the bit-rock interface proposed by Franca et al. (2015) and is inspired by the now well-established use of the scratch test with PDC (Polycrystalline Diamond Compact) cutter to derive the rock uniaxial compressive strength. The first part of the paper introduces the phenomenological model of the bit-rock interface for a diamond core head that relates the forces acting on the drill bit (torque, axial thrust) to the bit kinematic variables (rate of penetration and angular velocity) and introduces the intrinsic specific energy or the energy required to drill a unit volume of rock for an ideally sharp drilling tool (meaning ideally sharp diamonds and no contact between the bit matrix and rock debris) that is found well correlated to the rock uniaxial compressive strength for PDC and roller cone bits. The second part describes the laboratory drill rig, the experimental procedure that is tailored to minimize the effect of diamond polishing over the duration of the experiments, and the step-by-step methodology to derive the intrinsic specific energy from the recorded data. The third section presents the results and shows that the intrinsic specific energy correlates well to the uniaxial compressive strength for the 11 tested rock materials (7 sedimentary and 4 igneous rocks). The last section discusses best drilling practices and a method to estimate the rock strength from field drilling data considering the compliance of the drill string and frictional losses along the borehole. The approach is illustrated with a case study from drilling data recorded while drilling an exploration well in Australia.
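
For illustration, a classical Teale-style mechanical specific energy can be computed from the recorded torque, thrust, rotation speed and rate of penetration, as sketched below; this stands in for, and is not identical to, the Franca et al. (2015) bit-rock model used in the paper, and the bit dimensions and operating values are assumed.

```python
# Teale-style specific energy from drilling data (illustrative values).
import math

def specific_energy(thrust_n, torque_nm, rpm, rop_m_per_h, od_mm, id_mm):
    """Specific energy (MPa) for a coring bit of given outer/inner diameter."""
    area = math.pi / 4 * ((od_mm / 1000) ** 2 - (id_mm / 1000) ** 2)  # annular area, m^2
    omega = 2 * math.pi * rpm / 60            # angular velocity, rad/s
    rop = rop_m_per_h / 3600                  # rate of penetration, m/s
    e = thrust_n / area + torque_nm * omega / (area * rop)  # J/m^3 = Pa
    return e / 1e6

# Example with assumed operating values and assumed NQ-size core head dimensions
print(f"{specific_energy(8000, 45, 900, 6.0, od_mm=75.7, id_mm=47.6):.0f} MPa")
```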

Keywords: bit-rock interaction, drilling experiment, impregnated diamond drilling, uniaxial compressive strength

Procedia PDF Downloads 137
177 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by the presence of flow separation, recirculation and reattachment, for a simple geometry. This type of fluid behaviour takes place in many practical engineering applications, hence the reason for being investigated. Historically, fluid flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique is investigated to such type of flows, at various Reynolds numbers corresponding to different flow regimes. The use of this measuring technique in separated flows is very difficult to find in literature. Besides, most of the situations where the Reynolds number effect is evaluated in separated flows are in numerical modelling. The ADV technique has the advantage in providing nearly non-invasive measurements, which is important in resolving turbulence. The ADV Nortek Vectrino+ was used to characterize the flow, in a recirculating laboratory flume, at various Reynolds Numbers (Reh = 3738, 5452, 7908 and 17388) based on the step height (h), in order to capture different flow regimes, and the results compared to those obtained using other measuring techniques. To compare results with other researchers, the step height, expansion ratio and the positions upstream and downstream the step were reproduced. The post-processing of the AVD records was performed using a customized numerical code, which implements several filtering techniques. Subsequently, the Vectrino noise level was evaluated by computing the power spectral density for the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients and reattachment lengths were obtained for each Reh. Turbulent kinetic energy, Reynolds shear stresses and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis was carried out, for the measured variables, using the moving block bootstrap technique. Low noise levels were obtained after implementing the post-processing techniques, showing their effectiveness. Besides, the errors obtained in the uncertainty analysis were relatively low, in general. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique and a good agreement was found. The ADV technique proved to be able to characterize the flow properly over a backward-facing step, although additional caution should be taken for measurements very close to the bottom. The ADV measurements showed reliable results regarding: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; d) the identification of the transition from transitional to turbulent flows. Despite being a relatively inexpensive technique, acoustic Doppler velocimetry can be used with confidence in separated flows and thus very useful for numerical model validation. However, it is very important to perform adequate post-processing of the acquired data, to obtain low noise levels, thus decreasing the uncertainty.
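
The noise-level check described above can be sketched as a Welch power spectral density of the despiked stream-wise velocity series, as below; the sampling rate and file layout are assumptions, and the -5/3 reference line simply marks the expected inertial-subrange slope.

```python
# PSD of the stream-wise ADV velocity to inspect the Doppler noise floor.
import numpy as np
from scipy.signal import welch
import matplotlib.pyplot as plt

fs = 100.0                         # Hz, assumed Vectrino+ sampling rate
u = np.loadtxt("vectrino_u.txt")   # hypothetical despiked stream-wise velocity series

f, pxx = welch(u - u.mean(), fs=fs, nperseg=4096)

plt.loglog(f, pxx)
plt.xlabel("Frequency [Hz]")
plt.ylabel("PSD of u [m$^2$ s$^{-2}$ Hz$^{-1}$]")
# In the inertial subrange the spectrum should follow a -5/3 slope; a flat
# high-frequency tail indicates the noise floor.
plt.loglog(f[1:], pxx[10] * (f[1:] / f[10]) ** (-5 / 3), "--", label="-5/3 slope")
plt.legend()
plt.show()
```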

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 148
176 Evaluation of Soil Erosion Risk and Prioritization for Implementation of Management Strategies in Morocco

Authors: Lahcen Daoudi, Fatima Zahra Omdi, Abldelali Gourfi

Abstract:

In Morocco, as in most Mediterranean countries, water scarcity is a common situation because of low and unevenly distributed rainfall. The expansion of irrigated lands, as well as the growth of urban and industrial areas and tourist resorts, contributes to an increase in water demand. Therefore, in the 1960s Morocco embarked on an ambitious program to increase the number of dams to boost water retention capacity. However, the decrease in the capacity of these reservoirs caused by sedimentation is a major problem; it is estimated at 75 million m³/year. Dams and reservoirs become unusable for their intended purposes due to sedimentation in large rivers resulting from soil erosion. Soil erosion is an important driving force in processes affecting the landscape and has become one of the most serious environmental problems, raising much interest throughout the world. Monitoring soil erosion risk is an important part of soil conservation practice, and estimation of soil loss risk is the first step toward successful control of water erosion. The aim of this study is to estimate the soil loss risk and its spatial distribution in the different regions of Morocco and to prioritize areas for soil conservation interventions. The approach followed is the Revised Universal Soil Loss Equation (RUSLE) using remote sensing and GIS, which is the most popular empirically based model used globally for erosion prediction and control. This model has been tested in many agricultural watersheds in the world, particularly in large-scale basins, due to the simplicity of the model formulation and the easy availability of the dataset. The spatial distribution of the annual soil loss was elaborated by combining several factors: rainfall erosivity, soil erodibility, topography, and land cover. The average annual soil loss estimated in several watersheds of Morocco varies from 0 to 50 t/ha/year. Watersheds characterized by high erosion vulnerability are located in the north (Rif Mountains) and more particularly in the central part of Morocco (High Atlas Mountains). This variation of vulnerability is highly correlated with slope variation, which indicates that the topographic factor is the main agent of soil erosion within these catchments. These results could be helpful for the planning of natural resources management and for implementing the sustainable long-term management strategies that are necessary for soil conservation and for extending the projected economic life of the dams.
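
The RUSLE overlay used above multiplies the factor grids cell by cell, A = R x K x LS x C x P, with A the annual soil loss in t/ha/yr. The sketch below uses small arrays as stand-ins for the GIS layers.

```python
# RUSLE cell-by-cell overlay: A = R * K * LS * C * P (illustrative arrays).
import numpy as np

R = np.array([[90.0, 120.0], [150.0, 180.0]])   # rainfall erosivity
K = np.array([[0.30, 0.25], [0.35, 0.40]])      # soil erodibility
LS = np.array([[1.2, 3.5], [6.0, 9.5]])         # slope length and steepness
C = np.array([[0.10, 0.20], [0.30, 0.45]])      # cover management
P = np.ones_like(R)                              # support practice (none assumed)

A = R * K * LS * C * P
print(A)            # annual soil loss per cell, t/ha/yr
print(A.max())      # hot spots to prioritize for conservation
```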

Keywords: soil loss, RUSLE, GIS-remote sensing, watershed, Morocco

Procedia PDF Downloads 461
175 Comparisons of Drop Jump and Countermovement Jump Performance for Male Basketball Players with and without Low-Dye Taping Application

Authors: Chung Yan Natalia Yeung, Man Kit Indy Ho, Kin Yu Stan Chan, Ho Pui Kipper Lam, Man Wah Genie Tong, Tze Chung Jim Luk

Abstract:

Excessive foot pronation is a well-known risk factor for knee and foot injuries such as patellofemoral pain, patellar and Achilles tendinopathy, and plantar fasciitis. Low-Dye taping (LDT) is commonly applied by basketball players to control excessive foot pronation for pain control and injury prevention. The primary potential benefits of using LDT include providing additional support to the medial longitudinal arch and restricting excessive midfoot and subtalar motion in weight-bearing activities such as running and landing. Meanwhile, the restriction provided by the rigid tape may also potentially limit functional joint movements and sports performance. Coaches and athletes need to weigh the potential benefits and harmful effects before deciding whether applying the LDT technique is worthwhile. However, the influence of using LDT on basketball-related performance such as explosive and reactive strength is not well understood. Therefore, the purpose of this study was to investigate the change in drop jump (DJ) and countermovement jump (CMJ) performance before and after LDT application in collegiate male basketball players. In this within-subject crossover study, 12 healthy male basketball players (age: 21.7 ± 2.5 years) with at least 3 years of regular basketball training experience were recruited. The navicular drop (ND) test was adopted for screening, and only those with excessive pronation (ND ≥ 10 mm) were included. Participants with recent lower limb injury history were excluded. Recruited subjects were required to perform the ND, DJ (from a platform of 40 cm height) and CMJ (without arm swing) tests in series under taped and non-taped conditions in counterbalanced order. The reactive strength index (RSI) was calculated as the flight time divided by the ground contact time. For the DJ and CMJ tests, the best of three trials was used for analysis. The difference between taped and non-taped conditions for each test was further expressed as a standardized effect ± 90% confidence interval (CI) with clinical magnitude-based inference (MBI). Paired-samples t-tests showed a significant decrease in ND (-4.68 ± 1.44 mm; 95% CI: -3.77, -5.60; p < 0.05), while MBI indicated a most likely beneficial, large effect (standardized effect: -1.59 ± 0.27) in the LDT condition. For the DJ test, significant increases in both flight time (25.25 ± 29.96 ms; 95% CI: 6.22, 44.28; p < 0.05) and RSI (0.22 ± 0.22; 95% CI: 0.08, 0.36; p < 0.05) were observed. In the taped condition, MBI showed a very likely beneficial, moderate effect (standardized effect: 0.77 ± 0.49) on flight time, a possibly beneficial, small effect (standardized effect: -0.26 ± 0.29) on ground contact time, and a very likely beneficial, moderate effect (standardized effect: 0.77 ± 0.42) on RSI. No significant difference in CMJ was observed (95% CI: -2.73, 2.08; p > 0.05). For basketball players with pes planus, applying LDT could substantially support the foot by elevating the navicular height and may provide acute beneficial effects on reactive strength performance. Meanwhile, no significant harmful effect on CMJ was observed. Basketball players may consider applying LDT before games or training to enhance reactive strength performance. However, since the observed effects in this study cannot be generalized to players without excessive foot pronation, further studies on players with normal foot arch or navicular height are recommended.
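
Two of the derived quantities reported above are straightforward to compute: the reactive strength index (flight time divided by ground contact time) and the standardized (Cohen's d-type) effect used in the magnitude-based inference. The sketch below shows both with illustrative numbers.

```python
# Reactive strength index and standardized effect size (illustrative values).
import numpy as np

def reactive_strength_index(flight_ms, contact_ms):
    """RSI = flight time / ground contact time (both in ms)."""
    return flight_ms / contact_ms

def standardized_effect(taped, untaped):
    """Mean difference divided by the pooled SD of the two conditions."""
    taped, untaped = np.asarray(taped, float), np.asarray(untaped, float)
    pooled_sd = np.sqrt((taped.var(ddof=1) + untaped.var(ddof=1)) / 2)
    return (taped.mean() - untaped.mean()) / pooled_sd

print(reactive_strength_index(flight_ms=520, contact_ms=240))
print(standardized_effect([1.9, 2.1, 2.3], [1.7, 1.8, 2.0]))
```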

Keywords: flight time, pes planus, pronated foot, reactive strength index

Procedia PDF Downloads 155
174 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Mumbai has traditionally been the epicenter of India's trade and commerce, and its existing major ports, such as Mumbai Port and Jawaharlal Nehru Port (JN) situated in the Thane estuary, are developing their waterfront facilities. Various developments in this region over the past decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water owing to the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to the existence of shallower depths between JN Port and Ulwe Bunder. In order to solve these problems, it is essential to have information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; therefore, artificial intelligence was applied to predict water levels by training a network on the tide data measured over one lunar tidal cycle. A two-layered feed-forward Artificial Neural Network (ANN) with back-propagation training algorithms, namely Gradient Descent (GD) and Levenberg-Marquardt (LM), was used to predict the yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi for one lunar tidal cycle (2013) were used to train, validate and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give a reasonably accurate estimation of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using the neural network trained with the measured tide data (2000) of Apollo and Pir-Pau. The analysis of the measured data and the study reveal the following: the measured tidal data at Pir-Pau, Vashi and Ulwe indicate a maximum amplification of the tide by about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); the LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer; and the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels that can be used to plan the pumping operation at Pir-Pau and improve the ship schedule at Ulwe.
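
As an illustration of the tide-prediction network described above, the sketch below trains a small feed-forward regressor on reference-station tide and time features; scikit-learn's MLPRegressor is used as a stand-in because it does not offer the Levenberg-Marquardt training used in the study, and the input columns and file name are assumptions.

```python
# Feed-forward ANN sketch for predicting tide at a target station.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical layout: tide at the reference station (Apollo Bunder) plus time
# features as inputs, tide at the target station (Ulwe) as output.
data = pd.read_csv("tide_2013.csv")
X = data[["apollo_tide_m", "hour_sin", "hour_cos", "lunar_day"]].values
y = data["ulwe_tide_m"].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)

pred = net.predict(X_te)
r = np.corrcoef(pred, y_te)[0, 1]
print(f"correlation coefficient R = {r:.3f}")
```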

Keywords: artificial neural network, back-propagation, tide data, training algorithm

Procedia PDF Downloads 483
173 Modeling Diel Trends of Dissolved Oxygen for Estimating the Metabolism in Pristine Streams in the Brazilian Cerrado

Authors: Wesley A. Saltarelli, Nicolas R. Finkler, Adriana C. P. Miwa, Maria C. Calijuri, Davi G. F. Cunha

Abstract:

The metabolism of the streams is an indicator of ecosystem disturbance due to the influences of the catchment on the structure of the water bodies. The study of the respiration and photosynthesis allows the estimation of energy fluxes through the food webs and the analysis of the autotrophic and heterotrophic processes. We aimed at evaluating the metabolism in streams located in the Brazilian savannah, Cerrado (Sao Carlos, SP), by determining and modeling the daily changes of dissolved oxygen (DO) in the water during one year. Three water bodies with minimal anthropogenic interference in their surroundings were selected, Espraiado (ES), Broa (BR) and Canchim (CA). Every two months, water temperature, pH and conductivity are measured with a multiparameter probe. Nitrogen and phosphorus forms are determined according to standard methods. Also, canopy cover percentages are estimated in situ with a spherical densitometer. Stream flows are quantified through the conservative tracer (NaCl) method. For the metabolism study, DO (PME-MiniDOT) and light (Odyssey Photosynthetic Active Radiation) sensors log data for at least three consecutive days every ten minutes. The reaeration coefficient (k2) is estimated through the method of the tracer gas (SF6). Finally, we model the variations in DO concentrations and calculate the rates of gross and net primary production (GPP and NPP) and respiration based on the one station method described in the literature. Three sampling were carried out in October and December 2015 and February 2016 (the next will be in April, June and August 2016). The results from the first two periods are already available. The mean water temperatures in the streams were 20.0 +/- 0.8C (Oct) and 20.7 +/- 0.5C (Dec). In general, electrical conductivity values were low (ES: 20.5 +/- 3.5uS/cm; BR 5.5 +/- 0.7uS/cm; CA 33 +/- 1.4 uS/cm). The mean pH values were 5.0 (BR), 5.7 (ES) and 6.4 (CA). The mean concentrations of total phosphorus were 8.0ug/L (BR), 66.6ug/L (ES) and 51.5ug/L (CA), whereas soluble reactive phosphorus concentrations were always below 21.0ug/L. The BR stream had the lowest concentration of total nitrogen (0.55mg/L) as compared to CA (0.77mg/L) and ES (1.57mg/L). The average discharges were 8.8 +/- 6L/s (ES), 11.4 +/- 3L/s and CA 2.4 +/- 0.5L/s. The average percentages of canopy cover were 72% (ES), 75% (BR) and 79% (CA). Significant daily changes were observed in the DO concentrations, reflecting predominantly heterotrophic conditions (respiration exceeded the gross primary production, with negative net primary production). The GPP varied from 0-0.4g/m2.d (in Oct and Dec) and the R varied from 0.9-22.7g/m2.d (Oct) and from 0.9-7g/m2.d (Dec). The predominance of heterotrophic conditions suggests increased vulnerability of the ecosystems to artificial inputs of organic matter that would demand oxygen. The investigation of the metabolism in the pristine streams can help defining natural reference conditions of trophic state.
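
The single-station metabolism calculation described above can be sketched as a diel DO balance, with photosynthesis distributed according to PAR, a constant respiration rate, and reaeration driven by k2, fitted to the logged DO curve; the data layout, saturation value and k2 below are assumptions.

```python
# One-station diel DO model: dDO/dt = GPP(t) - ER + k2*(DOsat - DO).
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

df = pd.read_csv("stream_do_10min.csv")      # hypothetical logger export
dt_d = 10 / (60 * 24)                        # 10-minute step, in days
do = df["do_mg_l"].values
par = df["par"].values
do_sat = 8.2                                 # mg/L, assumed from water temperature
k2 = 15.0                                    # 1/day, from the SF6 tracer test (assumed value)

def simulate(par, gpp_daily, er_daily):
    """Forward-integrate DO given daily GPP (distributed by PAR) and constant ER.
    Assumes the series covers a single diel cycle."""
    gpp_rate = gpp_daily * par / (par.sum() * dt_d + 1e-12)  # g O2/m3/day
    out = np.empty_like(do)
    out[0] = do[0]
    for i in range(1, len(do)):
        flux = gpp_rate[i - 1] - er_daily + k2 * (do_sat - out[i - 1])
        out[i] = out[i - 1] + flux * dt_d
    return out

(gpp, er), _ = curve_fit(simulate, par, do, p0=[1.0, 5.0])
print(f"GPP = {gpp:.2f} g O2/m3/d, ER = {er:.2f} g O2/m3/d, NEP = {gpp - er:.2f}")
```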

Keywords: low-order streams, metabolism, net primary production, trophic state

Procedia PDF Downloads 258
172 Using AI Based Software as an Assessment Aid for University Engineering Assignments

Authors: Waleed Al-Nuaimy, Luke Anastassiou, Manjinder Kainth

Abstract:

As the process of teaching has evolved with the advent of new technologies over the ages, so has the process of learning. Educators have perpetually found themselves on the lookout for new technology-enhanced methods of teaching in order to increase learning efficiency and decrease ever expanding workloads. Shortly after the invention of the internet, web-based learning started to pick up in the late 1990s and educators quickly found that the process of providing learning material and marking assignments could change thanks to the connectivity offered by the internet. With the creation of early web-based virtual learning environments (VLEs) such as SPIDER and Blackboard, it soon became apparent that VLEs resulted in higher reported computer self-efficacy among students, but at the cost of students being less satisfied with the learning process . It may be argued that the impersonal nature of VLEs, and their limited functionality may have been the leading factors contributing to this reported dissatisfaction. To this day, often faced with the prospects of assigning colossal engineering cohorts their homework and assessments, educators may frequently choose optimally curated assessment formats, such as multiple-choice quizzes and numerical answer input boxes, so that automated grading software embedded in the VLEs can save time and mark student submissions instantaneously. A crucial skill that is meant to be learnt during most science and engineering undergraduate degrees is gaining the confidence in using, solving and deriving mathematical equations. Equations underpin a significant portion of the topics taught in many STEM subjects, and it is in homework assignments and assessments that this understanding is tested. It is not hard to see that this can become challenging if the majority of assignment formats students are engaging with are multiple-choice questions, and educators end up with a reduced perspective of their students’ ability to manipulate equations. Artificial intelligence (AI) has in recent times been shown to be an important consideration for many technologies. In our paper, we explore the use of new AI based software designed to work in conjunction with current VLEs. Using our experience with the software, we discuss its potential to solve a selection of problems ranging from impersonality to the reduction of educator workloads by speeding up the marking process. We examine the software’s potential to increase learning efficiency through its features which claim to allow more customized and higher-quality feedback. We investigate the usability of features allowing students to input equation derivations in a range of different forms, and discuss relevant observations associated with these input methods. Furthermore, we make ethical considerations and discuss potential drawbacks to the software, including the extent to which optical character recognition (OCR) could play a part in the perpetuation of errors and create disagreements between student intent and their submitted assignment answers. It is the intention of the authors that this study will be useful as an example of the implementation of AI in a practical assessment scenario insofar as serving as a springboard for further considerations and studies that utilise AI in the setting and marking of science and engineering assignments.

Keywords: engineering education, assessment, artificial intelligence, optical character recognition (OCR)

Procedia PDF Downloads 122
171 Examining the Effects of National Disaster on the Performance of Hospitality Industry in Korea

Authors: Kim Sang Hyuck, Y. Park Sung

Abstract:

The outbreak of a national disaster reduces both international and domestic tourism demand, with adverse effects on the hospitality industry. Effective and efficient risk management regarding national disasters is increasingly required of hospitality industry practitioners and tourism policymakers. To establish an effective and efficient risk management strategy for national disasters, the essential prerequisite is a correct estimation of the effects of national disasters, in terms of the size and duration of the damage they cause to the hospitality industry. More specifically, national disasters are twofold: natural disasters and social disasters. In addition, the hospitality industry consists of several types of business, such as hotels, restaurants, travel agencies, etc. For these reasons, it is important to consider how each type of national disaster influences the performance of each type of hospitality business differently. Therefore, the purpose of this study is to examine the effects of national disasters on the hospitality industry in Korea according to the type of national disaster as well as the type of hospitality business. Monthly data were collected from Jan. 2000 to Dec. 2016. The indexes of industrial production for each hospitality industry in Korea were used as proxy variables for the performance of each hospitality industry. Two national disaster variables (natural disaster and social disaster) were treated as dummy variables. In addition, the exchange rate, the industrial production index, and the consumer price index were used as control variables in the research model. Impulse response analysis was used to examine the size and duration of the damage caused by each type of national disaster to each type of hospitality industry. The results of this study show that natural disasters and social disasters influence each type of hospitality industry differently. More specifically, the performance of the airline industry is negatively influenced by natural disasters 3 months after the incidence, whereas the negative impact of social disasters on the airline industry is not significant over the time periods considered. For the hotel industry, natural and social disasters negatively influence performance 5 months and 6 months after the incidence, respectively. The negative impact of natural disasters on the performance of the restaurant industry occurs 5 months after the incidence, and that of social disasters both 3 and 6 months after. Finally, natural and social disasters negatively influence the performance of travel agencies 3 months and 4 months after the incidence, respectively. In conclusion, the types of national disaster influence the performance of each type of hospitality industry in Korea differently. These results provide important information for establishing effective and efficient risk management strategies for national disasters.
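
As an illustration of the impulse response analysis described above, the sketch below fits a VAR in which the disaster dummies sit in the endogenous block (so that responses of each industry index to a disaster shock can be traced) and the controls enter as exogenous regressors; column names and the lag order are assumptions, not the authors' exact specification.

```python
# VAR impulse responses of hospitality production indexes to disaster dummies.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("korea_monthly.csv", index_col="date", parse_dates=True)

# Disaster dummies placed in the endogenous block so an "impulse" to them can be traced
endog = df[["natural_disaster", "social_disaster",
            "hotel_ipi", "restaurant_ipi", "airline_ipi", "travel_ipi"]]
exog = df[["exchange_rate", "industrial_production", "cpi"]]  # control variables

fitted = VAR(endog, exog=exog).fit(maxlags=6, ic="aic")

# Responses of each industry index over 12 months after a disaster shock
irf = fitted.irf(12)
irf.plot(orth=True, impulse="natural_disaster")
irf.plot(orth=True, impulse="social_disaster")
print(fitted.summary())
```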

Keywords: impulse response analysis, Korea, national disaster, performance of hospitality industry

Procedia PDF Downloads 184
170 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition

Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed

Abstract:

The rapid development experienced by India requires a huge amount of energy, and actual supply capacity additions have been consistently lower than the targets set by the government. According to the World Bank, 40% of residences are without electricity. In the 12th five-year plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW is solar and 2.1 GW is small hydro, with the rest met by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. India, according to IEC 61400-1, belongs to class IV wind conditions, so it is not possible to set up large-scale wind turbines at every location. The best choice, therefore, is a small-scale wind turbine at lower hub height that still gives good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine is designed. Various airfoil data are reviewed for selection of the airfoil for the blade profile. An airfoil suited to low wind conditions, i.e., low Reynolds number, is selected based on the coefficients of lift and drag and the angle of attack. For the design of the rotor blade, the standard Blade Element Momentum (BEM) theory is implemented. The performance of the blade is estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique. Rotor performance is estimated for the designed blade specifically for low wind conditions. The power production of the rotor is determined at different wind speeds for a particular pitch angle of the blade. At a pitch of 15°, the rotor gives a good cut-in speed of 2 m/s and produces around 350 W at a wind speed of 5 m/s. The tip speed ratio of the blade is taken as 6.5, for which the coefficient of performance of the rotor is calculated as 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 are calculated and checked against the partial safety factors for the wind turbine blade.
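
The iterative optimization of the axial and angular induction factors mentioned above can be sketched for a single blade element as below; the update equations are the standard BEM momentum/blade-element balance without tip-loss correction, and the airfoil polar is a placeholder, not the airfoil selected in the study.

```python
# BEM iteration for one blade element: converge axial (a) and angular (a') induction.
import math

def bem_element(r, R, B, chord, twist_deg, tsr, cl_cd_lookup, tol=1e-6):
    a, a_prime = 0.3, 0.0
    sigma = B * chord / (2 * math.pi * r)          # local solidity
    lam_r = tsr * r / R                            # local speed ratio
    for _ in range(200):
        phi = math.atan2(1 - a, lam_r * (1 + a_prime))   # inflow angle
        alpha = math.degrees(phi) - twist_deg            # angle of attack
        cl, cd = cl_cd_lookup(alpha)
        cn = cl * math.cos(phi) + cd * math.sin(phi)     # normal force coefficient
        ct = cl * math.sin(phi) - cd * math.cos(phi)     # tangential force coefficient
        a_new = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * cn) + 1)
        ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1)
        if abs(a_new - a) < tol and abs(ap_new - a_prime) < tol:
            break
        a, a_prime = a_new, ap_new
    return a, a_prime, phi

# Placeholder polar: roughly linear lift and small constant drag near the design alpha
def flat_polar(alpha_deg):
    return 0.1 * alpha_deg + 0.4, 0.02

print(bem_element(r=0.6, R=1.0, B=3, chord=0.08, twist_deg=6.0,
                  tsr=6.5, cl_cd_lookup=flat_polar))
```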

Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil

Procedia PDF Downloads 337
169 The Influence of English Immersion Program on Academic Performance: Case Study at a Sino-US Cooperative University in China

Authors: Leah Li Echiverri, Haoyu Shang, Yue Li

Abstract:

Wenzhou-Kean University (WKU) is a Sino-US Cooperative University in China. It practices the English Immersion Program (EIP), in which all courses are taught in English, and class discussions and presentations are pervasively interwoven into the design of students' learning experiences. This WKU model has had positive influences on students and is in some ways ahead of traditional college English majors. However, literature supporting the perceived positive outcomes of this teaching and learning model remains scarce. The distinctive profile of Chinese-ESL students in an English Medium of Instruction (EMI) environment contributes further to this scarcity compared with existing studies conducted among ESL learners in Western educational settings. Hence, the study investigated students' perceptions of the English Immersion Program and determined how it influences Chinese-ESL students' academic performance (AP). This research can provide empirical data helpful to educators, teaching practitioners, university administrators, and other researchers in making informed decisions when developing curricular reforms, instructional and pedagogical methods, and university-wide support programs using this educational model. The purpose of the study was to establish the relationship between the English Immersion Program and academic performance among Chinese-ESL students enrolled at WKU in the academic year 2020-2021. Course length, immersion location, course type, and instructional design were the constructs of the English Immersion Program, while English language learning, learning efficiency, and class participation were used to measure academic performance. A descriptive-correlational design was used in this cross-sectional research project, and a quantitative approach to data analysis was applied to determine the relationship between the English Immersion Program and Chinese-ESL students' academic performance. The research was conducted at WKU, a Chinese-American jointly established higher education institution located in Wenzhou, Zhejiang Province. Convenience, random, and snowball sampling were applied, yielding 283 students (a response rate of 10.5%) to represent the WKU student population. The questionnaire was posted on the survey website Wenjuanxing and shared via QQ and WeChat. Cronbach's alpha was used to test the reliability of the research instrument. Findings revealed that when professors integrate technology (PowerPoint, video, and audio) in teaching, students pay more attention, which contributes to the acquisition of more professional knowledge in their major courses. As to course immersion, students perceive WKU as a good place to study that provides them with a high degree of confidence to talk with their professors in English, which also contributes to their fluency and better pronunciation in communication. In the instructional design construct, the use of pictures and video clips, professors' non-verbal communication, and their demonstrated concern for students encouraged more active class participation. Findings on course length and academic performance indicated that students' perceptions of taking courses during the fall and spring terms can moderately contribute to their academic performance. In conclusion, the findings revealed a significantly strong positive relationship between course type, immersion location, instructional design, and academic performance.
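Since the abstract reports using Cronbach's alpha to check the reliability of the questionnaire, the short function below shows one common way that statistic is computed. The item matrix is invented purely for illustration and is not the study's survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) Likert-scale matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' totals
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical 5-point Likert responses for a 4-item EIP sub-scale.
demo = np.array([[4, 5, 4, 4],
                 [3, 3, 4, 3],
                 [5, 5, 5, 4],
                 [2, 3, 2, 3],
                 [4, 4, 5, 5]])
print(round(cronbach_alpha(demo), 2))   # ~0.91 for this made-up sample
```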

Keywords: class participation, English immersion program, English language learning, learning efficiency

Procedia PDF Downloads 174
168 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach, based on reinforcement learning in order to relax some of the current limiting assumptions.
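To illustrate the Bayesian updating described above, here is a minimal sketch of a discrete posterior over candidate targets being revised after each camera detection. The target list and the likelihood values are hypothetical and stand in for the paper's route-consistency model; they are not taken from the PeMS experiments.

```python
import numpy as np

# Hypothetical high-value targets and a uniform prior over them.
targets = ["stadium", "airport", "port"]
posterior = np.full(len(targets), 1.0 / len(targets))

def bayes_update(posterior, likelihood):
    """One Bayesian update: posterior ∝ likelihood × prior, renormalised."""
    posterior = posterior * likelihood
    return posterior / posterior.sum()

# Each detection yields P(suspect seen at that node | intended target),
# e.g. from how consistent the node is with a route to each target (values made up).
detections = [np.array([0.5, 0.3, 0.2]),
              np.array([0.7, 0.2, 0.1])]
for likelihood in detections:
    posterior = bayes_update(posterior, likelihood)

print(dict(zip(targets, posterior.round(3))))
```

Once the posterior for one target crosses a decision threshold, the system would report it; soft interventions would be chosen to make the next likelihood as informative as possible.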

Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights

Procedia PDF Downloads 115
167 The Potential Fresh Water Resources of Georgia and Sustainable Water Management

Authors: Nana Bolashvili, Vakhtang Geladze, Tamazi Karalashvili, Nino Machavariani, George Geladze, Davit Kartvelishvili, Ana Karalashvili

Abstract:

Fresh water is the major natural resource of Georgia. The average perennial sum of the rivers' runoff in Georgia is 52.77 km³, of which 9.30 km³ inflows from abroad; the major volume of transit river runoff is ascribed to the Chorokhi river. The average perennial runoff is 41.52 km³ in Western Georgia and 11.25 km³ in Eastern Georgia. The indices for Eastern and Western Georgia were calculated with 50% and 90% river runoff respectively, while the same index for other countries is calculated on the basis of a 50% river runoff. Out of the total volume of resources, 133.2 m³/sec (4.21 km³) has been geologically prospected by the State Commission on Reserves and acknowledged as reserves available for exploitation, of which 48% (2.02 km³) is in Western Georgia and 2.19 km³ in Eastern Georgia. Considering the acknowledged water reserves of all categories, per capita water resources amount to 2.2 m³/day, of which the high industrial category accounts for 0.88 m³/day of fresh drinking water. According to accepted norms, the possibility of using underground water reserves is 2.5 times higher than the long-term requirements of the country. The volume of abundant fresh-water reserves in Georgia is about 150 m³/sec (4.74 km³). Water in Georgia is consumed mostly in agriculture for irrigation purposes: 66.4% across Georgia as a whole, 72.4% in Eastern Georgia, and 38% in Western Georgia. According to the long-term forecast, the provision of the population and the territory with water resources in Eastern Georgia will be quite normal. The situation is somewhat different in the lower reaches of the Khrami and Iori rivers, but this could easily be overcome with corresponding financing. The present-day irrigation system in Georgia does not meet modern technical requirements; the overall efficiency of most schemes varies between 0.4 and 0.6, and the situation is similar for fresh water and public service water consumption. Reorganization of these systems, installation of water meters, and introduction of new irrigation methods without water loss will substantially increase the efficiency of water use. Moreover, new irrigation norms developed from agro-climatic, geographical, and hydrological perspectives will significantly reduce water waste. Taking all this into account, we estimate that irrigation of agricultural lands in Georgia requires 6.0 km³ of water, of which 5.5 km³ goes to irrigated arable areas in Eastern Georgia. Increasing the water supply to the Eastern Georgian territory and its population is possible by means of new water reservoirs, as the runoff of every river considerably exceeds the consumption volume. In conclusion, the fresh water resources in which Georgia is so rich could be a significant source for barter exchange and investment attraction. A certain volume of fresh water can be exported from Western Georgia quite trouble-free, without bringing any damage to the population or hydroecosystems. The precise volume of exported water per region and time, and the method and place of water consumption, should be defined after the assessment of the different hydroecosystems and detailed analyses of the water balance of the corresponding territories.

Keywords: GIS, management, rivers, water resources

Procedia PDF Downloads 369
166 Discriminant Shooting-Related Statistics between Winners and Losers 2023 FIBA U19 Basketball World Cup

Authors: Navid Ebrahmi Madiseh, Sina Esfandiarpour-Broujeni, Rahil Razeghi

Abstract:

Introduction: Quantitative analysis of game-related statistical parameters is widely used to evaluate basketball performance at both the individual and team levels. Non-free-throw shooting plays a crucial role as the primary scoring method and holds significant importance in the technical aspect of the game. The predictive value of game-related statistics has been explored in relation to various contextual and situational variables, and many similarities and differences have been found between different age groups and levels of competition. For instance, in the World Basketball Championships after the 2010 rule change, 2-point field goals distinguished winners from losers in women's games but not in men's games, and the impact of successful 3-point field goals on women's games was minimal. The study aimed to identify and compare discriminant shooting-related statistics between winning and losing teams in the men's and women's FIBA-U19-Basketball-World-Cup-2023 tournaments. Method: Data from 112 observations (2 per game) of 16 teams (for each gender) in the FIBA-U19-Basketball-World-Cup-2023 were selected as samples. The data were obtained from the official FIBA website using Python. The extracted information was organized into a DataFrame and consisted of twelve variables, including shooting percentages, attempts, and scoring ratios for 3-pointers, mid-range shots, paint shots, and free throws: Made% = scoring-type successful attempts / scoring-type total attempts (1); Free-throw-pts% (free throw score ratio) = (free throw score / total score) × 100 (2); Mid-pts% (mid-range score ratio) = (mid-range score / total score) × 100 (3); Paint-pts% (paint score ratio) = (paint score / total score) × 100 (4); 3p-pts% (three-point score ratio) = (three-point score / total score) × 100 (5). Independent t-tests were used to examine significant differences in shooting-related statistical parameters between winning and losing teams for both genders. Statistical significance was set at p < 0.05, and all statistical analyses were completed with SPSS, Version 18. Results: The results showed that 3p-made%, mid-pts%, paint-made%, paint-pts%, mid-attempts, and paint-attempts were significantly different between winners and losers in men (t = -3.465, p < 0.05; t = 3.681, p < 0.05; t = -5.884, p < 0.05; t = -3.007, p < 0.05; t = 2.549, p < 0.05; t = -3.921, p < 0.05). For women, significant differences between winners and losers were found for 3p-made%, 3p-pts%, paint-made%, and paint-attempts (t = -6.429, p < 0.05; t = -1.993, p < 0.05; t = -1.993, p < 0.05; t = -4.115, p < 0.05; t = 2.451, p < 0.05). Discussion: The research aimed to compare shooting-related statistics between winners and losers in men's and women's teams at the FIBA-U19-Basketball-World-Cup-2023. Results indicated that men's winners excelled in 3p-made%, paint-made%, paint-pts%, paint-attempts, and mid-attempts, consistent with previous studies. This study also found that losers in men's teams had a higher mid-pts% than winners, which was inconsistent with previous findings; it has been suggested that winners tend to prioritize statistically efficient shots while forcing the opponent to take mid-range shots. In women's games, significant differences in 3p-made%, 3p-pts%, paint-made%, and paint-attempts were observed, indicating that winners relied on riskier outside scoring strategies. Overall, winners exhibited higher accuracy in paint and 3-point shooting than losers, but they also relied more on outside offensive strategies. Additionally, winners acquired a higher ratio of their points from 3-point shots, which demonstrates their confidence in their skills and willingness to take risks at this competitive level.
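The comparison reported above rests on independent-samples t-tests between winning and losing teams. The sketch below shows that test in Python with SciPy; the scoring percentages are simulated stand-ins, since the FIBA box-score data themselves are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical paint-shot made% for winning and losing teams (56 games per gender).
winners = rng.normal(0.58, 0.06, 56)
losers = rng.normal(0.50, 0.06, 56)

t, p = stats.ttest_ind(winners, losers)   # independent-samples t-test
print(round(t, 3), round(p, 4))           # significant if p < 0.05
```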

Keywords: gender, losers, shoot-statistic, U19, winners

Procedia PDF Downloads 97
165 Book Exchange System with a Hybrid Recommendation Engine

Authors: Nilki Upathissa, Torin Wirasinghe

Abstract:

This solution addresses the challenges faced by traditional bookstores and the limitations of digital media, striking a balance between the tactile experience of printed books and the convenience of modern technology. The book exchange system offers a sustainable alternative, empowering users to access a diverse range of books while promoting community engagement. The user-friendly interfaces incorporated into the book exchange system ensure a seamless and enjoyable experience for users. Intuitive features for book management, search, and messaging facilitate effortless exchanges and interactions between users. By streamlining the process, the system encourages readers to explore new books aligned with their interests, enhancing the overall reading experience. Central to the system's success is the hybrid recommendation engine, which leverages advanced technologies such as Long Short-Term Memory (LSTM) models. By analyzing user input, the engine accurately predicts genre preferences, enabling personalized book recommendations. The hybrid approach integrates multiple technologies, including user interfaces, machine learning models, and recommendation algorithms, to ensure the accuracy and diversity of the recommendations. The evaluation of the book exchange system with the hybrid recommendation engine demonstrated exceptional performance across key metrics. The high accuracy score of 0.97 highlights the system's ability to provide relevant recommendations, enhancing users' chances of discovering books that resonate with their interests. The commendable precision, recall, and F1 scores further validate the system's efficacy in offering appropriate book suggestions. Additionally, the curve classifications substantiate the system's effectiveness in distinguishing positive and negative recommendations. This metric provides confidence in the system's ability to navigate the vast landscape of book choices and deliver recommendations that align with users' preferences. Furthermore, the implementation of this book exchange system with a hybrid recommendation engine has the potential to revolutionize the way readers interact with printed books. By facilitating book exchanges and providing personalized recommendations, the system encourages a sense of community and exploration within the reading community. Moreover, the emphasis on sustainability aligns with the growing global consciousness towards eco-friendly practices. With its robust technical approach and promising evaluation results, this solution paves the way for a more inclusive, accessible, and enjoyable reading experience for book lovers worldwide. In conclusion, the developed book exchange system with a hybrid recommendation engine represents a progressive solution to the challenges faced by traditional bookstores and the limitations of digital media. By promoting sustainability, widening access to printed books, and fostering engagement with reading, this system addresses the evolving needs of book enthusiasts. The integration of user-friendly interfaces, advanced machine learning models, and recommendation algorithms ensures accurate and diverse book recommendations, enriching the reading experience for users.
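To give a concrete sense of the LSTM component described above, the sketch below builds a minimal Keras classifier that maps a tokenised user query or reading history to a genre-preference distribution. The vocabulary size, sequence length, number of genres, and training data are all invented for illustration; this is not the authors' trained model or dataset.

```python
import numpy as np
from tensorflow.keras import layers, models

NUM_GENRES, VOCAB, SEQ_LEN = 8, 10000, 50   # hypothetical sizes

# Embedding -> LSTM -> softmax over genres: one simple way to predict genre
# preferences from sequential user input.
model = models.Sequential([
    layers.Embedding(input_dim=VOCAB, output_dim=64),
    layers.LSTM(64),
    layers.Dense(NUM_GENRES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in sequences and labels, purely to show the training call.
x = np.random.randint(0, VOCAB, size=(256, SEQ_LEN))
y = np.random.randint(0, NUM_GENRES, size=(256,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```

In a hybrid engine, the predicted genre distribution would typically be combined with collaborative or content-based signals before ranking candidate books.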

Keywords: recommendation systems, hybrid recommendation systems, machine learning, data science, long short-term memory, recurrent neural network

Procedia PDF Downloads 94
164 When It Wasn’t There: Understanding the Importance of High School Sports

Authors: Karen Chad, Louise Humbert, Kenzie Friesen, Dave Sandomirsky

Abstract:

Background: The COVID-19 pandemic presented many historic challenges to the sporting community. For organizations and individuals, sport was put on hold, resulting in social, economic, physical, and mental health consequences for all involved. High school sports are seen as an effective and accessible pathway for students to receive health, social, and academic benefits. Studies examining sport cessation due to COVID-19 found substantial negative effects on the physical and mental well-being of participants in the high school setting. However, the pandemic afforded an opportunity to examine sport participation and the value people place on their engagement in high school sport. Study objectives: (1) Examine the experiences of students, parents, administrators, officials, and coaches during a year without high school sports; (2) Understand why participants are involved in high school sports; and (3) Learn what supports are needed for future involvement. Methodology: A mixed-method design was used, including semi-structured interviews and a survey (SurveyMonkey software), which was disseminated electronically to high school students, coaches, school administrators, parents, and officials. Results: 1222 respondents completed the survey. Findings showed: (1) 100% of students participate in high school sports to improve their mental health, with more than 95% saying it keeps them active and healthy, helps them make friends and teaches teamwork, builds confidence and positive self-perceptions, teaches resiliency, enhances their connection to their school, and supports academic learning; (2) the top three reasons teachers coach are their desire to make a difference in the lives of students, their enjoyment and love of the sport, and a wish to give back; teachers said what they enjoy most is contributing to and watching athletes develop, direct involvement in students' sporting success, and the competitive atmosphere; (3) 90% of parents believe playing sports is a valuable experience for their child, 95% said it enriches student academic learning and educational experiences, and 97% encouraged their child to play school sports; (4) officials participate because of their enjoyment and love of the sport, their experience and expertise, a desire to make a difference in the lives of children, the competitive sporting atmosphere, and growing the sport; only 4% of officials said their participation was financially motivated; (5) 100% of administrators said high school sports are important for everyone, and 80% believed the pandemic would reduce the number of teachers coaching and affect student mental health and well-being. When there was no sport, many athletes got a part-time job and tried to stay active, with limited success; coaches, officials, and parents spent more time with family; and all participants did little physical activity, were bored, and struggled with poor mental and physical health. Respondents recommended better communication, promotion, and branding of high school sport benefits, equitable funding for all sports, athlete development, compensation and recognition for coaching, and simpler processes to strengthen the high school sport model. Conclusions: High school sport is an effective vehicle for athletes, parents, coaches, administrators, and officials to derive many positive outcomes, and when it is taken away, serious consequences prevail. Attention to these success factors will be important for the effectiveness of high school sports.

Keywords: physical activity, high school, sports, pandemic

Procedia PDF Downloads 145
163 A Regulator's Assessment of Consumer Risk When Evaluating a User Test for an Umbrella Brand Name in an over the Counter Medicine

Authors: A. Bhatt, C. Bassi, H. Farragher, J. Musk

Abstract:

Background: All medicines placed on the EU market are legally required to be accompanied by labelling and a package leaflet, which provide comprehensive information enabling their safe and appropriate use. Mock-ups, with the results of assessments using a target patient group, must be submitted with a marketing authorisation application. Consumers need confidence in non-prescription (OTC) medicines in order to manage their minor ailments, and umbrella brands assist purchasing decisions by enabling easy identification within a particular therapeutic area. A number of regulatory agencies have risk management tools and guidelines to assist in developing umbrella brands for OTC medicines; however, assessment and decision making remain subjective and inconsistent. This study presents an evaluation in the UK following the US FDA warning concerning methaemoglobinaemia after 21 reported cases (11 in children under 2 years) caused by OTC oral analgesics containing benzocaine. Methods: A standard face-to-face, structured, task-based user interview methodology with 25 consumers aged 15-91 years, using a standard questionnaire and rating scale, was conducted independently between June and October 2015 in the consumers' homes. The test evaluated whether individuals could discriminate between the labelling, safety information, and warnings on the cartons and package leaflets (PILs) of 3 different OTC medicine packs sharing the same umbrella name. Each pack was presented with a different information hierarchy, using differently coloured cartons and containing 3 different active ingredients: benzocaine (an oromucosal spray) and two lozenges containing 2,4-dichlorobenzyl alcohol with amylmetacresol, and hexylresorcinol, respectively (for the symptomatic relief of sore throat pain). The test was designed to determine whether the warnings on the carton and leaflet were sufficiently prominent and accessible to alert users that one product contained benzocaine, to the risk of methaemoglobinaemia, and to refer them to the leaflet for the signs of the condition and what to do should it occur. Results: Two consumers did not locate the warnings on the side of the pack but eventually found them on the back, and two suggestions were made to further improve the accessibility of the methaemoglobinaemia warning. Using a gold pack design for the oromucosal spray, all consumers could differentiate between the 3 drugs, the minimum age particulars, the pharmaceutical form, and the risk factor methaemoglobinaemia. The warnings for benzocaine were deemed clear or very clear, and the appearance of the 3 packs was judged either very well differentiated or quite well differentiated. The PIL test passed on all criteria. All consumers could use the product correctly and identify the risk factors, confirming that the critical information necessary for safe use was legible and easily accessible so that confusion and errors were minimised. Conclusion: Patients with known methaemoglobinaemia are likely to be vigilant in checking for benzocaine-containing products, despite similar umbrella brand names across a range of active ingredients. Despite these findings, the package design and spray format were not deemed sufficient to mitigate the potential safety risks associated with differences in target populations and contraindications when submitted to the regulatory agency. Although risk management tools are increasingly being used by agencies to provide objective assurance of package safety, further transparency, reduced subjectivity, and proportionate assessment of risk should be demonstrated.

Keywords: labelling, OTC, risk, user testing

Procedia PDF Downloads 309
162 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellite over Heterogenous Landscape: A Case Study in Hong Kong

Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong

Abstract:

Owing to the insufficient spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of WS temperature measurements for large-scale diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth could be enhanced at both annual and diurnal scales by using optimal LST data derived from satellite sensors. The trade-off between the spatial and temporal resolution of LST from satellite thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are required. It is well known from the existing literature that polar-orbiting satellites have the advantage of high spatial resolution, while geostationary satellites have high temporal resolution. Hence, this study aims to design a framework for the cross-comparison of LST data from polar-orbiting and geostationary satellites in a heterogeneous landscape. This could help in understanding the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 data will be used to represent the polar-orbiting satellite due to the availability of its long-term series, while Himawari-8 will be used as the geostationary data source because of its improved TIRS. The Hong Kong Special Administrative Region (HK SAR) is selected as the study area because of the heterogeneity of its landscape. LST will be retrieved from both satellites using the split-window algorithm (SWA), and the resulting data will be validated by comparing the satellite-derived LST with temperature data from automatic WS in HK SAR. The LST data will then be separated by land use class in HK SAR using the Global Land Cover by National Mapping Organization version 3 (GLCNMO 2013) data. The relationship between LST data from Landsat-8 and Himawari-8 will then be investigated by land-use class and across different seasons of the year in order to account for seasonal variation in the relationship. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. Findings from this study will reveal the relationship between the two satellite datasets by land use class within the study area and by season. While the information provided by this study will help in the optimal combination of LST data from the polar-orbiting (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR.
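As a rough illustration of the split-window idea used for LST retrieval, the sketch below applies the simplest two-band form, LST ≈ BT11 + a(BT11 − BT12) + b. The coefficients a and b and the brightness temperatures are placeholder values: operational algorithms use sensor-specific regression coefficients and add emissivity and atmospheric water-vapour correction terms, which are omitted here.

```python
import numpy as np

def split_window_lst(bt11, bt12, a=2.0, b=1.0):
    """Simplest split-window form: LST ≈ BT11 + a*(BT11 - BT12) + b (kelvin).

    a and b are placeholder coefficients; operational SWAs fit them per sensor
    and include emissivity and water-vapour correction terms.
    """
    return bt11 + a * (bt11 - bt12) + b

# Toy brightness temperatures (K) for three pixels in the ~11 µm and ~12 µm bands.
bt11 = np.array([300.2, 298.7, 302.1])
bt12 = np.array([298.9, 297.6, 300.4])
print(split_window_lst(bt11, bt12).round(1))   # [303.8 301.9 306.5]
```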

Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island

Procedia PDF Downloads 73
161 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike most self-driving vehicles, which are developed to operate among other vehicles on road networks, CATE will operate exclusively on campus walk-paths (potentially narrow passages) shared with pedestrians traveling between multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today's transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work: researchers from mechanical engineering, electrical engineering, and computer science are working together to attack the problem from different perspectives (hardware, software, and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location; users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration in which the vertices represent landmarks and the edges represent paths that the car should follow with designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm, and D* Lite will be explored to efficiently recompute the path when there are changes to the map. CATE shall avoid any static obstacles and walking pedestrians within a safe distance. Unlike traveling along traditional roadways, CATE's route directly coexists with pedestrians. To ensure the safety of pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance while also allowing CATE to operate along its intended route, and we will build prediction models for pedestrian traffic patterns. CATE shall improve its localization and operate in GPS-denied situations. CATE relies on its GPS for its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that fuses data from multiple sensors (such as GPS, IMU, and odometry) in order to increase the confidence of localization. We have also noticed that GPS signals can easily become degraded or blocked on campus due to high-rise buildings or trees; the UKF also helps here to generate a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
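Because A* over a landmark graph is named above as the default path planner, here is a minimal sketch of that search on a toy campus graph. The landmark names, coordinates, and travel-time weights are invented for illustration, and the straight-line heuristic assumes the edge weights are travel times at no more than 1.4 m/s so that it stays admissible.

```python
import heapq
import math

# Hypothetical campus landmarks with (x, y) coordinates in metres and directed
# edges weighted by average travel time in seconds.
coords = {"library": (0, 0), "quad": (120, 40), "gym": (250, 60),
          "dorms": (180, 200), "admin": (320, 180)}
graph = {"library": {"quad": 95}, "quad": {"gym": 100, "dorms": 140},
         "gym": {"admin": 110}, "dorms": {"admin": 120}, "admin": {}}

def heuristic(a, b, speed=1.4):
    """Straight-line travel-time estimate (admissible if speed bounds true speed)."""
    (xa, ya), (xb, yb) = coords[a], coords[b]
    return math.hypot(xa - xb, ya - yb) / speed

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, w in graph[node].items():
            ng = g + w
            if ng < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = ng
                heapq.heappush(frontier,
                               (ng + heuristic(nxt, goal), ng, nxt, path + [nxt]))
    return None

print(a_star("library", "admin"))   # (305.0, ['library', 'quad', 'gym', 'admin'])
```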

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 144
160 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care; a better understanding of increased health service utilization is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or excess zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Moreover, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean 1.33 (expected frequency of zero counts = 156), suggesting that excess zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under a classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the approximate maximum likelihood estimator (AMLE) can be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, both the existing tests and our proposed test imply that H0 should be rejected with a p-value less than 0.001; i.e., the zero-inflation effect is highly significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model, rather than the Poisson model, should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will produce statistically more reliable and precise information.
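A quick screening check for excess zeros of the kind quoted above (206 observed versus 156 Poisson-expected) compares the observed zero frequency with n·exp(−λ̂). The counts below are simulated stand-ins, not the study's consultation data, and this check is only a precursor to the formal score test proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated consultation counts: Poisson(1.33) with extra structural zeros injected.
counts = rng.poisson(1.33, size=500)
counts[rng.random(500) < 0.15] = 0

n, lam_hat = len(counts), counts.mean()
observed_zeros = int((counts == 0).sum())
expected_zeros = n * np.exp(-lam_hat)   # zero frequency implied by a plain Poisson fit
print(observed_zeros, round(expected_zeros, 1))   # observed > expected -> suspect zero inflation
```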

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 288
159 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of non-structural components (NSCs) requires an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and the peak seismic acceleration to which the NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, and the NSC damping effect, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of the code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration floor design spectra (FDS) directly from the corresponding uniform hazard spectra (UHS) (i.e., the design spectra for structural components). A database of 27 reinforced concrete (RC) buildings in which ambient vibration measurements (AVM) had been conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and floor response spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different NSC damping ratios (i.e., 2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically: the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC. The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings which have to remain functional even after an earthquake and cannot tolerate any damage to NSCs.
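For readers unfamiliar with how a pseudo-acceleration floor response spectrum is obtained from a floor acceleration history, the sketch below runs a unit-mass SDOF oscillator over a range of periods using Newmark average-acceleration integration. The input record is random noise standing in for a real floor motion, and the routine ignores the statistical processing applied across the 27 buildings in the study.

```python
import numpy as np

def floor_response_spectrum(acc, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum of a floor acceleration history.

    Newmark average-acceleration integration of a unit-mass SDOF oscillator,
        u'' + 2*zeta*w*u' + w^2*u = -acc(t),
    returning Sa(T) = w^2 * max|u| for each period T. Illustrative sketch only.
    """
    gamma, beta = 0.5, 0.25
    sa = np.zeros(len(periods))
    for j, T in enumerate(periods):
        w = 2.0 * np.pi / T
        c, k = 2.0 * zeta * w, w**2                       # unit mass
        keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt**2)
        u = v = 0.0
        a = -acc[0]                                       # initial oscillator acceleration
        umax = 0.0
        for i in range(len(acc) - 1):
            dp = -(acc[i + 1] - acc[i])
            dpe = dp + (1.0 / (beta * dt) + gamma * c / beta) * v \
                     + (1.0 / (2 * beta) + dt * c * (gamma / (2 * beta) - 1.0)) * a
            du = dpe / keff
            dv = gamma / (beta * dt) * du - gamma / beta * v \
                 + dt * (1.0 - gamma / (2 * beta)) * a
            da = du / (beta * dt**2) - v / (beta * dt) - a / (2 * beta)
            u, v, a = u + du, v + dv, a + da
            umax = max(umax, abs(u))
        sa[j] = w**2 * umax
    return sa

dt = 0.01
acc = np.random.default_rng(3).normal(0.0, 0.5, 2000)    # placeholder floor record (m/s^2)
periods = np.linspace(0.05, 3.0, 60)
print(floor_response_spectrum(acc, dt, periods, zeta=0.05)[:5].round(3))
```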

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

Procedia PDF Downloads 236
158 Evaluation of Antidiabetic Activity of a Combination Extract of Nigella Sativa & Cinnamomum Cassia in Streptozotocin Induced Type-I Diabetic Rats

Authors: Ginpreet Kaur, Mohammad Yasir Usmani, Mohammed Kamil Khan

Abstract:

Diabetes mellitus is a disease with a high global burden and results in significant morbidity and mortality. In India, the number of people suffering from diabetes is expected to rise from 19 to 57 million by 2025. At present, interest in herbal remedies is growing as a way to reduce the side effects associated with conventional dosage forms, such as oral hypoglycemic agents and insulin, used in the treatment of diabetes mellitus. Our aim was to investigate the antidiabetic activity of a combination extract of N. sativa and C. cassia in streptozotocin-induced type-I diabetic rats. The present study was therefore undertaken to screen postprandial glucose excursion potential through α-glucosidase inhibitory activity (in vitro) and the effect of the combination extract of N. sativa and C. cassia in streptozotocin-induced type-I diabetic rats (in vivo). In addition, changes in body weight, plasma glucose, lipid profile, and kidney profile were determined. The IC50 values for both the extract and acarbose were calculated by the extrapolation method. The combination extract of N. sativa and C. cassia at different dosages (100 and 200 mg/kg orally) and metformin (50 mg/kg orally) as the standard drug were administered for 28 days, after which biochemical estimations, body weights, and an oral glucose tolerance test (OGTT) were carried out. Histopathological studies were also performed on kidney and pancreatic tissue. In vitro, the combination extract showed a stronger inhibitory effect than the individual extracts. The results reveal that the combination extract of N. sativa and C. cassia produced a significant decrease in plasma glucose (p < 0.0001), total cholesterol, and LDL levels when compared with the STZ group. The decreasing levels of BUN and creatinine revealed the protection afforded by the N. sativa and C. cassia extracts against nephropathy associated with diabetes. The combination of N. sativa and C. cassia significantly improved glucose tolerance to exogenously administered glucose (2 g/kg) at the 60-, 90-, and 120-minute intervals of the OGTT in high-dose streptozotocin-induced diabetic rats compared with the untreated control group. Histopathological studies showed that treatment with the N. sativa and C. cassia extracts, alone and in combination, restored pancreatic tissue integrity and was able to regenerate the STZ-damaged pancreatic β cells. Thus, the present study reveals that the combination of N. sativa and C. cassia extracts has significant α-glucosidase inhibitory activity and great potential as a new source for diabetes treatment.
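Since the abstract mentions obtaining IC50 values by extrapolation from the inhibition data, here is one minimal way such a value can be interpolated from a dose-response series. The concentrations and inhibition percentages below are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical % alpha-glucosidase inhibition at increasing extract concentrations (µg/mL).
conc = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
inhibition = np.array([18.0, 31.0, 47.0, 66.0, 82.0])

# IC50 via linear interpolation on the log-concentration scale.
ic50 = 10 ** np.interp(50.0, inhibition, np.log10(conc))
print(round(float(ic50), 1))   # concentration giving 50% inhibition, ~111.6 for these toy data
```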

Keywords: lipid levels, OGTT, diabetes, herbs, glucosidase

Procedia PDF Downloads 430