Search results for: dual-energy computed tomography
33 The Efficacy of Government Strategies to Control COVID-19: Evidence from 22 High COVID Fatality Rated Countries
Authors: Imalka Wasana Rathnayaka, Rasheda Khanam, Mohammad Mafizur Rahman
Abstract:
The COVID-19 pandemic has created unprecedented challenges to both the health and economic states of countries around the world. This study aims to evaluate the effectiveness of governments' decisions to mitigate the risks of COVID-19 by proposing policy directions to reduce its magnitude. The study is motivated by the ongoing coronavirus outbreaks and the comprehensive policy responses taken by countries to mitigate the spread of COVID-19 and reduce death rates. It contributes to filling the knowledge gap by examining the long-term efficacy of governments' extensive plans. The study employs a panel autoregressive distributed lag (ARDL) framework. The panels incorporate both a significant number of variables and fortnightly observations from 22 countries. The dependent variables adopted in this study are the fortnightly death rates and the rates of the spread of COVID-19. Mortality rate and infection rate data were computed based on the number of deaths and the number of new cases per 10,000 people. The explanatory variables are fortnightly values of indexes taken to investigate the efficacy of government interventions to control COVID-19: the overall government response index, stringency index, containment and health index, and economic support index. The study relies on the Oxford COVID-19 Government Response Tracker (OxCGRT). Following the ARDL procedure, the study employs (i) unit root tests to check stationarity, (ii) panel cointegration tests, and (iii) pooled mean group (PMG) ARDL estimation. The study shows that the COVID-19 pandemic forced immediate responses from policymakers across the world to mitigate the risks of COVID-19. Of the four types of government policy interventions, (i) stringency and (ii) economic support have been most effective: facilitating stringency and financial measures has resulted in a reduction in infection and fatality rates, while (iii) overall government responses are positively associated with deaths but negatively with infected cases. Even though this positive relationship is unexpected to some extent in the long run, breaches of government social-distancing norms by the public in some countries and population age demographics are possible reasons for that result. (iv) Containment and healthcare improvements reduce death rates but increase infection rates, although the effect has been smaller (in absolute value). The model implies that implementation of containment health practices without tracing and individual-level quarantine does not work well; containment health measures must be applied together with targeted, aggressive, and rapid containment to extensively reduce the number of people infected with COVID-19. Furthermore, the results demonstrate that economic support for income and debt relief has been key to suppressing COVID-19 infection and fatality rates.
Keywords: COVID-19, infection rate, death rate, government response, panel data
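Below is a minimal illustrative sketch, not the authors' code, of the two data steps described above: computing fortnightly rates per 10,000 people and fitting an ARDL model for a single country with the statsmodels ARDL class. The file name, column names, and lag orders are assumptions, and the full panel PMG estimation is beyond this sketch.

```python
# Sketch: fortnightly rates per 10,000 plus a single-country ARDL fit.
import pandas as pd
from statsmodels.tsa.ardl import ARDL

df = pd.read_csv("oxcgrt_fortnightly.csv")          # hypothetical input file
df["death_rate"] = df["new_deaths"] / df["population"] * 10_000
df["infection_rate"] = df["new_cases"] / df["population"] * 10_000

country = df[df["country"] == "Italy"].set_index("fortnight")
y = country["death_rate"]
X = country[["stringency_index", "economic_support_index"]]

# ARDL(p, q): p lags of the dependent variable, q lags of each regressor.
model = ARDL(y, lags=2, exog=X, order=2, trend="c")
res = model.fit()
print(res.summary())
```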
Procedia PDF Downloads 76
32 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets
Authors: Yih-Wenn Laih
Abstract:
In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. The realization of co-movement helps investors to identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 to the 120 level. The policies, known as "Abenomics," are to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, affected by the depreciation of the yen, between the stock market of Japan and five major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we measure the co-movement of stock markets between Japan and each of the five Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of the paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by the dynamic programming method. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collect empirical data for six stock indexes from the Taiwan Economic Journal. The data are sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns indicate fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall's τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman's ρ reveals that the strength of co-movement with Japan in decreasing order is Korea, China, Taiwan, Singapore, and Hong Kong. We explore the effects of "Abenomics" on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula, and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan, the strengths of China and Taiwan exceed that of Singapore, and the Hong Kong market has the weakest co-movement relation with Japan.
Keywords: co-movement, depreciation of Yen, rank correlation, stock market
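The following is a hedged sketch of the first stages of the pipeline described above, assuming the Python `arch` package for the skewed-t GARCH fit; the SJC-copula and sequence-alignment steps are omitted, and the return series here are synthetic stand-ins for the real index data.

```python
# Sketch: skewed-t GARCH(1,1) standardized residuals + rank correlations.
import numpy as np
from arch import arch_model
from scipy.stats import kendalltau, spearmanr

def std_residuals(returns):
    """Fit a GARCH(1,1) with skewed Student-t errors; return z_t = e_t / sigma_t."""
    am = arch_model(returns, vol="GARCH", p=1, q=1, dist="skewt")
    res = am.fit(disp="off")
    return res.resid / res.conditional_volatility

rng = np.random.default_rng(0)                  # stand-in for real daily % returns
japan = rng.standard_t(5, 650)
korea = 0.6 * japan + 0.8 * rng.standard_t(5, 650)

z_jp, z_kr = std_residuals(japan), std_residuals(korea)
tau, _ = kendalltau(z_jp, z_kr)
rho, _ = spearmanr(z_jp, z_kr)
print(f"Kendall tau = {tau:.3f}, Spearman rho = {rho:.3f}")
```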
Procedia PDF Downloads 231
31 Identifying Common Sports Injuries in Karate and Presenting a Model for Preventing Identified Injuries (A Case Study of East Azerbaijan, Iranian Karatekas)
Authors: Nadia Zahra Karimi Khiavi, Amir Ghiami Rad
Abstract:
Due to the high likelihood of injuries in karate, karatekas' injuries warrant special attention. This study explores the prevalence of karate injuries in East Azerbaijan, Iran and provides a model for karatekas to use in the prevention of such injuries. The study employs a descriptive approach. Male and female participants with a brown belt or above in either control or non-control styles in East Azerbaijan province are included in the study's statistical population. A statistical sample size of 100 people was computed for the analysis tool employed (SmartPLS), and the samples were drawn at random from all clubs in the province with the assistance of the Karate Board in order to develop a model for the prevention of karate injuries. Information was gathered by means of a survey that made use of the Standard Questionnaire for Australian Sports Medicine Injury Reports. The information is presented in the form of tables and samples, and descriptive statistics were used to organise and summarise the data. Independent t-tests between control and non-control groups were conducted using SPSS version 20, and structural equation modelling (PLS) was utilised for injury prevention modelling at a 0.05 level of significance. The results showed that the most common areas of injury among the control kumite practitioners were the upper limbs (46.15%), lower limbs (34.61%), trunk (15.38%), and head and neck (3.84%); the most common types of injuries were broken bones (34.61%), sprain or strain (23.13%), bruising and contusions (23.13%), trauma to the face and mouth (11.53%), and damage to the nerves (69.69%). Practitioners of uncontrolled kumite are most likely to sustain injuries to the head and neck (33.33%), trunk (25.92%), upper limbs (22.22%), and lower limbs (18.51%); their most common injuries were to the mouth and face (33.33%), followed by dislocations and fractures (22.22%), sprain or strain (22.22%), bruising and contusions (18.51%), and nerves (70%). Among those who practise control kata, injuries to the upper limb account for 45.83%, the lower limb for 41.67%, the trunk for 8.33%, and the head and neck for 4.17%; the most common types of injuries are dislocations and fractures (41.66%), sprain or strain (29.16%), bruising and contusions (16.66%), and nerves (12.5%). Injuries to the face and mouth were not reported among those practising control kata. By far, the most common sites of injury for those practising uncontrolled kata were the lower limb (43.74%), upper limb (39.13%), trunk (13.14%), and head and neck (4.34%); the most common types of injuries were dislocations and fractures (34.82%), sprain or strain (26.08%), bruising and contusions (21.73%), mouth and face (13.14%), and nerves. Teaching the concepts of cooling and warming (0.591) and enhancing the degree of safety in the sports environment (0.413) were shown to play the most essential roles in reducing sports injuries among karate practitioners of control and uncontrolled styles, respectively. Other influential factors were the use of common sports gear (0.390), modification of training programme principles (0.341), formulation of an effective diet plan for athletes (0.284), and evaluation of athletes' physical anatomy, physiology, chemistry, and physics (0.247).
Keywords: sports injuries, karate, prevention, cooling and warming
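As a minimal illustration of the significance testing mentioned above, here is a sketch of an independent-samples t-test at the 0.05 level; the injury counts are hypothetical, not the study's data.

```python
# Sketch: independent-samples t-test between control and non-control groups.
from scipy.stats import ttest_ind

control_injuries = [2, 1, 3, 0, 2, 4, 1, 2, 3, 1]        # injuries per athlete
non_control_injuries = [3, 4, 2, 5, 3, 4, 2, 5, 4, 3]

t_stat, p_value = ttest_ind(control_injuries, non_control_injuries)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}; significant at 0.05: {p_value < 0.05}")
```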
Procedia PDF Downloads 101
30 Optical Vortex in Asymmetric Arcs of Rotating Intensity
Authors: Mona Mihailescu, Rebeca Tudor, Irina A. Paun, Cristian Kusko, Eugen I. Scarlat, Mihai Kusko
Abstract:
Specific intensity distributions in laser beams are required in many fields: optical communications, material processing, microscopy, and optical tweezers. In optical communications, the information embedded in specific beams and the superposition of multiple beams can be used to increase the capacity of communication channels, employing spatial modulation as an additional degree of freedom besides the already available polarization and wavelength multiplexing. In this regard, optical vortices are of interest due to their potential to carry independent data, which can be multiplexed at the transmitter and demultiplexed at the receiver. Combinations of these have also been studied in the literature: 1) the axial or perpendicular superposition of multiple optical vortices, or 2) combinations with other laser beam types (Bessel, Airy). Optical vortices, characterized by a stationary ring-shaped intensity and a rotating phase, are achieved using computer generated holograms (CGH) obtained by simulating the interference between a tilted plane wave and a wave passing through a helical phase object. Here, we propose a method to combine information through the reunion of two CGHs. One is obtained using a helical phase distribution, characterized by its topological charge, m. The other is obtained using a conical phase distribution, characterized by its radial factor, r0. Each CGH is obtained using a plane wave with a different tilt: km for the CGH generated from the helical phase object and kr for the one generated from the conical phase object. These reunions of two CGHs are calculated as phase optical elements, addressed on the liquid crystal display of a spatial light modulator, to optically process the incident beam for investigation of the diffracted intensity pattern in the far field. For the parallel reunion of two CGHs and high values of the ratio between km and kr, the bright ring in the first diffraction order, specific to optical vortices, changes into an asymmetric intensity pattern: a number of circle arcs. The two diffraction orders (+1 and -1) are asymmetrical relative to each other. In different planes along the optical axis, this asymmetric intensity pattern is observed to rotate around its centre: anticlockwise in the +1 diffraction order and clockwise in the -1 diffraction order. The relation between m and r0 controls the diameter of the circle arcs, and the ratio between km and kr controls the number of arcs. For the perpendicular reunion of the two CGHs and low values of the ratio between km and kr, the optical vortices are multiplied and focused in different planes, depending on the radial parameter. The first diffraction order contains information about both phase objects. It is incident on the phase masks placed at the receiver, computed using the opposite values of the topological charge or of the radial parameter and displayed successively. Overall, the proposed method is characterized in terms of its constructive parameters, given the possibility offered by the combination of different types of beams, which can be used in robust optical communications.
Keywords: asymmetrical diffraction orders, computer generated holograms, conical phase distribution, optical vortices, spatial light modulator
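A minimal numerical sketch of the CGH construction described above: interference of tilted plane waves with a helical phase (topological charge m) and a conical phase (radial factor r0), and their parallel reunion. Grid size and all parameter values are assumptions.

```python
# Sketch: helical-phase and conical-phase CGHs and their parallel reunion.
import numpy as np

N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
r, theta = np.hypot(X, Y), np.arctan2(Y, X)

m, r0 = 3, 40.0          # topological charge and conical (radial) factor
k_m, k_r = 80.0, 20.0    # tilts of the two reference plane waves

cgh_helical = 0.5 * (1 + np.cos(k_m * X - m * theta))   # vortex hologram
cgh_conical = 0.5 * (1 + np.cos(k_r * X - r0 * r))      # axicon-like hologram
cgh_union = 0.5 * (cgh_helical + cgh_conical)           # parallel reunion of the two CGHs
```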
Procedia PDF Downloads 311
29 Expanding Entrepreneurial Capabilities through Business Incubators: A Case Study of Idea Hub Nigeria
Authors: Kenechukwu Ikebuaku
Abstract:
Entrepreneurship has long been offered as the panacea for poor economic growth and high rates of unemployment. Business incubation is considered an effective means of enhancing entrepreneurial activities while engendering socio-economic development. The Information Technology Developers Entrepreneurship Accelerator (iDEA) is a software business incubation programme established by the Nigerian government as a means of boosting digital entrepreneurship activities and reducing unemployment in the country. This study assessed the contribution of iDEA Nigeria's entrepreneurship programmes towards enhancing the capabilities of its tenants. Using the capability approach and the sustainable livelihoods approach, the study analysed the iDEA programmes' contribution to the expansion of participants' entrepreneurial capabilities. Apart from identifying a set of entrepreneurial capabilities from both the literature and empirical analysis, the study went further to ascertain how iDEA incubation has helped to enhance those capabilities for its tenants. It also examined digital entrepreneurship as a valued functioning and as an intermediate functioning leading to other valuable functionings. Furthermore, the study examined gender as a conversion factor in digital entrepreneurship. Both qualitative and quantitative research methods were used, and key variables were measured. While the entire population was utilised to collect data for the quantitative research, purposive sampling was used to select respondents for semi-structured interviews in the qualitative research; 40 beneficiaries agreed to take part in the survey, and 10 respondents were interviewed. Responses collected from the administered questionnaires were subjected to statistical analysis using SPSS. The study developed indexes to measure the respondents' perception of how iDEA programmes have enhanced their entrepreneurial capabilities. The computed Capabilities Enhancement Perception Index (CEPI) indicated that the respondents believed that iDEA programmes enhanced their entrepreneurial capabilities. While access to power supply and reliable internet have the highest positive deviations around the mean, negotiation skills and access to customers/clients have the highest negative deviations. These findings were well supported by the qualitative analysis, in which the participants unequivocally narrated how the resources provided by iDEA aided their entrepreneurial endeavours. It was also found that iDEA programmes have a significant effect on the tenants' access to networking opportunities, both with other emerging entrepreneurs and with established entrepreneurs. While assessing gender as a conversion factor, it was discovered that there was very low female participation within the digital entrepreneurship ecosystem. The root cause of this gender disparity was found in unquestioned cultural beliefs and social norms which relegate women to a subservient position and household duties. The findings also showed that many of the entrepreneurs could be considered opportunity-based entrepreneurs rather than necessity entrepreneurs, and that digital entrepreneurship is a valued functioning for iDEA tenants. With regard to the challenges facing digital entrepreneurship in Nigeria, infrastructural and institutional inadequacies, lack of funding opportunities, and unfavourable government policies were considered inimical to entrepreneurial capabilities in the country.
Keywords: entrepreneurial capabilities, unemployment, business incubators, development
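A minimal sketch, with hypothetical item names and Likert scores, of how a perception index like the CEPI and its deviations around the mean might be computed; this is not the study's actual instrument.

```python
# Sketch: perception index from Likert items, with deviations around the mean.
import pandas as pd

likert = pd.DataFrame({                      # 1-5 agreement scores per respondent
    "power_supply":       [5, 5, 4, 5, 4],
    "reliable_internet":  [5, 4, 5, 4, 5],
    "negotiation_skills": [2, 3, 2, 2, 3],
    "access_to_clients":  [2, 2, 3, 2, 2],
})

cepi_items = (likert.mean() - 1) / 4         # rescale item means from 1-5 onto 0-1
cepi = cepi_items.mean()                     # overall index
print(cepi_items - cepi)                     # positive/negative deviations per item
```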
Procedia PDF Downloads 236
28 Localized Recharge Modeling of a Coastal Aquifer from a Dam Reservoir (Korba, Tunisia)
Authors: Nejmeddine Ouhichi, Fethi Lachaal, Radhouane Hamdi, Olivier Grunberger
Abstract:
Located in the Cap Bon peninsula (Tunisia), the Lebna dam was built in 1987 to counter the saltwater intrusion taking place in the coastal aquifer of Korba. The first intention was to reduce coastal groundwater over-pumping by supplying surface water to a large irrigation system. An unpredicted beneficial effect was recorded: the occurrence of a direct localized recharge to the coastal aquifer by leakage through the geological material of the southern bank of the lake. The hydrological balance of the reservoir gave an estimation of the annual leakage volume, but dynamic processes and a sound quantification of recharge inputs are still required to understand the localized effect of the recharge in terms of piezometry and quality. The present work focused on simulating the recharge process to confirm the hypothesis, establish a sound quantification of the water supply to the coastal aquifer, and extend it to multi-annual effects. A spatial frame of 30 km² was used for modeling. Intensive outcrop and geophysical surveys based on 68 electrical resistivity soundings were used to characterize the 3D geometry of the aquifer and the limit of the Plio-Quaternary geological material concerned by the underground flow paths. Permeabilities were determined using 17 pumping tests on wells and piezometers. Six seasonal piezometric surveys of 71 wells around the southern reservoir banks were performed during the 2019-2021 period. Eight monitoring boreholes with high-frequency (15 min) piezometric data were used to examine dynamic aspects. Model boundary conditions were specified using the geophysical interpretations coupled with the piezometric maps. The dam-groundwater flow model was built using the Visual MODFLOW software. Firstly, a steady-state calibration based on the first piezometric map of February 2019 was established to estimate the permanent flow related to the different reservoir levels. Secondly, piezometric data for the 2019-2021 period were used for transient-state calibration and to confirm the robustness of the model. Preliminary results confirmed the temporal link between the reservoir level and the localized recharge flow, with a strong threshold effect for levels below 16 m.a.s.l. The good agreement between the computed flow through recharge cells on the southern banks and the hydrological budget of the reservoir opens the path to future simulation scenarios of the dilution plume imposed by the localized recharge. The simulation results indicate a potential for storage of up to 17 mm/year in existing wells under gravity-feed conditions during level increases in the reservoir over the three years of operation. The Lebna dam groundwater flow model characterized a spatiotemporal relation between groundwater and surface water.
Keywords: leakage, MODFLOW, saltwater intrusion, surface water-groundwater interaction
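For illustration only, here is a simple Darcy-type estimate of bank leakage with the ~16 m.a.s.l. threshold reported above; this stands in for, and is far cruder than, the Visual MODFLOW model, and the hydraulic parameters are assumed values.

```python
# Sketch: threshold-controlled Darcy leakage through the southern bank.
def bank_leakage(level_masl, threshold=16.0, K=1e-5, area=5e4, path_length=500.0):
    """Leakage (m3/s) driven by the head above the threshold level: Q = K*A*dh/L."""
    head = max(0.0, level_masl - threshold)
    return K * area * head / path_length

for level in (14.0, 16.0, 18.0, 20.0):
    print(f"reservoir level {level:4.1f} m.a.s.l. -> leakage {bank_leakage(level):.4f} m3/s")
```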
Procedia PDF Downloads 138
27 Modern Detection and Description Methods for Natural Plants Recognition
Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert
Abstract:
The Earth, known as the "green planet", is a terrestrial planet and the fifth largest planet of the solar system. Plants do not have a constant and steady distribution around the world, and even the variation of plant species is not the same within one specific region. The presence of plants is not limited to one field like botany; they also appear in fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants. Their influence becomes even more manifest since no other living species can exist on Earth without plants, as they also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, and these roles affect environment and climate. Plants are the main components of agricultural activities, from which many countries benefit; therefore, plants have an impact on the political and economic situation and future of countries. Due to the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that contributes to other research and future studies. Moreover, plants can survive in different places and regions by means of adaptations, which are special factors that help them in hard life situations. Weather conditions are one of the parameters that affect plant life and existence in an area, and the recognition of plants under different weather conditions is a new window of research in the field. Only natural images are usable for considering weather conditions as new factors; thus, the result will be a generalized and useful system. In order to have a general system, the distance from the camera to the plants is considered as another factor, as is the change of light intensity in the environment during the day. Adding these factors poses a substantial challenge for building an accurate and robust system. The development of an efficient plant recognition system is therefore essential. One important component of a plant is the leaf, which can be used to implement automatic systems for plant recognition without any human interface and interaction. Due to the nature of the images used, a characteristic investigation of the plants is done: leaves are the first characteristic selected as trusted parts. Four different plant species are specified with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed by using the scale-invariant feature transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed. In addition to this comparison, the robustness and efficiency of the results in different conditions are investigated and explained.
Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT
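One possible OpenCV realization of the three detector/descriptor pipelines named above: plain SIFT, HARRIS-SIFT (Harris corners described by SIFT), and FAST-SIFT (FAST keypoints described by SIFT). The image path and detector parameters are assumptions, not the paper's settings.

```python
# Sketch: SIFT, HARRIS-SIFT, and FAST-SIFT feature pipelines with OpenCV.
import cv2

gray = cv2.imread("leaf.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()

# 1) SIFT detection + description
kp_sift, des_sift = sift.detectAndCompute(gray, None)

# 2) HARRIS-SIFT: Harris corners, then SIFT descriptors at those points
corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True)
kp_harris = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in corners]
kp_harris, des_harris = sift.compute(gray, kp_harris)

# 3) FAST-SIFT: FAST keypoints, then SIFT descriptors
fast = cv2.FastFeatureDetector_create(threshold=25)
kp_fast = fast.detect(gray, None)
kp_fast, des_fast = sift.compute(gray, kp_fast)
```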
Procedia PDF Downloads 276
26 A Shift in Approach from Cereal Based Diet to Dietary Diversity in India: A Case Study of Aligarh District
Authors: Abha Gupta, Deepak K. Mishra
Abstract:
The food security issue in India has centred on the availability and accessibility of cereals, which are regarded as the only food group needed to check hunger and improve nutrition. The significance of fruits, vegetables, meat, and other food products has been largely neglected, despite the fact that they provide essential nutrients to the body. There is a need to shift the emphasis from a cereal-based approach to a more diverse diet, so that the aim of achieving food security may change from just reducing hunger to overall health. This paper attempts to analyse how far dietary diversity has been achieved across different socio-economic groups in India. For this purpose, the present paper sets out to determine (a) the percentage share of different food groups in total food expenditure and consumption by background characteristics, (b) the source of and preference for all food items, and (c) the diversity of diet across socio-economic groups. A cross-sectional survey covering 304 households selected through proportional stratified random sampling was conducted in six villages of Aligarh district of Uttar Pradesh, India. Information on the amount of food consumed, the source of consumption, and expenditure on food (74 food items grouped into 10 major food groups) was collected with a recall period of seven days. Per capita per day food consumption/expenditure was calculated by dividing consumption/expenditure by household size and by seven (days). The food variety score was estimated by assigning a value of 0 to those food groups/items which had not been eaten and 1 to those which had been taken by households in the last seven days; the sum of all food group/item scores gave the food variety score. Diversity of diet was computed using the Herfindahl-Hirschman index. The findings show that cereals, milk, and roots and tubers contribute a major share of total consumption/expenditure. Consumption of these food groups varies across socio-economic groups, whereas fruit, vegetable, meat, and other food consumption remains low and uniform. The estimates of dietary diversity show a high concentration of diet due to the high consumption of cereals, milk, and root and tuber products, and dietary diversity varies slightly across background groups. Muslim, Scheduled Caste, small farmer, lower income class, food insecure, below poverty line, and labour families show a higher concentration of diet compared to their counterpart groups. These groups also show a lower mean number of food items consumed in a week due to economic constraints and the resulting lower access to more expensive food items. The results advocate a shift from a cereal-based diet to dietary diversity which includes not only cereals and milk products but also nutrition-rich food items such as fruits, vegetables, meat, and other products. Integrating a dietary diversity approach into the country's food security programmes would help to achieve nutrition security, as hidden hunger is widespread among the Indian population.
Keywords: dietary diversity, food security, India, socio-economic groups
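A minimal sketch of the two measures described above: the food variety score over the 7-day recall and a Herfindahl-Hirschman index over expenditure shares. The household data are hypothetical.

```python
# Sketch: food variety score and Herfindahl-Hirschman diet concentration index.
import numpy as np

# hypothetical 7-day expenditure per food group for one household (rupees)
expenditure = {"cereals": 420, "milk": 180, "roots_tubers": 90,
               "vegetables": 60, "fruits": 0, "meat": 0, "pulses": 50,
               "oils": 40, "sugar": 30, "other": 20}

variety_score = sum(1 for v in expenditure.values() if v > 0)
shares = np.array(list(expenditure.values()), dtype=float)
shares /= shares.sum()
hhi = np.sum(shares ** 2)            # 1/10 for an even diet ... 1 for a single food group
print(f"variety score = {variety_score}, HHI = {hhi:.3f}")
```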
Procedia PDF Downloads 340
25 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters with the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, the comparison between the performances of the SVM- and GMM-based classifications showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
Keywords: ECG classification, Gaussian mixture model, Neyman-Pearson approach, support vector machine
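A hedged scikit-learn sketch of the two detectors described above: an SVM tuned by grid search with 10-fold cross-validation, and a GMM whose log-likelihood is thresholded in a Neyman-Pearson fashion to trace ROC curves. The feature matrices here are synthetic stand-ins for the ST/T-derived features.

```python
# Sketch: SVM with grid-searched kernels + GMM likelihood-threshold detector.
import numpy as np
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 6)), rng.normal(1.5, 1, (200, 6))])
y = np.r_[np.zeros(200), np.ones(200)]            # 0 = non-ischemic, 1 = ischemic
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# SVM with linear and RBF kernels, hyperparameters tuned by 10-fold CV
grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.1]}
svm = GridSearchCV(SVC(), grid, cv=10).fit(Xtr, ytr)

# GMM fitted on ischemic-state features only; segments scored by log-likelihood
gmm = GaussianMixture(n_components=3, random_state=0).fit(Xtr[ytr == 1])
scores = gmm.score_samples(Xte)                   # per-segment log-likelihood
fpr, tpr, thr = roc_curve(yte, scores)            # ROC swept over threshold values
```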
Procedia PDF Downloads 162
24 Preliminary Study Investigating Trunk Muscle Fatigue and Cognitive Function in Event Riders during a Simulated Jumping Test
Authors: Alice Carter, Lucy Dumbell, Lorna Cameron, Victoria Lewis
Abstract:
The Olympic discipline of eventing is the triathlon of equestrian sport, consisting of dressage, cross-country, and show jumping. Falls on the cross-country phase are common and can be serious, even causing death to the rider. Research identifies an increased risk of a fall with an increasing number of obstacles and for jumping efforts later in the course, suggesting fatigue may be a contributing factor. Advice based on anecdotal evidence suggests riders undertake strength and conditioning programmes to improve their 'core', thus improving their ability to maintain and control their riding position; there is, however, little empirical evidence to support this advice. Therefore, the aim of this study was to investigate trunk muscle fatigue and cognitive function during a simulated jumping test. Eight adult riders rode a Racewood eventing simulator for 10 minutes over a continuous jumping programme. The sEMG activity of six trunk muscles was measured bilaterally every minute, and normalised root mean squares (RMS) and median frequencies (MDF) were computed from the EMG power spectra. Visual analogue scales (VAS) measuring fatigue and pain levels and cognitive function 'tapping' tests were administered before and after the riding test. Average MDF values for all muscles differed significantly between the sampled minutes (p = 0.017); however, a consistent decrease from minute 1 to minute 9 was not found, suggesting the trunk muscles fatigued and then recovered as other muscle groups important in maintaining the riding position during dynamic movement compensated. Differences between the MDF and RMS of different muscles were highly significant (H = 213.01, DF = 5, p < 0.001), supporting previous anecdotal evidence that different trunk muscles carry out different postural roles during riding. RMS values did not differ significantly between the sampled minutes or between riders, suggesting the riding test produced a consistent and repeatable effect on the trunk muscles. MDF values differed significantly between riders (H = 50.8, DF = 5, p < 0.001), suggesting individuals may experience localised muscular fatigue from the same test differently, and that other parameters of physical fitness should be investigated before drawing conclusions. The lumbar muscles were shown to be important in maintaining the position; physical training programmes should therefore focus on these areas. No significant differences were found between pre- and post-test VAS pain and fatigue scores or cognitive function test scores, suggesting the riding test was not significantly fatiguing for participants. However, a near-significant correlation was found between riding test time and VAS pain score (p = 0.06), suggesting somatic pain may be a limiting factor for performance. No other correlations were found between riding test time and VAS pain and fatigue, and a larger sample needs to be tested to improve the statistical analysis. The findings suggest the simulator riding test was not sufficient to provoke fatigue in the riders; however, foundations have been laid for future studies to apply these methodologies in realistic eventing settings.
Keywords: eventing, fatigue, horse-rider, surface EMG, trunk muscles
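A minimal sketch of the spectral quantities described above: the median frequency (MDF) and RMS of one EMG window, with the spectrum estimated by Welch's method. The sampling rate and the surrogate signal are assumptions.

```python
# Sketch: median frequency and RMS of an sEMG window via Welch's method.
import numpy as np
from scipy.signal import welch

def emg_mdf_rms(emg, fs=1000):
    """Median frequency (Hz) and RMS amplitude of one EMG window."""
    f, pxx = welch(emg, fs=fs, nperseg=1024)
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2)]   # frequency splitting total power in half
    rms = np.sqrt(np.mean(emg ** 2))
    return mdf, rms

rng = np.random.default_rng(1)
window = rng.normal(0, 0.1, 60_000)              # one minute of surrogate EMG at 1 kHz
print(emg_mdf_rms(window))
```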
Procedia PDF Downloads 191
23 Knowledge and Attitude Towards Strabismus Among Adult Residents in Woreta Town, Northwest Ethiopia: A Community-Based Study
Authors: Henok Biruk Alemayehu, Kalkidan Berhane Tsegaye, Fozia Seid Ali, Nebiyat Feleke Adimassu, Getasew Alemu Mersha
Abstract:
Background: Strabismus is a visual disorder in which the eyes are misaligned and point in different directions. Untreated strabismus can lead to amblyopia, loss of binocular vision, and social stigma due to its appearance. Since knowledge is pertinent for the early screening and prevention of strabismus, the main objective of this study was to assess knowledge of and attitudes toward strabismus in Woreta town, Northwest Ethiopia. Providing data in this area is important for planning health policies. Methods: A community-based cross-sectional study was done in Woreta town from April to May 2020. The sample size was determined using a single population proportion formula, taking a 50% proportion of good knowledge, a 95% confidence level, a 5% margin of error, and a 10% non-response rate; the final computed sample size was 424. All four kebeles were included in the study. There were 42,595 people in total, with 39,684 adults and 9,229 households. A sampling fraction 'k' was obtained by dividing the number of households by the calculated sample size of 424. Systematic random sampling with proportional allocation was used to select the participating households with a sampling fraction (k) of 21, i.e., every 21st household was included in the study. One individual was selected randomly from each household with more than one adult, using the lottery method, to obtain the final sample. The data were collected through face-to-face interviews with a pretested, semi-structured questionnaire, which was translated from English to Amharic and back to English to maintain consistency. Data were entered using EpiData version 3.1, then processed and analyzed with SPSS version 20. Descriptive and analytical statistics were employed to summarize the data, and a p-value of less than 0.05 was used to declare statistical significance. Result: A total of 401 individuals aged over 18 years participated, a response rate of 94.5%. Of those who responded, 56.6% were males, and 36.9% of all participants were illiterate. The proportion of people with poor knowledge of strabismus was 45.1%, and 53.9% of the respondents had a favorable attitude. Older age, a higher educational level, a history of eye examination, and a family history of strabismus were significantly associated with good knowledge of strabismus. A higher educational level, older age, and having heard about strabismus were significantly associated with a favorable attitude toward strabismus. Conclusion and recommendation: The proportions of good knowledge and favorable attitude towards strabismus were lower than previously reported in Gondar City, Northwest Ethiopia. There is a need to provide health education and promotion campaigns on strabismus to the community: what strabismus is, its possible treatments, and the need to bring children to an eye care center for early diagnosis and treatment. The study advocates that prospective research employ qualitative study designs and suggests exploring studies that investigate cause-effect relationships.
Keywords: strabismus, knowledge, attitude, Woreta
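A minimal sketch of the sample-size arithmetic described above: the single population proportion formula with p = 0.5, 95% confidence, and a 5% margin of error, plus the 10% non-response allowance and the sampling fraction k (rounding conventions may differ slightly from the study's).

```python
# Sketch: single population proportion sample size and sampling fraction.
import math

z, p, d = 1.96, 0.5, 0.05
n = (z ** 2) * p * (1 - p) / d ** 2          # = 384.16
n_final = math.ceil(n * 1.10)                # +10% non-response -> ~423-424
k = 9229 // n_final                          # households / sample size -> ~21
print(n_final, k)
```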
Procedia PDF Downloads 62
22 Characterization of Aluminosilicates and Verification of Their Impact on Quality of Ceramic Proppants Intended for Shale Gas Output
Authors: Joanna Szymanska, Paulina Wawulska-Marek, Jaroslaw Mizera
Abstract:
Nowadays, the rapid growth of global energy consumption and the uncontrolled depletion of natural resources have become a serious problem. Shale rocks are among the largest potential global basins containing hydrocarbons, trapped in the closed pores of the shale matrix. Regardless of the shales' origin, mining conditions are extremely unfavourable due to high reservoir pressure, great depths, increased clay mineral content, and the limited permeability (nanodarcy) of the rocks. Considering such geomechanical barriers, effective extraction of natural gas from shales with plastic zones demands effective operations. Currently, hydraulic fracturing is the most developed technique, based on the injection of pressurized fluid into a wellbore to initiate fracture propagation. However, a rapid drop of pressure after fluid suction to the ground induces fracture closure and conductivity reduction. To minimize this risk, proppants should be applied: solid granules transported with the hydraulic fluids to lodge inside the rock. Proppants act as a prop for the closing fracture, so that gas migration to the borehole remains effective. Quartz sands are commonly applied as proppants only at shallow deposits (USA), whereas ceramic proppants are designed to meet rigorous downhole conditions and intensify output. Ceramic granules stand out with higher mechanical strength, stability in strongly acidic environments, spherical shape, and homogeneity. The quality of ceramic proppants is conditioned by the selection of raw materials. The aim of this study was to obtain proppants from aluminosilicates (the kaolinite subgroup) and a mix of minerals with a high alumina content. These loamy minerals exhibit tubular and platy morphologies that improve mechanical properties and reduce specific weight. Moreover, they are distinguished by a well-developed surface area, high porosity, fine particle size, superb dispersion, and non-toxic properties - all crucial for consolidating the particles into spherical, crush-resistant granules in the mechanical granulation process. The aluminosilicates were mixed with water and a natural organic binder to improve liquid-bridge and pore formation between particles. Afterward, the green proppants were sintered at high temperatures. Evaluation of the minerals' utility was based on their particle size distribution (laser diffraction) and thermal stability (thermogravimetry). Scanning electron microscopy was used for morphology and shape identification, combined with specific surface area measurement (BET). The chemical composition was verified by energy-dispersive spectroscopy and X-ray fluorescence. Moreover, bulk density and specific weight were measured. Such comprehensive characterization of the loamy materials confirmed their favourable impact on proppant granulation. The sintered granules were analyzed by SEM to verify the surface topography and phase transitions after sintering. Pore distribution was identified by X-ray tomography; this method also enabled the simulation of proppant settlement in a fracture, while measurement of bulk density was essential to predict the amount needed to fill a well. The roundness coefficient was also evaluated, and the impact on the mining environment was identified by turbidity and solubility in acid - to indicate the risk of material decay in a well. The obtained outcomes confirmed the positive influence of the loamy minerals on ceramic proppant properties with respect to the strict norms. This research opens a path to the production of higher-quality proppants at reduced cost.
Keywords: aluminosilicates, ceramic proppants, mechanical granulation, shale gas
Procedia PDF Downloads 163
21 Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and Its Application for Infrastructure Development
Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas
Abstract:
One of the most frightening phenomena of nature is the occurrence of an earthquake, as it has terrible and disastrous effects. Many earthquakes occur every day worldwide, so there is a need for knowledge of the trends in earthquake occurrence worldwide. The recording and interpretation of data obtained from the establishment of the worldwide network of seismological stations made this possible. From the analysis of recorded earthquake data, the earthquake parameters and source parameters can be computed and earthquake catalogues can be prepared. These catalogues provide information on origin time, epicenter locations (in terms of latitude and longitude), focal depths, magnitudes, and other related details of the recorded earthquakes, and they are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time-consuming. A geographic information system (GIS) is a computer-based system designed to store, analyze, and display geographic information. The implementation of integrated GIS technology provides an approach that permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to interactively view results almost immediately. GIS technology provides a powerful tool for displaying outputs and permits users to see the graphical distribution of the impacts of different earthquake scenarios and assumptions. An endeavour has been made in the present study to compile the earthquake data for the whole world in Visual Basic on the ArcGIS platform so that it can be easily used for further analysis by earthquake engineers. The basic data on the time of occurrence, location, and size of earthquakes have been compiled for querying based on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region. The user interface also includes the seismic hazard information already worked out under the GSHAP programme: the seismic hazard, in terms of probability of exceedance in definite return periods, is provided for the world. The seismic zones of the Indian region are included in the user interface from IS 1893-2002, the code on earthquake-resistant design of buildings. City-wise satellite images have been inserted in the map, and based on actual data the following information can be extracted in real time:
• Analysis of soil parameters and their effects
• Microzonation information
• Seismic hazard and strong ground motion
• Soil liquefaction and its effects in the surrounding area
• Impacts of liquefaction on buildings and infrastructure
• Occurrence of future earthquakes and their effects on existing soil
• Propagation of ground vibration due to the occurrence of an earthquake
A GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All this information can be used for infrastructure development, i.e., multi-storey structures, irrigation dams and their components, hydropower, etc., in real time, for the present and the future.
Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development
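As a small illustration of the kind of recurrence analysis such a tool can offer (not part of the described Visual Basic/ArcGIS system), here is a sketch of the Gutenberg-Richter b-value via the maximum-likelihood (Aki) estimator; the catalogue and completeness magnitude are made up.

```python
# Sketch: Gutenberg-Richter b-value from a small earthquake catalogue.
import numpy as np

magnitudes = np.array([4.1, 4.3, 5.0, 4.6, 4.2, 5.4, 4.8, 4.1, 6.1, 4.5])
Mc = 4.0                                        # assumed completeness magnitude
m = magnitudes[magnitudes >= Mc]
b = np.log10(np.e) / (m.mean() - (Mc - 0.05))   # 0.05 = half the 0.1 magnitude bin
print(f"b-value = {b:.2f} from {m.size} events")
```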
Procedia PDF Downloads 316
20 Wealth-Based Inequalities in Child Health: A Micro-Level Analysis of Maharashtra State in India
Abstract:
The study examines the degree and magnitude of wealth-based inequalities in child health and their determinants in India. Despite making strides in economic growth, India has failed to secure a better nutritional status for all children. The country currently faces the double burden of malnutrition as well as the problems of overweight and obesity. Child malnutrition, obesity, and unsafe water and sanitation, among others, are identified as risk factors for non-communicable diseases (NCDs). Eliminating malnutrition in all its forms will catalyse improved health and economic outcomes. The assessment of the distributive dimension of child health across various segments of the population is essential for effective policy intervention. The study utilises the fourth round of the District Level Health Survey for 2012-13 to analyse inequalities among children in the age group 0-14 years in Maharashtra, a state in the western region of India with a population of 11.24 crores, constituting 9.3 percent of the total population of India. The study considers the extent of health inequality by state, district, sector, age group, and gender. The z-scores of four child health outcome variables are computed to assess the nutritional status of pre-school and school children using the WHO reference. Descriptive statistics, concentration curves, concentration indices, a correlation matrix, and logistic regression have been used to analyse the data. The results indicate that the magnitude of inequality is high in Maharashtra and that child health inequalities manifest primarily among the weaker sections of society. The concentration curves show a pro-poor inequality in child malnutrition measured by stunting, wasting, underweight, and anaemia, and a pro-rich inequality in overweight. The inequalities in anaemia are observably lower due to its widespread prevalence. Rural areas exhibit a higher incidence of malnutrition, but greater inequality is observed in urban areas. Overall, the wealth-based inequalities do not vary significantly between age groups, and there appears to be no gender discrimination at the state level. Further, rural-urban differentials by gender show that boys living in rural areas and girls living in urban areas experience higher disparities in health. The relative distribution of undernutrition across districts in Maharashtra reveals that malnutrition is rampant and that considerable heterogeneity exists. A negative correlation is established between malnutrition prevalence and human development indicators. The findings of the logistic regression analysis reveal that a lower economic status of the household is associated with a higher probability of being malnourished. The study recognises household wealth, parental education, child gender, and household size as factors significantly related to malnutrition. The results suggest that, among the supply-side variables, child-oriented government programmes might be beneficial in tackling the nutrition deficit. In order to bridge the health inequality gap, the government needs to target its schemes better and should expand the coverage of services.
Keywords: child health, inequality, malnutrition, obesity
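A minimal sketch of a wealth-related concentration index for a binary outcome such as stunting, using the standard covariance formula CI = 2 * cov(h, r) / mean(h) with fractional wealth ranks; the data here are simulated.

```python
# Sketch: concentration index of stunting over the wealth distribution.
import numpy as np

rng = np.random.default_rng(0)
wealth = rng.lognormal(0, 1, 1000)              # household wealth scores
stunted = (rng.random(1000) <
           np.clip(0.5 - 0.1 * np.log(wealth), 0, 1)).astype(float)

rank = (np.argsort(np.argsort(wealth)) + 0.5) / wealth.size   # fractional rank
ci = 2 * np.cov(stunted, rank)[0, 1] / stunted.mean()
print(f"CI = {ci:.3f}  (negative = concentrated among the poor)")
```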
Procedia PDF Downloads 146
19 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems
Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra
Abstract:
Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures, such that minor differences in defining the onset and end of the sprint could result in different FVP metric outcomes. Furthermore, in team sports there is a requirement for rapid analysis and feedback of results from multiple athletes; therefore, developing standardized and automated methods to improve the speed, efficiency, and reliability of this process is warranted. Thus, the purpose of this study was to compare different methods of sprint end detection for the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and inter-trial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2 x 40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10 Hz Catapult unit (Vector S7, Catapult Innovations) inserted in a vest pouch between the scapulae. All data were analyzed following common procedures. The variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration constant τ, in addition to horizontal relative force (F₀), velocity at zero force (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time after peak velocity dropped by 0.4 m/s; 3. the time after peak velocity dropped by 0.6 m/s; and 4. the time when the integrated distance from the GPS/GNSS signal reached 40 m. The goodness of fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling of the sprint data from the GPS/GNSS system. Inter-trial reliability (from two trials) was assessed utilizing intraclass correlation coefficients (ICC). For goodness of fit, the end detection technique that used the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the -0.4 and -0.6 m/s velocity decays, while the 40 m end had the highest RSS values. For inter-trial reliability, the end of sprint detection techniques defined as the time at (method 1) or shortly after (methods 2 and 3) when MSS was achieved had very large to near perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sports scientists should implement end of sprint detection either when peak velocity is determined or shortly after, to improve goodness of fit and achieve reliable between-trial FVP profile metrics. Nevertheless, more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into usual training, which shows promise for in-field sprint monitoring with this technology.
Keywords: automated, biomechanics, team-sports, sprint
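A minimal sketch of the mono-exponential sprint model underlying the analysis above, v(t) = MSS(1 - exp(-t/τ)), fitted by least squares, with the relative FVP metrics derived from it (air drag ignored); the 10 Hz velocity trace is simulated.

```python
# Sketch: fit v(t) = MSS * (1 - exp(-t/tau)) and derive relative FVP metrics.
import numpy as np
from scipy.optimize import curve_fit

def sprint_speed(t, mss, tau):
    return mss * (1.0 - np.exp(-t / tau))

t = np.arange(0, 6, 0.1)                        # 10 Hz time stamps (s)
v = sprint_speed(t, 8.5, 1.1) + np.random.default_rng(0).normal(0, 0.15, t.size)

(mss, tau), _ = curve_fit(sprint_speed, t, v, p0=(8.0, 1.0))
f0_rel = mss / tau                              # relative F0 (N/kg), drag neglected
v0, pmax_rel = mss, (mss / tau) * mss / 4.0     # Pmax = F0 * V0 / 4
print(f"MSS={mss:.2f} m/s, tau={tau:.2f} s, F0={f0_rel:.2f} N/kg, Pmax={pmax_rel:.2f} W/kg")
```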
Procedia PDF Downloads 119
18 Climate Indices: A Key Element for Climate Change Adaptation and Ecosystem Forecasting - A Case Study for Alberta, Canada
Authors: Stefan W. Kienzle
Abstract:
The increasing number of occurrences of extreme weather and climate events has significant impacts on society and is the cause of continued and increasing loss of human and animal lives, loss of or damage to property (houses, cars), and the associated stresses on the public in coping with a changing climate. A climate index breaks a daily climate time series down into meaningful derivatives, such as the annual number of frost days. Climate indices allow for the spatially consistent analysis of a wide range of climate-dependent variables, which enables the quantification and mapping of historical and future climate change across regions. As trends in phenomena such as the length of the growing season differ between hydro-climatological regions, mapping needs to be carried out at a high spatial resolution, such as the 10 km by 10 km Canadian Climate Grid, which has interpolated daily values from 1950 to 2017 for minimum and maximum temperature and precipitation. Climate indices form the basis for the analysis and comparison of means, extremes, and trends, the quantification of changes, and their respective confidence levels. A total of 39 temperature indices and 16 precipitation indices were computed for the period 1951 to 2017 for the Province of Alberta. Temperature indices include the annual number of days with temperatures above or below certain thresholds (0, ±10, ±20, +25, +30 °C), frost days and the timing of frost days, freeze-thaw days, growing or degree days, and energy demands for air conditioning and heating. Precipitation indices include daily and accumulated 3- and 5-day extremes, days with precipitation, periods of days without precipitation, snow, and potential evapotranspiration. The rank-based nonparametric Mann-Kendall statistical test was used to determine the existence and significance levels of all associated trends, and the slope of the trends was determined using the non-parametric Sen's slope test. A Google mapping interface was developed to create the website albertaclimaterecords.com, from which each of the 55 climate indices can be queried for any of the 6,833 grid cells that make up Alberta. In addition to the climate indices, climate normals were calculated and mapped for four historical 30-year periods and one future period (1951-1980, 1961-1990, 1971-2000, 1981-2017, 2041-2070). While winters have warmed since the 1950s by between 4-5°C in the south and 6-7°C in the north, summers show the weakest warming over the same period, ranging from about 0.5-1.5°C. New agricultural opportunities exist in central regions where the number of heat units and growing degree days is increasing and the number of frost days is decreasing. While the number of days below -20°C has roughly halved across Alberta, the growing season has expanded by between two and five weeks since the 1950s. Interestingly, the number of days with heat waves and with cold spells has increased between two- and four-fold over the same period. This research demonstrates the enormous potential of using climate indices at the best regional spatial resolution possible to enable society to understand the historical and future climate changes of its region.
Keywords: climate change, climate indices, habitat risk, regional, mapping, extremes
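A minimal sketch of the trend statistics named above: a Mann-Kendall test (normal approximation, no tie correction) and Sen's slope via SciPy's Theil-Sen estimator, applied to a simulated annual frost-days series.

```python
# Sketch: Mann-Kendall trend test and Sen's slope for an annual climate index.
import numpy as np
from scipy.stats import norm, theilslopes

def mann_kendall(y):
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18            # no-ties approximation
    z = (s - np.sign(s)) / np.sqrt(var_s)
    return s, 2 * (1 - norm.cdf(abs(z)))              # two-sided p-value

years = np.arange(1951, 2018)
frost_days = 200 - 0.3 * (years - 1951) + np.random.default_rng(2).normal(0, 8, years.size)

s, p = mann_kendall(frost_days)
slope = theilslopes(frost_days, years)[0]             # Sen's slope (days/year)
print(f"S={s}, p={p:.4f}, Sen slope={slope:.2f} days/yr")
```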
Procedia PDF Downloads 92
17 Numerical Simulation of the Production of Ceramic Pigments Using Microwave Radiation: An Energy Efficiency Study Towards the Decarbonization of the Pigment Sector
Authors: Pedro A. V. Ramos, Duarte M. S. Albuquerque, José C. F. Pereira
Abstract:
Global warming mitigation is one of the main challenges of this century, requiring the net balance of greenhouse gas (GHG) emissions to be null or negative by 2050. Industry electrification is one of the main paths to achieving carbon neutrality within the goals of the Paris Agreement. Microwave heating is becoming a popular industrial heating mechanism due to the absence of direct GHG emissions as well as its rapid, volumetric, and efficient heating. In the present study, a mathematical model is used to simulate the production, using microwave heating, of two ceramic pigments at high temperatures (above 1200 degrees Celsius). The two pigments studied were the yellow (Pr, Zr)SiO₂ and the brown (Ti, Sb, Cr)O₂. The chemical conversion of reactants into products was included in the model by using the kinetic triplet obtained with the model-fitting method and experimental data available in the literature. The coupling between the electromagnetic, thermal, and chemical interfaces was also included. The simulations were computed in COMSOL Multiphysics. The geometry includes a moving plunger to allow for cavity impedance matching and thus maximize the electromagnetic efficiency. To accomplish this goal, a MATLAB controller was developed to automatically search for the position of the moving plunger that guarantees maximum efficiency. The power is automatically and permanently adjusted during the transient simulation to impose a stationary regime and total conversion, the two requisites of every converged solution. Both 2D and 3D geometries were used, and a parametric study regarding the axial bed velocity and the heat transfer coefficient at the boundaries was performed. Moreover, a verification and validation study was carried out by comparing the conversion profiles obtained numerically with the experimental data available in the literature; the numerical uncertainty was also estimated to attest to the results' reliability. The results show that the model-fitting method employed in this work is a suitable tool to predict the chemical conversion of reactants into pigment, showing excellent agreement between the numerical results and the experimental data. Moreover, it was demonstrated that higher velocities lead to higher thermal efficiencies and thus lower energy consumption during the process. This work concludes that the electromagnetic heating of materials having a high loss tangent and low thermal conductivity, like ceramic materials, may be a challenge due to the presence of hot spots, which may jeopardize the product quality or even the experimental apparatus. The MATLAB controller increased the electromagnetic efficiency by 25%, and a global efficiency of 54% was obtained for the titanate brown pigment. This work shows that electromagnetic heating will be a key technology in the decarbonization of the ceramic sector, as reductions of up to 98% in specific GHG emissions were obtained when compared to the conventional process. Furthermore, numerical simulation appears to be a suitable technique for use in the design and optimization of microwave applicators, showing high agreement with experimental data.
Keywords: automatic impedance matching, ceramic pigments, efficiency maximization, high-temperature microwave heating, input power control, numerical simulation
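A minimal sketch of the impedance-matching control loop described above: a bounded scalar search for the plunger position that maximizes electromagnetic efficiency. The `solve_efficiency` function is a made-up surrogate for a call to the coupled COMSOL model, not the authors' MATLAB controller.

```python
# Sketch: search the plunger position that maximizes electromagnetic efficiency.
import numpy as np
from scipy.optimize import minimize_scalar

def solve_efficiency(plunger_mm):
    """Surrogate for the full EM solve: efficiency peaks near 42 mm (made up)."""
    return np.exp(-((plunger_mm - 42.0) / 6.0) ** 2)

res = minimize_scalar(lambda x: -solve_efficiency(x),
                      bounds=(0.0, 100.0), method="bounded")
print(f"matched plunger position ~ {res.x:.1f} mm, "
      f"efficiency ~ {solve_efficiency(res.x):.3f}")
```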
Procedia PDF Downloads 13816 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing
Authors: Ahmed Elaksher, Islam Omar
Abstract:
Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs, deployed on most Earth observation satellites (EOSs). In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE camera carried by the Mars Reconnaissance Orbiter (MRO) and the HRSC carried by Mars Express (MEX) are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy systems with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within a linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time at a certain epoch measured from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the Planetary Data System (PDS). The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to those of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is far less cost- and effort-intensive. Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition
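To make the time-dependent collinearity condition concrete, the Python sketch below evaluates polynomial exterior-orientation trajectories and projects a ground point into line coordinates. The second-order polynomial model, coefficient layout, and focal length are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega) -- standard photogrammetric angles."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def exterior_orientation(t, coeffs):
    """Evaluate the six exterior orientation parameters at epoch t from
    per-parameter polynomial coefficients (highest power first)."""
    return {name: np.polyval(c, t) for name, c in coeffs.items()}

def project(ground_pt, t, coeffs, f):
    """Collinearity projection of a ground point at epoch t. The along-track
    coordinate x is ~0 when t is the true imaging epoch of the line."""
    eo = exterior_orientation(t, coeffs)
    R = rotation_matrix(eo["omega"], eo["phi"], eo["kappa"])
    d = R.T @ (np.asarray(ground_pt, float) -
               np.array([eo["X"], eo["Y"], eo["Z"]]))
    return -f * d[0] / d[2], -f * d[1] / d[2]  # (along-track, cross-track)

# Illustrative second-order trajectories (positions in m, angles in rad)
coeffs = {"X": [0.0, 7000.0, 0.0], "Y": [0.0, 0.0, 0.0],
          "Z": [0.0, 0.0, 500000.0],
          "omega": [0.0, 0.0, 0.0], "phi": [0.0, 0.0, 0.0], "kappa": [0.0, 0.0, 0.0]}
print(project([3500.0, 100.0, 0.0], t=0.5, coeffs=coeffs, f=0.7))
```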
Procedia PDF Downloads 6315 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context
Authors: Nicole Merkle, Stefan Zander
Abstract:
Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) for addressing different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between a set of states. However, the AAL domain currently lacks approaches that address the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user's context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists of device, service, and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction. Keywords: ambient intelligence, machine learning, semantic web, software agents
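One simple way to realize the probability estimation sketched above is a naive-Bayes update of a rule's applicability from independent sensor observations. The Python fragment below is a minimal sketch under that independence assumption; the rule, sensor names, prior, and likelihood values are hypothetical.

```python
def rule_match_probability(prior, likelihoods, evidence):
    """Posterior probability that a rule applies, given independent boolean
    sensor observations (naive-Bayes style update)."""
    p_rule, p_not = prior, 1.0 - prior
    for sensor, observed in evidence.items():
        p_obs_rule, p_obs_not = likelihoods[sensor]
        p_rule *= p_obs_rule if observed else (1.0 - p_obs_rule)
        p_not *= p_obs_not if observed else (1.0 - p_obs_not)
    return p_rule / (p_rule + p_not)

# Hypothetical example: should a "raise ambient light" rule fire?
likelihoods = {                 # (P(obs | rule applies), P(obs | it does not))
    "low_lux": (0.90, 0.20),
    "user_reading": (0.75, 0.30),
}
evidence = {"low_lux": True, "user_reading": True}
print(rule_match_probability(0.4, likelihoods, evidence))  # ~0.88
```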
Procedia PDF Downloads 28114 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queueing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queueing-theory parameters to relate these transitions. It associates MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenario for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the rapidity of the burst is high enough. This scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load with less business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics. Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
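As a minimal sketch of the Plan step of such a MAPE-K loop, the Python fragment below sizes the instance count from the observed arrival rate and per-instance service rate (the queueing-theory utilization bound) and enforces a cooldown between scaling actions. The headroom factor and parameter names are illustrative assumptions, not the paper's calibrated model.

```python
import math

def target_instances(arrival_rate, per_instance_rate, headroom=0.8):
    """Instances needed to keep utilization rho = lambda / (n * mu) below
    the headroom factor, so the service stays away from saturation."""
    return max(1, math.ceil(arrival_rate / (per_instance_rate * headroom)))

class ReactiveScaler:
    """Plan step of a MAPE-K loop with a cooldown between scale actions."""
    def __init__(self, per_instance_rate, cooldown_s, sample_period_s):
        self.mu = per_instance_rate
        self.cooldown, self.period = cooldown_s, sample_period_s
        self.since_last_action = cooldown_s  # allow an immediate first action

    def plan(self, observed_arrival_rate, current_instances):
        self.since_last_action += self.period
        desired = target_instances(observed_arrival_rate, self.mu)
        if desired != current_instances and self.since_last_action >= self.cooldown:
            self.since_last_action = 0.0
            return desired
        return current_instances

scaler = ReactiveScaler(per_instance_rate=10.0, cooldown_s=60.0, sample_period_s=15.0)
print(scaler.plan(observed_arrival_rate=180.0, current_instances=10))  # -> 23
```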
Procedia PDF Downloads 9313 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations for both research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, a radical improvement of the channel bandwidth and, thus, of the decoding accuracy is only possible with invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, the effective analysis of which requires machine learning methods able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out, during which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track the finger movement trajectory characterized by the accelerometer signal. This process was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach was able to achieve only r = 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding motion with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom, as well as for exploratory studies of the complex neural processes underlying movement execution. Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
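The causal/non-causal distinction and the evaluation metric can be made concrete with a short Python sketch: 1-second ECoG windows are paired with an accelerometer target sample taken either at the window's end (causal, usable for real-time control) or at its center (non-causal, letting post-movement proprioceptive signals in), and accuracy is the Pearson r. Array shapes and the windowing step are illustrative assumptions.

```python
import numpy as np

def decoding_accuracy(y_pred, y_true):
    """Pearson r between decoded and measured finger trajectory --
    the accuracy metric reported in the study."""
    return np.corrcoef(y_pred, y_true)[0, 1]

def windows(ecog, accel, fs, causal=True, step=1):
    """Pair 1-second ECoG segments (channels x fs samples) with one
    accelerometer sample each: the last sample of the window in causal
    mode, the central one in non-causal mode."""
    X, y = [], []
    for end in range(fs, ecog.shape[1], step):
        X.append(ecog[:, end - fs:end])
        y.append(accel[end - 1] if causal else accel[end - fs // 2])
    return np.stack(X), np.array(y)

fs = 1000
ecog = np.random.randn(32, 10 * fs)      # 32 channels, 10 s (synthetic)
accel = np.random.randn(10 * fs)         # one accelerometer axis
X, y = windows(ecog, accel, fs, causal=True, step=fs // 10)
print(X.shape, y.shape, decoding_accuracy(y, y))   # r = 1.0 sanity check
```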
Procedia PDF Downloads 17712 MANIFEST-2, a Global, Phase 3, Randomized, Double-Blind, Active-Control Study of Pelabresib (CPI-0610) and Ruxolitinib vs. Placebo and Ruxolitinib in JAK Inhibitor-Naïve Myelofibrosis Patients
Authors: Claire Harrison, Raajit K. Rampal, Vikas Gupta, Srdan Verstovsek, Moshe Talpaz, Jean-Jacques Kiladjian, Ruben Mesa, Andrew Kuykendall, Alessandro Vannucchi, Francesca Palandri, Sebastian Grosicki, Timothy Devos, Eric Jourdan, Marielle J. Wondergem, Haifa Kathrin Al-Ali, Veronika Buxhofer-Ausch, Alberto Alvarez-Larrán, Sanjay Akhani, Rafael Muñoz-Carerras, Yury Sheykin, Gozde Colak, Morgan Harris, John Mascarenhas
Abstract:
Myelofibrosis (MF) is characterized by bone marrow fibrosis, anemia, splenomegaly, and constitutional symptoms. Progressive bone marrow fibrosis results from aberrant megakaryopoiesis and expression of proinflammatory cytokines, both of which are heavily influenced by bromodomain and extraterminal domain (BET)-mediated gene regulation and lead to myeloproliferation and cytopenias. Pelabresib (CPI-0610) is an oral small-molecule investigational inhibitor of BET protein bromodomains currently being developed for the treatment of patients with MF. It is designed to downregulate BET target genes and modify nuclear factor kappa B (NF-κB) signaling. MANIFEST-2 was initiated based on data from Arm 3 of the ongoing Phase 2 MANIFEST study (NCT02158858), which is evaluating the combination of pelabresib and ruxolitinib in Janus kinase inhibitor (JAKi) treatment-naïve patients with MF. Primary endpoint analyses showed splenic and symptom responses in 68% and 56%, respectively, of 84 enrolled patients. MANIFEST-2 (NCT04603495) is a global, Phase 3, randomized, double-blind, active-control study of pelabresib and ruxolitinib versus placebo and ruxolitinib in JAKi treatment-naïve patients with primary MF, post-polycythemia vera MF, or post-essential thrombocythemia MF. The aim of this study is to evaluate the efficacy and safety of pelabresib in combination with ruxolitinib. Here we report updates from a recent protocol amendment. The MANIFEST-2 study schema is shown in Figure 1. Key eligibility criteria include a Dynamic International Prognostic Scoring System (DIPSS) score of Intermediate-1 or higher, platelet count ≥100 × 10^9/L, spleen volume ≥450 cc by computed tomography or magnetic resonance imaging, ≥2 symptoms with an average score ≥3 or a Total Symptom Score (TSS) of ≥10 using the Myelofibrosis Symptom Assessment Form v4.0, peripheral blast count <5%, and Eastern Cooperative Oncology Group performance status ≤2. Patient randomization will be stratified by DIPSS risk category (Intermediate-1 vs Intermediate-2 vs High), platelet count (>200 × 10^9/L vs 100–200 × 10^9/L), and spleen volume (≥1800 cm^3 vs <1800 cm^3). Double-blind treatment (pelabresib or matching placebo) will be administered once daily for 14 consecutive days, followed by a 7-day break, which is considered one cycle of treatment. Ruxolitinib will be administered twice daily for all 21 days of the cycle. The primary endpoint is SVR35 response (≥35% reduction in spleen volume from baseline) at Week 24, and the key secondary endpoint is TSS50 response (≥50% reduction in TSS from baseline) at Week 24. Other secondary endpoints include safety, pharmacokinetics, changes in bone marrow fibrosis, duration of SVR35 response, duration of TSS50 response, progression-free survival, overall survival, conversion from transfusion dependence to independence, and the rate of red blood cell transfusion over the first 24 weeks. Study recruitment is ongoing; 400 patients (200 per arm) from North America, Europe, Asia, and Australia will be enrolled. The study opened for enrollment in November 2020. MANIFEST-2 was initiated based on data from the ongoing Phase 2 MANIFEST study with the aim of assessing the efficacy and safety of pelabresib and ruxolitinib in JAKi treatment-naïve patients with MF. MANIFEST-2 is currently open for enrollment. Keywords: CPI-0610, JAKi treatment-naïve, MANIFEST-2, myelofibrosis, pelabresib
Procedia PDF Downloads 20111 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology
Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey
Abstract:
In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes indispensable software that can quickly process and reliably visualize diffusion data and that is equipped with tools for analyzing them for different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The subject part has been moved to separate class libraries and can be used on various platforms. The user interface is built with Windows WPF (Windows Presentation Foundation), a technology for managing Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of the important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI) and allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues that are not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on the autocorrelation of local structure has been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary brain volume, the diffusion tensor is geometrically interpreted using an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for the segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating the mean diffusivity and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one based on the Hough transform, which tests candidate curves in each voxel, assigns each a score computed from the diffusion data, and then selects the curves with the highest scores as the potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the irradiation volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We develop software with integrated, intuitive support for processing, analysis, and inclusion in the process of radiotherapy planning and the evaluation of its results. Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography
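The quantitative maps mentioned above derive directly from the eigenvalues of the per-voxel diffusion tensor. A minimal Python sketch of the computation, with an illustrative prolate tensor typical of coherent white matter:

```python
import numpy as np

def md_fa(tensor):
    """Mean diffusivity and fractional anisotropy of a 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(tensor)   # the three eigenvalues
    md = lam.mean()                    # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Example: prolate tensor of coherent white matter (units: mm^2/s)
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print(md_fa(D))   # MD ~ 0.77e-3 mm^2/s, FA ~ 0.80
```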
Procedia PDF Downloads 8510 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction
Authors: Mingxin Li, Liya Ni
Abstract:
Autonomous parking is a valuable feature applicable to many robotics applications such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot is able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped with only a front camera, due to the camera view being limited by the robot's height and the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with a front camera only was achieved in a four-step process. Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots from the view angle of a small-scale robot. The dataset of images was divided into 70% for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and the center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the position of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center when it became visible again to the robot. The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. Then the left and right wheel speeds were obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential-drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform with an upgrade to an Nvidia Jetson Nano board that provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking spaces and their boundary lines; and improvement of performance after the hardware/software integration is completed. Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning
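The last two steps, computing (v, ω) from the position error and converting them to wheel speeds via differential-drive inverse kinematics, can be sketched in a few lines of Python; the controller gains, wheel radius, and track width are illustrative assumptions:

```python
import math

def drive_to(target_xy, pose, k_v=0.5, k_w=2.0):
    """Proportional controller toward a reference point.
    pose = (x, y, heading); returns (v, omega) chassis commands."""
    dx, dy = target_xy[0] - pose[0], target_xy[1] - pose[1]
    rho = math.hypot(dx, dy)                              # distance error
    alpha = math.atan2(dy, dx) - pose[2]                  # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    return k_v * rho, k_w * alpha

def wheel_speeds(v, omega, wheel_radius, track_width):
    """Differential-drive inverse kinematics: chassis (v, omega) ->
    left/right wheel angular speeds in rad/s."""
    v_left = v - omega * track_width / 2.0
    v_right = v + omega * track_width / 2.0
    return v_left / wheel_radius, v_right / wheel_radius

v, w = drive_to(target_xy=(1.0, 0.5), pose=(0.0, 0.0, 0.0))
print(wheel_speeds(v, w, wheel_radius=0.03, track_width=0.15))
```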
Procedia PDF Downloads 1329 Emotional State and Cognitive Workload during a Flight Simulation: Heart Rate Study
Authors: Damien Mouratille, Antonio R. Hidalgo-Muñoz, Nadine Matton, Yves Rouillard, Mickael Causse, Radouane El Yagoubi
Abstract:
Background: The monitoring of physiological activity related to mental workload (MW) in pilots will be useful to improve aviation safety by anticipating human performance degradation. The electrocardiogram (ECG) can reveal MW fluctuations due to cognitive workload and/or emotional state, since this measure exhibits autonomic nervous system modulations. Arguably, heart rate (HR) is one of its most intuitive and reliable parameters. It is particularly interesting to analyze the interaction between cognitive requirements and emotion in ecological settings such as a flight simulator. This study aims to explore, by means of HR, the relation between cognitive demands and emotional activation. Presumably, the effects of cognitive and emotional overload are not necessarily cumulative. Methodology: Eight healthy volunteers in possession of a Private Pilot License were recruited (male; 20.8 ± 3.2 years). The ECG signal was recorded throughout the experiment by placing two electrodes on the clavicle and left pectoral of the participants. HR was computed within 4-minute segments. The NASA-TLX and Big Five inventories were used to assess subjective workload and to consider the influence of individual personality differences. The experiment consisted of completing two dual-tasks of approximately 30 minutes' duration in an AL50 flight simulator. Each dual-task required the simultaneous accomplishment of both a pre-established flight plan and an additional task based on target stimulus discrimination inserted between Air Traffic Control instructions. This secondary task allowed us to vary the cognitive workload from low (LC) to high (HC) levels by combining auditory and visual numerical stimuli to be responded to according to specific criteria. Regarding the emotional condition, the two dual-tasks were designed to assure analogous difficulty in terms of the solicited cognitive demands. The former was performed by the pilot alone, i.e., the Low Arousal (LA) condition. In contrast, the latter generated High Arousal (HA), since the pilot was supervised by two evaluators, filmed, and involved in a mock competition with the rest of the participants. Results: Performance on the secondary task showed significantly faster reaction times (RT) for the HA compared to the LA condition (p = .003). Moreover, faster RT was found for LC compared to HC (p < .001). No interaction was found. Concerning the HR measure, despite the lack of main effects, an interaction between emotion and cognition was evidenced (p = .028). Post hoc analysis showed smaller HR for HA compared to LA only under LC (p = .049). Conclusion: The control of an aircraft is a very complex task involving strong cognitive demands, and it depends on the emotional state of pilots. According to the behavioral data, the experimental setup successfully generated different emotional and cognitive levels. As suggested by the interaction found in the HR measure, these two factors do not seem to have a cumulative impact on the sympathetic nervous system. Apparently, low cognitive workload makes pilots more sensitive to emotional variations. These results hint at the independence between data processing and emotional regulation. Further physiological data are necessary to confirm and disentangle this relation. This procedure may be useful for objectively monitoring pilots' mental workload. Keywords: cognitive demands, emotion, flight simulator, heart rate, mental workload
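Mean HR per 4-minute segment can be derived from the ECG trace by detecting R-peaks and averaging the R-R intervals. The Python sketch below uses a crude amplitude-threshold peak detector for illustration only; a production pipeline would use a validated detector (e.g., Pan-Tompkins).

```python
import numpy as np

def mean_hr_per_segment(ecg, fs, segment_s=240):
    """Mean heart rate (bpm) in consecutive 4-minute segments of a raw ECG
    trace, via a crude amplitude-threshold R-peak detector."""
    thresh = ecg.mean() + 2.5 * ecg.std()
    local_max = (ecg[1:-1] > ecg[:-2]) & (ecg[1:-1] >= ecg[2:])
    peaks = np.where((ecg[1:-1] > thresh) & local_max)[0] + 1
    hrs, seg_len = [], segment_s * fs
    for start in range(0, len(ecg) - seg_len + 1, seg_len):
        seg_peaks = peaks[(peaks >= start) & (peaks < start + seg_len)]
        rr = np.diff(seg_peaks) / fs          # R-R intervals in seconds
        hrs.append(60.0 / rr.mean() if rr.size else float("nan"))
    return hrs
```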
Procedia PDF Downloads 2758 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population
Authors: Ye Xue, Zhenhua Deng
Abstract:
Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring the face-to-skull distances at sparsely distributed anatomical landmarks located manually on the face and skull. However, automated measurement over dense points using 3D facial and skull models in open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed, and densely calculated FSTT database is crucial to enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born in and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distribution could be viewed and subdivided into smaller increments. All PLY files were visualized, showing the Hausdorff distance value of each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed considering sex, age, BMI, and birthplace. Statistical methods employed included multiple regression analysis, ANOVA, and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in regions such as the forehead, orbital, mandibular, and zygoma regions. Specifically, there are distribution variances in the depth range of 20–30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between north and south, except for the zygoma region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools serve well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation. Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool
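The dense face-to-skull measurement reduces to a one-sided nearest-neighbor distance from every face vertex to the skull model, which is essentially what MeshLab's Hausdorff filter computes on sampled vertices. A minimal Python sketch with a k-d tree, using vertex-to-vertex distances as an approximation of vertex-to-surface ones and random points in place of real meshes:

```python
import numpy as np
from scipy.spatial import cKDTree

def fstt_per_vertex(face_vertices, skull_vertices):
    """Per-vertex face-to-skull distances (the dense FSTT map): for each
    face vertex, the distance to its nearest skull vertex."""
    tree = cKDTree(skull_vertices)
    dist, _ = tree.query(face_vertices)
    return dist

# Synthetic stand-ins for the reconstructed face and skull models (mm)
face = np.random.rand(1000, 3) * 100
skull = np.random.rand(1000, 3) * 100
d = fstt_per_vertex(face, skull)
print(d.mean(), d.std(), d.min(), d.max())   # descriptive statistics
```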
Procedia PDF Downloads 587 Menstrual Hygiene Practices Among the Women Age 15-24 in India
Authors: Priyanka Kumari
Abstract:
Menstrual hygiene is an important aspect in the life of young girls. Menstrual Hygiene Management (MHM) is defined as women and adolescent girls using a clean material to absorb or collect menstrual blood that can be changed in privacy as often as necessary for the duration of the menstruation period, using soap and water for washing the body as required, and having access to facilities to dispose of used menstrual management materials. This paper aims to investigate the prevalence of hygienic menstrual practices and their socio-demographic correlates among women aged 15-24 in India. Data from the 2015–2016 National Family Health Survey–4 for 244,500 menstruating women aged 15–24 were used. The methods were categorized into two groups: women who use sanitary napkins, locally prepared napkins, or tampons are considered to use a hygienic method, and those who use cloth, any other material, or nothing at all during menstruation are considered to use an unhygienic method. The woman's age, years of schooling, religion, place of residence, caste/tribe, marital status, wealth index, type of toilet facility used, region, structure of the house, and exposure to mass media were taken as independent variables. Bivariate analysis was carried out with selected background characteristics to analyze the socio-economic and demographic factors associated with the use of hygienic methods during menstruation. The odds of using a hygienic method were computed by employing binary logistic regression. Almost 60% of the women use cloth as an absorbent during menstruation to prevent blood stains from becoming evident. The shares using locally prepared napkins, sanitary napkins, and tampons, the methods classed as hygienic, are 16.27%, 41.8%, and 2.4%, respectively. The proportion of women who used hygienic methods to prevent blood stains from becoming evident was 57.58%. Multivariate analyses reveal that women's education, wealth, and marital status are the most important positive factors of hygienic menstrual practices. The structure of the house and exposure to mass media also have a positive impact on the use of hygienic menstrual practices. In contrast, women residing in rural areas and those belonging to scheduled tribes are less likely to use hygienic methods during menstruation. Geographical region is also statistically significantly associated with the use of hygienic methods during menstruation. This study reveals that menstrual hygiene is not satisfactory among a large proportion of adolescent girls; they need more education about menstrual hygiene. A variety of factors affect menstrual behaviors; amongst these, the most influential are economic status, educational status, and residential status, whether urban or rural. It is essential to design a mechanism to address and provide access to knowledge of healthy menstrual practices. It is important to encourage policies and quality standards that promote safe and affordable options and dynamic markets for menstrual products, i.e., materials that are culturally acceptable, contextually available, and affordable. The promotion of sustainable, environmentally friendly menstrual products and their disposal is also a very important aspect of the Sustainable Development Goals. We also need to educate girls about the services provided by the government, such as the free supply of sanitary napkins, to help prevent reproductive tract infections. Awareness regarding the need for information on healthy menstrual practices is very important. Emphasis should be given to educating young girls about the importance of maintaining hygiene during menstruation to prevent the risk of reproductive tract infections. Keywords: adolescent, menstruation, menstrual hygiene management, menstrual hygiene
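A hedged sketch of the odds computation described above, fitting a binary logistic regression and exponentiating the coefficients to obtain odds ratios. The data here are synthetic stand-ins; the variable names, coding, and sample are illustrative, not the NFHS-4 extract used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the survey extract (outcome and three covariates)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "hygienic":  rng.binomial(1, 0.58, 500),   # 1 = napkin/tampon use
    "schooling": rng.integers(0, 16, 500),     # years of schooling
    "urban":     rng.binomial(1, 0.35, 500),   # 1 = urban residence
    "wealth":    rng.integers(1, 6, 500),      # wealth quintile
})
X = sm.add_constant(df[["schooling", "urban", "wealth"]])
fit = sm.Logit(df["hygienic"], X).fit(disp=False)
print(np.exp(fit.params))   # odds ratios per covariate
```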
Procedia PDF Downloads 1396 Multiphysic Coupling Between Hypersonc Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Lunchers
Authors: Margarita Dufresne
Abstract:
This study is devoted to the development of a thermal protection system (TPS) for small reusable space launchers. We have used the SIRIUS design for the S1 prototype. Multiphysics coupling between hypersonic reactive flow and thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ with COMSOL Multiphysics and by FASTRAN with ACE+. Flow around hypersonic flight vehicles is characterized by the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real-gas model implies a gas in equilibrium or non-equilibrium. The Mach number ranges from 5 to 10 for the first-stage flight. The goals of this effort are to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal-structural analysis to simulate conjugate heat transfer, with conduction, free convection, and radiation driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O, and O2. Seventeen chemical reactions, involving the calculation of dissociation and recombination probabilities, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithms employed to solve the reactive equations use a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with the coupled inviscid flux computed by AUSM+ flux-vector splitting. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order, and provides improved (i.e., reduced) dissipation compared to the second-order discretization scheme. The initial unstructured mesh is refined using a pressure-gradient technique for the shock/shock interaction test case. The turbulence model suggested by NASA is the k-omega SST with a1 = 0.355 and QCR (quadratic constitutive relation) as the constitutive option. k and omega are specified explicitly in the initial conditions and in regions: k = 1e-6·Uinf² and omega = 5·Uinf/L, where L is the mean aerodynamic chord or characteristic length. We put into practice modeling tips for hypersonic flow such as the automatic coupled solver, adaptive mesh refinement to capture and refine the shock front, use of the advancing-layer mesher, and a larger prism-layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure from 1e-4 to 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for hot hypersonic reactive flow with conjugate heat transfer. The results of both approaches agree with the CIRCA wind tunnel results. Keywords: hypersonic, first stage, high-speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, FASTRAN, ACE+, COMSOL Multiphysics, STAR-CCM+, thermal protection system (TPS), space launcher, wind tunnel
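For concreteness, the modified Arrhenius form used for the forward rate coefficients is k_f(T) = A·T^n·exp(−Ea/(R·T)). A minimal Python sketch follows; the coefficients below are placeholders for illustration, not the published Dunn/Kang mechanism values.

```python
import math

R_UNIV = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(T, A, n, Ea):
    """Modified Arrhenius forward rate coefficient:
    k_f = A * T**n * exp(-Ea / (R * T))."""
    return A * T**n * math.exp(-Ea / (R_UNIV * T))

# Placeholder coefficients evaluated over the study's temperature range
for T in (300.0, 5000.0, 15000.0, 30000.0):
    print(f"T = {T:7.0f} K  k_f = {arrhenius_rate(T, A=2.0e15, n=-1.0, Ea=4.9e5):.3e}")
```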
Procedia PDF Downloads 715 BIM Modeling of Site and Existing Buildings: Case Study of ESTP Paris Campus
Authors: Rita Sassine, Yassine Hassani, Mohamad Al Omari, Stéphanie Guibert
Abstract:
Building Information Modelling (BIM) is the process of creating, managing, and centralizing information during the building lifecycle. BIM can be used throughout a construction project, from the initiation phase through the planning and execution phases to the maintenance and lifecycle management phase. For existing buildings, BIM can be used for specific applications such as lifecycle management. However, most existing buildings do not have a BIM model, and creating a compatible BIM for them is very challenging: it requires special equipment for data capture and effort to convert these data into a BIM model. The main difficulties in such projects are to define the data needed, the level of development (LOD), and the methodology to be adopted. In addition to managing information for an existing building, studying the impact of the built environment is a challenging topic, so integrating the existing terrain that surrounds buildings into the digital model is essential in order to run simulations such as flood simulation, energy simulation, etc. Making a replica of the physical model and updating its information in real time to create its Digital Twin (DT) is very important. The Digital Terrain Model (DTM) represents the ground surface of the terrain by a set of discrete points with unique height values over 2D points, based on a reference surface (e.g., mean sea level, geoid, or ellipsoid). In addition, information related to the types of pavement materials, the types and heights of vegetation, and damaged surfaces can be integrated. Our aim in this study is to define the methodology to be used in order to provide a 3D BIM model of the site and the existing buildings, based on the case study of the École Spéciale des Travaux Publics (ESTP Paris) school of engineering campus. The property is located on a hilly site of 5 hectares and is composed of more than 20 buildings with a total area of 32,000 square meters and a height between 50 and 68 meters. In this work, the campus precise levelling grid is computed according to the NGF-IGN69 altimetric system, and the grid control points according to the RGF93 (Réseau Géodésique Français) – Lambert 93 French system, with different methods: (i) land topographic surveying using a robotic total station, (ii) a GNSS (Global Navigation Satellite System) levelling grid in NRTK (Network Real Time Kinematic) mode, and (iii) point clouds generated by laser scanning. These technologies allow the computation of multiple building parameters such as boundary limits, the number of floors, the georeferencing of the floors, the georeferencing of the four base corners of each building, etc. Once the input data are identified, the digital model of each building is produced, and the DTM is also modeled. The process of altimetric determination is complex and requires effort to collect and analyze multiple data formats. Since many technologies can be used to produce digital models, different file formats such as DraWinG (DWG), LASer (LAS), Comma-Separated Values (CSV), Industry Foundation Classes (IFC), and Revit (RVT) will be generated. Checking the interoperability between BIM models is very important; in this work, all models are linked together and shared on the 3DEXPERIENCE collaborative platform. Keywords: building information modeling, digital terrain model, existing buildings, interoperability
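As a minimal illustration of the DTM concept described above, the Python sketch below rasterizes scattered surveyed ground points into a regular height grid; the synthetic point cloud, grid resolution, and linear (TIN-style) interpolation are illustrative assumptions, not the campus survey workflow itself.

```python
import numpy as np
from scipy.interpolate import griddata

def build_dtm(points_xyz, resolution=1.0):
    """Rasterize scattered ground points (e.g., from a total station, GNSS,
    or a classified point cloud) into a regular grid of heights."""
    xy, z = points_xyz[:, :2], points_xyz[:, 2]
    xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), resolution)
    ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), resolution)
    gx, gy = np.meshgrid(xs, ys)
    gz = griddata(xy, z, (gx, gy), method="linear")  # TIN-style interpolation
    return gx, gy, gz

# Synthetic hillside standing in for the 5 ha campus site (heights 50-68 m)
rng = np.random.default_rng(1)
pts = np.column_stack([rng.random(2000) * 300, rng.random(2000) * 200,
                       np.zeros(2000)])
pts[:, 2] = 50.0 + 18.0 * pts[:, 0] / 300.0
gx, gy, gz = build_dtm(pts, resolution=5.0)
print(gz.shape, np.nanmin(gz), np.nanmax(gz))
```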
Procedia PDF Downloads 1124 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces
Authors: Martin Alexander Eder, Sergei Semenov
Abstract:
Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while simultaneously maintaining high stiffness-to-low-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable-amplitude stress histories in the material. Empirical evidence shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which has over time developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to consider shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history. The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant-amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant life diagrams, namely in shear and in tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories. Keywords: adhesive, fatigue, interface, multiaxial stress
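The rainflow-plus-Miner step can be sketched compactly in Python. Below, miner_damage accumulates linear damage for pre-counted stress cycles against a Basquin-type S-N curve, and dp_equivalent_shear shows one illustrative way a normal-traction-dependent equivalent shear could be formed; the exact self-similar surface scaling used by the authors is not reproduced here, and all numerical values are placeholders.

```python
import numpy as np

def miner_damage(cycle_ranges, cycle_counts, sn_C, sn_m):
    """Linear Palmgren-Miner damage for rainflow-counted equivalent stress
    cycles against a Basquin-type S-N curve N = C * S**(-m)."""
    N_allow = sn_C * np.asarray(cycle_ranges, float) ** (-sn_m)
    return float(np.sum(np.asarray(cycle_counts, float) / N_allow))

def dp_equivalent_shear(tau1, tau2, sigma_n, tensile_strength, shear_strength):
    """Illustrative equivalent shear: resultant interface shear amplified
    under tensile normal traction, with the friction-like slope taken from
    the two strengths (a sketch, not the authors' exact surface)."""
    tau = np.hypot(tau1, tau2)
    slope = shear_strength / tensile_strength
    return tau + slope * np.maximum(sigma_n, 0.0)

# Example: 1e6 cycles at a 20 MPa equivalent range, placeholder S-N curve
print(miner_damage([20.0], [1e6], sn_C=1e20, sn_m=10))  # damage ~ 0.10
```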
Procedia PDF Downloads 169