Search results for: automatic classification
392 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses
Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh
Abstract:
Real-time non-invasive Brain Computer Interfaces play a significant role in restoring or maintaining quality of life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities, such as facial expressions, speech, text, gestures, and human physiological responses, are also discussed. Particular attention is paid to the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol is proposed for designing an EEG-based real-time Brain Computer Interface system for the analysis and classification of human emotions elicited by external audio/visual stimuli. The front-end hardware includes a cost-effective and portable Emotiv EEG neuroheadset unit, a personal computer, and a set of external stimulators. Primary signal analysis and processing of the real-time acquired EEG shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a MATLAB-based custom algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools such as Artificial Neural Networks. The final system would result in an inexpensive, portable, and more intuitive real-time Brain Computer Interface for controlling prosthetic devices by translating different brain states into operative control signals.
Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, Emotiv, emotions, interval features, spectral features, artificial neural network, control applications
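The proposed pipeline (temporal and spectral feature extraction followed by classification) can be sketched in miniature. This is an illustrative sketch only: it assumes evenly sampled single-channel EEG, picks two arbitrary DFT bins as the "spectral" features, and substitutes a nearest-centroid rule for the Artificial Neural Network the abstract proposes.

```python
import math

def dft_power(signal, k):
    # Power of the k-th DFT bin of a real signal (naive O(N) per bin).
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
    return (re * re + im * im) / n

def features(signal):
    # Hybrid feature vector: temporal (mean, variance) plus spectral
    # (power in two illustrative frequency bins).
    n = len(signal)
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal) / n
    return [mean, var, dft_power(signal, 2), dft_power(signal, 8)]

def nearest_centroid(train, labels, x):
    # Toy stand-in for the ANN classifier: assign x to the label whose
    # mean feature vector (centroid) is closest in squared distance.
    groups = {}
    for f, lab in zip(train, labels):
        groups.setdefault(lab, []).append(f)
    best, best_d = None, float("inf")
    for lab, vecs in groups.items():
        c = [sum(col) / len(vecs) for col in zip(*vecs)]
        d = sum((a - b) ** 2 for a, b in zip(c, x))
        if d < best_d:
            best, best_d = lab, d
    return best
```

A real system would replace the synthetic signals with windowed multi-channel EEG and the centroid rule with a trained network.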
Procedia PDF Downloads 317
391 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach
Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic
Abstract:
The epigenome is a compelling candidate for mediating long-term responses to environmental effects that modify disease risk. The main goal of this research is to develop a machine learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool for validating self-reports of nonsmoking and for adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptionally. Each child's mother and father fell into one of three class labels in the accompanying questionnaires: never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing of our model, the best-performing algorithm for classifying the father-smoker/mother-never-smoker group was selected based on Cohen's κ. Error in the model was identified and optimized. The final DNA methylation score was further tested and validated in an independent data set. The result is a linear combination of the methylation values of selected probes, passed through a logistic link function, that accurately classified each group using the probes that contributed most towards classification.
The result is a unique, robust DNA methylation score which combines information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy, and which may be used to examine the paternal contribution to offspring health outcomes.
Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning
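The two core pieces of the method, a logistic link over a linear combination of probe methylation values and Cohen's κ for model selection, can be sketched as follows. The weights and intercept here are illustrative, not the coefficients fitted in the study.

```python
import math

def methylation_score(betas, weights, intercept):
    # Linear combination of CpG beta values passed through a logistic link,
    # giving a probability for the exposure class. Weights are hypothetical.
    z = intercept + sum(w * b for w, b in zip(weights, betas))
    return 1.0 / (1.0 + math.exp(-z))

def cohens_kappa(y_true, y_pred):
    # Cohen's kappa: observed agreement corrected for chance agreement,
    # the criterion used to pick the best-performing algorithm.
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    pe = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)
```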
Procedia PDF Downloads 185
390 Hydrological Analysis for Urban Water Management
Authors: Ranjit Kumar Sahu, Ramakar Jha
Abstract:
Urban water management is the practice of managing freshwater, waste water, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development have prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha, connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E), and Konark (19.9000° N, 86.1200° E), and the patterns of periodic changes in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 maps, supervised classification of the urban sprawl has been carried out for the period 1980-2014, with a focus on the years after 2000. This work presents the following: (i) time series analysis of hydrological data (ground water and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques for urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of ground water levels in the area, which is discussed briefly. Related findings, such as the declining trend of rainfall and the rise of sand mining in the local vicinity, are also discussed.
Research of this kind will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and waste water treatment, (iii) increase the economic efficiency of services to sustain operations and investments for water, waste water, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.
Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change
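The trend component of the time series analysis of ground water and rainfall can be sketched with the Mann-Kendall S statistic, a common nonparametric choice for hydrological records; the abstract does not name the specific test used, so this is an illustrative assumption.

```python
def mann_kendall_s(series):
    # Mann-Kendall S statistic over a time series: the count of later
    # observations above earlier ones minus the count below. A strongly
    # negative S suggests a declining trend (e.g. falling water tables).
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s
```

In practice S would be normalized by its variance to obtain a significance level.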
Procedia PDF Downloads 425
389 Real-World Economic Burden of Musculoskeletal Disorders in Nigeria
Authors: F. Fatoye, C. E. Mbada, T. Gebrye, A. O. Ogunsola, C. Fatoye, O. Oyewole
Abstract:
Musculoskeletal disorders (MSDs) such as low back pain (LBP), cervical spondylosis (CSPD), sprain, osteoarthritis (OA), and post-immobilization stiffness (PIS) have a major impact on individuals, health systems, and society in terms of morbidity, long-term disability, and economics. This study estimated the direct and indirect costs of common MSDs in Osun State, Nigeria. A review of medical charts of adult patients attending the Physiotherapy Outpatient Clinic at the Obafemi Awolowo University Teaching Hospitals Complex, Osun State, Nigeria between 2009 and 2018 was carried out. The occupational class of the patients was determined using the International Labour Organization (ILO) classification. The direct and indirect costs were estimated using a cost-of-illness approach. Physiotherapy-related health resource use and the costs of the common MSDs, including consultation fees, total fees charged per session, and costs of consumables, were estimated. Data were summarised using the descriptive statistics of mean and standard deviation (SD). Overall, 1582 patients with MSDs (male = 47.5%, female = 52.5%), with a mean age of 47.8 ± 25.7 years, were included in this study. The mean (SD) direct cost estimates for LBP, CSPD, PIS, sprain, OA, and other conditions were $18.35 ($17.33), $34.76 ($17.33), $32.13 ($28.37), $35.14 ($44.16), $37.19 ($41.68), and $15.74 ($13.96), respectively. The mean (SD) indirect cost estimates for LBP, CSPD, PIS, sprain, OA, and other MSD conditions were $73.42 ($43.54), $140.57 ($69.31), $128.52 ($113.46), $140.57 ($69.31), $148.77 ($166.71), and $62.98 ($55.84), respectively. Musculoskeletal disorders impose a substantial economic burden on individuals with these conditions and on society. This unacceptable economic loss should be reduced using appropriate strategies. Further research is required to determine the clinical and cost effectiveness of strategies to improve the health outcomes of patients with MSDs.
The findings of the present study may assist health policy and decision makers in understanding the economic burden of MSDs and facilitate efficient allocation of healthcare resources to alleviate the burden associated with these conditions in Nigeria.
Keywords: economic burden, low back pain, musculoskeletal disorders, real-world
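The cost-of-illness aggregation described above (per-condition mean and SD of direct and indirect costs from chart-review records) can be sketched as follows; the record layout is an assumption for illustration.

```python
import math
from collections import defaultdict

def cost_of_illness(records):
    # records: (condition, direct_cost, indirect_cost) per patient, as
    # extracted from a chart review. Returns per-condition (mean, SD) of
    # direct and indirect costs, mirroring the tables in the study.
    by_cond = defaultdict(list)
    for cond, direct, indirect in records:
        by_cond[cond].append((direct, indirect))

    def mean_sd(xs):
        m = sum(xs) / len(xs)
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
        return round(m, 2), round(sd, 2)

    return {c: {"direct": mean_sd([d for d, _ in v]),
                "indirect": mean_sd([i for _, i in v])}
            for c, v in by_cond.items()}
```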
Procedia PDF Downloads 221
388 Scheduling Building Projects: The Chronographical Modeling Concept
Authors: Adel Francis
Abstract:
Most scheduling methods and software apply critical path logic. This logic schedules activities, applies constraints between these activities, and tries to optimize and level the allocated resources. The extensive use of this logic produces a complex and error-prone network that is hard to present, follow, and update. Planning and managing building projects should tackle the coordination of works and the management of limited spaces, traffic, and supplies. Activities cannot be performed without the available resources, and resources cannot be used beyond the capacity of workplaces; otherwise, workspace congestion will negatively affect the flow of works. The objective of space planning is to link the spatial and temporal aspects, promote efficient use of the site, define optimal site occupancy rates, and ensure suitable rotation of the workforce in the different spaces. Chronographic scheduling modelling belongs to this category and models construction operations as well as their processes, logical constraints, association and organizational models, which help to better illustrate the schedule information using multiple flexible approaches. The model defines three categories of areas (punctual, surface, and linear) and five different layers (space creation, systems, closing off space, finishing, and reduction of space). Chronographical modelling is a more complete communication method, having the ability to alternate from one visual approach to another by manipulating graphics via a set of parameters and their associated values. Each individual approach can help to schedule a certain project type or specialty. Visual communication can also be improved through layering, sheeting, juxtaposition, alterations, and permutations, allowing for groupings, hierarchies, and classification of project information.
In this way, graphic representation becomes a living, transformable image, showing valuable information in a clear and comprehensible manner, simplifying the site management while simultaneously utilizing the visual space as efficiently as possible.
Keywords: building projects, chronographic modelling, CPM, critical path, precedence diagram, scheduling
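For contrast, the critical path logic that the chronographical approach extends can be sketched as a forward pass over an activity network; the activity names and durations below are hypothetical.

```python
def project_duration(activities):
    # activities: {name: (duration, [predecessor names])}.
    # Forward pass of classic CPM logic: the earliest finish of each
    # activity is its duration plus the latest predecessor finish; the
    # project duration is the maximum earliest finish.
    earliest = {}

    def finish(name):
        if name not in earliest:
            dur, preds = activities[name]
            earliest[name] = dur + max((finish(p) for p in preds), default=0)
        return earliest[name]

    return max(finish(a) for a in activities)
```

Chronographic modelling adds what this pass ignores: workspace occupancy and the rotation of crews through areas.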
Procedia PDF Downloads 155
387 A Strategy to Oil Production Placement Zones Based on Maximum Closeness
Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes
Abstract:
Increasing the oil recovery factor of an oil reservoir has long been a concern of the oil industry. Usually, production placement zones are defined after analysis of geological and petrophysical parameters, with rock porosity, permeability, and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) is an important step in the process of reservoir characterization, since it may identify specific regions of the reservoir with similar petrophysical and fluid flow properties and, in particular, support techniques for the placement of production zones that favour the tracing of directional wells. A HFU is defined as a representative volume of the total reservoir rock in which petrophysical and fluid flow properties are internally consistent and predictably distinct from those of other reservoir rocks. Technically, a HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified, and using the discrete rock type (DRT) classification, a set of connected cell clusters can be found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Considering the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and oil cumulative production simulations was done for the SPE Model 2 and UNISIM-I-D synthetic fields, where the latter was built from public data available from the actual Namorado Field, Campos Basin, in Brazil.
The results have shown that the MaxC strategy is technically feasible and very reliable for identifying high-performance production placement zones.
Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone
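The chain RQI → FZI → DRT → maximum closeness can be sketched as below. The RQI and FZI formulas are the standard petrophysical definitions; the DRT rounding constant 10.6 follows a common convention and is an assumption here, as is the toy adjacency map in the closeness example.

```python
import math
from collections import deque

def rqi(perm_md, phi):
    # Reservoir Quality Index (microns): 0.0314 * sqrt(k / phi),
    # with permeability k in mD and effective porosity phi as a fraction.
    return 0.0314 * math.sqrt(perm_md / phi)

def fzi(perm_md, phi):
    # Flow Zone Indicator: RQI divided by normalized porosity phi/(1-phi).
    # Cells sharing an FZI lie on a unit-slope line and form one HFU.
    return rqi(perm_md, phi) / (phi / (1.0 - phi))

def drt(perm_md, phi):
    # Discrete Rock Type label derived from FZI (rounding constant 10.6
    # is the usual convention, assumed here).
    return round(2.0 * math.log(fzi(perm_md, phi)) + 10.6)

def max_closeness_cell(adj):
    # adj: {cell: [neighbour cells]} for one connected DRT cluster.
    # The MaxC cell minimizes the sum of BFS shortest-path distances
    # to all other cells, i.e. it has maximum closeness centrality.
    def dist_sum(src):
        seen, queue, total = {src: 0}, deque([src]), 0
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    total += seen[v]
                    queue.append(v)
        return total
    return min(adj, key=dist_sum)
```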
Procedia PDF Downloads 329
386 Prevalence of Elder Abuse and Effects of Social Factors on It
Authors: Ezat Vahidian, Babak Eshrati
Abstract:
Introduction: Elder abuse, a very complex issue with diverse definitions and names, has been very slow to capture the public eye and public policy, since it manifests at many levels and requires the involvement of different types of professionals. While elder abuse is not a new phenomenon, the speed of population ageing world-wide is likely to lead to an increase in its incidence and prevalence. Elder abuse has devastating consequences for older persons, such as poor quality of life, psychological distress, and loss of property and security. It is also associated with increased mortality and morbidity. Elder abuse is a problem that manifests itself in both rich and poor countries and at all levels of society. Purpose: The purpose of this study is to determine the prevalence of elder abuse and the effects of social factors on it in Markazi Province. Materials and methods: The study population comprised all elders in Markazi Province who were identifiable by geographical address in the table of rural and urban household societies. The study was cross-sectional, with multi-stage sampling: the first stage was stratification by rural and urban area, and the second was cluster sampling with equal clusters. The estimated sample size was 472 persons, increased to 1110 persons by the design effect. Data were collected by questionnaire and analyzed with SPSS using the chi-square test. Results: This study showed that 70 persons were abused (42.8% male and 57.2% female), with a mean age of 74.7 years; 64% were married and 31% were widowed. There was no statistically significant association between elder abuse and area of living (p=0.299), occupation (p=0.104), education (p=0.358), or age (p=0.104); there were significant associations with physical impairment (p=0.08) and movement impairment (p=0.008). Conclusion: The results verify that maltreatment occurred in the aged persons.
Analysis of the data indicated that elder abuse exists in every socioeconomic group, in any educational context, in urban and rural areas, and among both men and women. The prevalence of elder abuse was 6.3% (70 persons), which is in line with data from developed countries, within the limits of this sample.
Keywords: elder abuse, education, occupation, area of living
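The chi-square test used to probe associations such as abuse versus area of living can be sketched for a 2x2 table; the counts below are hypothetical, not the study's data.

```python
def chi_square_2x2(a, b, c, d):
    # Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    # e.g. abused (yes/no) crossed with area of living (urban/rural).
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the p-values reported above.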
Procedia PDF Downloads 403
385 Application of Hyperspectral Remote Sensing in Sambhar Salt Lake, A Ramsar Site of Rajasthan, India
Authors: Rajashree Naik, Laxmi Kant Sharma
Abstract:
Sambhar Lake is the largest inland salt lake of India, declared a Ramsar site on 23 March 1990. Owing to its high salinity and alkalinity, its biodiversity richness is contributed by haloalkaliphilic flora and fauna, along with diverse land cover including waterbody, wetland, salt crust, saline soil, vegetation, scrub land, and barren land, which welcomes large numbers of flamingos and other migratory birds for winter harboring. However, with the gradual increase in irrational salt extraction activities, this ecological diversity is at stake, and there is an urgent need to assess the ecosystem. Advanced technology such as remote sensing and GIS makes it possible to look into the past and compare it with the present for future planning and management of natural resources in a judicious way. This paper presents a vegetation mapping study in the typical inland lake environment of the Sambhar wetland using satellite data from NASA's EO-1 Hyperion sensor, launched in November 2000. The sensor, with a spectral range of 0.4 to 2.5 micrometers at approximately 10 nm spectral resolution, 242 bands, 30 m spatial resolution, and a 705 km orbit, was used to produce a vegetation map for a portion of the wetland. The vegetation map was tested for classification accuracy against a pre-existing detailed GIS wetland vegetation database. Though the accuracy varied greatly for different classes, the algal communities, which are the major source of food for flamingos, were successfully identified. The results from this study have practical implications for the use of spaceborne hyperspectral image data that are now becoming available. Practical limitations of using these satellite data for wetland vegetation mapping include inadequate spatial resolution, complexity of image processing procedures, and lack of stereo viewing.
Keywords: algal community, NASA's EO-1 Hyperion, salt-tolerant species, wetland vegetation mapping
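One widely used per-pixel classifier for hyperspectral data such as Hyperion's is the Spectral Angle Mapper; the abstract does not state which classifier was applied, so this is an illustrative assumption, with made-up three-band reference spectra.

```python
import math

def spectral_angle(pixel, reference):
    # Spectral Angle Mapper: the angle (radians) between a pixel spectrum
    # and a reference class spectrum; smaller angle = better match, and
    # the measure is insensitive to overall illumination (spectrum scale).
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))

def classify_pixel(pixel, references):
    # Assign the pixel to the reference class with the smallest angle.
    return min(references, key=lambda c: spectral_angle(pixel, references[c]))
```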
Procedia PDF Downloads 135
384 Gut Mycobiome Dysbiosis and Its Impact on Intestinal Permeability in Attention-Deficit/Hyperactivity Disorder
Authors: Liang-Jen Wang, Sung-Chou Li, Yuan-Ming Yeh, Sheng-Yu Lee, Ho-Chang Kuo, Chia-Yu Yang
Abstract:
Background: Dysbiosis in the gut microbial community might be involved in the pathophysiology of attention deficit/hyperactivity disorder (ADHD). The fungal component of the gut microbiome, namely the mycobiota, is a hyperdiverse group of eukaryotes that can influence host intestinal permeability. This study therefore aimed to investigate the impact of fungal mycobiome dysbiosis and intestinal permeability on ADHD. Methods: Faecal samples were collected from 35 children with ADHD and from 35 healthy controls. Total DNA was extracted from the faecal samples, and the internal transcribed spacer (ITS) regions were sequenced using high-throughput next-generation sequencing (NGS). The fungal taxonomic classification was analysed using bioinformatics tools, and the differentially abundant fungal species between the ADHD and healthy control groups were identified. An in vitro permeability assay (Caco-2 cell layer) was used to evaluate the biological effects of fungal dysbiosis on intestinal epithelial barrier function. Results: The β-diversity (the species diversity between two communities), but not the α-diversity (the species diversity within a community), reflected the differences in fungal community composition between the ADHD and control groups. At the phylum level, the ADHD group displayed a significantly higher abundance of Ascomycota and a significantly lower abundance of Basidiomycota than the healthy control group. At the genus level, the abundance of Candida (especially Candida albicans) was significantly increased in ADHD patients compared to the healthy controls. In addition, the in vitro cell assay revealed that C. albicans secretions significantly enhanced the permeability of Caco-2 cells. Conclusions: The current study is the first to explore gut mycobiome dysbiosis in ADHD using an NGS platform.
The findings from this study indicate that dysbiosis of the fungal mycobiome and altered intestinal permeability might be associated with susceptibility to ADHD.
Keywords: ADHD, fungus, gut–brain axis, biomarker, child psychiatry
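The α- and β-diversity contrast above can be made concrete with two standard indices, Shannon (within-sample) and Bray-Curtis (between-sample); the abstract does not name the specific indices used, so these are illustrative choices over hypothetical taxon counts.

```python
import math

def shannon_alpha(counts):
    # Alpha diversity (Shannon index) of one sample's taxon counts:
    # higher means a richer, more even community within the sample.
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

def bray_curtis_beta(a, b):
    # Beta diversity (Bray-Curtis dissimilarity) between two samples'
    # counts over the same ordered taxa; 0 = identical, 1 = disjoint.
    num = sum(abs(x - y) for x, y in zip(a, b))
    return num / (sum(a) + sum(b))
```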
Procedia PDF Downloads 113
383 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory
Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock
Abstract:
Detecting subjectively biased statements is a vital task. This kind of bias, when present in text or other information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text will significantly boost research in the above-mentioned areas. It can also be useful for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor as well as an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), compiled from Wikipedia edits that removed various biased instances from sentences, as a benchmark dataset, on which we also compare our model to existing approaches. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
Keywords: subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing
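The attention mechanism layered on the Bi-LSTM amounts to softmax-weighted pooling of the per-token hidden states before classification. The sketch below shows only that pooling step with precomputed scores; in the actual model the scores come from learned parameters and the whole stack would be built in a deep learning framework.

```python
import math

def attention_pool(hidden_states, scores):
    # hidden_states: one vector per token (e.g. Bi-LSTM outputs over BERT
    # embeddings); scores: one relevance score per token. Softmax the
    # scores into weights, then return the weighted sum (the sentence
    # representation fed to the final classifier) and the weights.
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
              for d in range(dim)]
    return pooled, weights
```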
Procedia PDF Downloads 130
382 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data
Authors: Gayathri Nagarajan, L. D. Dhinesh Babu
Abstract:
Health care is one of the prominent industries that generate voluminous data, creating a need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. In comparison with other applications, accuracy and fast processing are of higher importance for health care applications, as they are directly related to human life. Though many machine learning techniques and big data solutions are used for efficient processing and prediction in health care data, different techniques and frameworks have proved effective for different applications, depending largely on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique of gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to (i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform specifically for health care big data and (ii) discuss the results from the experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions.
The experimental results show that, for the other machine learning techniques, accuracy is largely dependent on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy without depending largely on the dataset characteristics.
Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform
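The core idea of gradient boosted trees, each new tree fitted to the residuals (the negative gradient) of the current ensemble, can be sketched with one-dimensional depth-1 stumps and squared-error loss; production systems such as Spark MLlib use full trees, log-loss for classification, and distributed training.

```python
def fit_stump(xs, residuals):
    # Best single-threshold stump minimizing squared error on residuals.
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, rounds=20, lr=0.5):
    # Each round fits a stump to the current residuals and adds it,
    # scaled by the learning rate, to the ensemble prediction.
    base = sum(ys) / len(ys)
    pred = [base] * len(xs)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, resid)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + sum(lr * s(x) for s in stumps)

def misclassification_rate(y_true, y_pred):
    # The comparison metric used in the paper.
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)
```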
Procedia PDF Downloads 240
381 Association of the Time in Targeted Blood Glucose Range of 3.9–10 Mmol/L with the Mortality of Critically Ill Patients with or without Diabetes
Authors: Guo Yu, Haoming Ma, Peiru Zhou
Abstract:
BACKGROUND: In addition to hyperglycemia, hypoglycemia, and glycemic variability, a decrease in the time in the targeted blood glucose range (TIR) may be associated with an increased risk of death for critically ill patients. However, the relationship between the TIR and mortality may be influenced by the presence of diabetes and by glycemic variability. METHODS: A total of 998 diabetic and non-diabetic patients with severe diseases in the ICU were selected for this retrospective analysis. The TIR is defined as the percentage of time spent in the target blood glucose range of 3.9–10.0 mmol/L within 24 hours. The relationship between TIR and in-hospital death in diabetic and non-diabetic patients was analyzed, as was the effect of glycemic variability. RESULTS: The binary logistic regression model showed a significant association between the TIR as a continuous variable and the in-hospital death of severely ill non-diabetic patients (OR=0.991, P=0.015). As a classification variable, TIR≥70% was significantly associated with in-hospital death (OR=0.581, P=0.003); specifically, TIR≥70% was a protective factor against the in-hospital death of severely ill non-diabetic patients. The TIR of severely ill diabetic patients was not significantly associated with in-hospital death; however, glycemic variability was significantly and independently associated with in-hospital death (OR=1.042, P=0.027). Binary logistic regression analysis of the comprehensive indices showed that, for non-diabetic patients, the C3 index (low TIR & high CV) was a risk factor for increased mortality (OR=1.642, P<0.001). In addition, for diabetic patients, the C3 index was an independent risk factor for death (OR=1.994, P=0.008), and the C4 index (low TIR & low CV) was independently associated with increased survival.
CONCLUSIONS: The TIR of non-diabetic patients during ICU hospitalization was associated with in-hospital death even after adjusting for disease severity and glycemic variability. There was no significant association between the TIR and the mortality of diabetic patients. However, for both diabetic and non-diabetic critically ill patients, the combined effect of high TIR and low CV was significantly associated with ICU mortality. Diabetic patients seem to have higher blood glucose fluctuations and can tolerate a wider glucose range. Both diabetic and non-diabetic critically ill patients should maintain blood glucose levels within the target range to reduce mortality.
Keywords: severe disease, diabetes, blood glucose control, time in targeted blood glucose range, glycemic variability, mortality
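The two exposure measures defined above, TIR over the 3.9–10.0 mmol/L band and glycemic variability as the coefficient of variation, can be sketched directly; readings are assumed to be evenly spaced over the 24-hour window.

```python
def time_in_range(glucose_mmol, low=3.9, high=10.0):
    # TIR (%): share of readings inside the 3.9-10.0 mmol/L target band,
    # with evenly spaced readings standing in for time spent in range.
    in_range = sum(low <= g <= high for g in glucose_mmol)
    return 100.0 * in_range / len(glucose_mmol)

def coefficient_of_variation(glucose_mmol):
    # Glycemic variability as CV (%): standard deviation over mean.
    n = len(glucose_mmol)
    mean = sum(glucose_mmol) / n
    sd = (sum((g - mean) ** 2 for g in glucose_mmol) / n) ** 0.5
    return 100.0 * sd / mean
```

These per-patient values would then enter the logistic regression as continuous covariates or be dichotomized (e.g. TIR≥70%).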
Procedia PDF Downloads 222
380 Robson System Analysis in Kyiv Perinatal Centre
Authors: Victoria Bila, Iryna Ventskivska, Oleksandra Zahorodnia
Abstract:
The goal of the study: to study the distribution of patients of the Kyiv Perinatal Center according to the Robson classification system and compare it with world data. Materials and methods: a comparison of the distribution of patients of the Kyiv Perinatal Center according to the Robson system for two periods, the first quarter of 2019 and the first quarter of 2020. For each group, three indicators were analyzed: the share of the group in the overall structure of patients of the Perinatal Center for the reporting period, the frequency of abdominal delivery in the group, and the contribution of the group to the total number of abdominal deliveries. The obtained data were compared with those of the WHO in its 2017 guidelines for the implementation of the Robson classification. Results and discussion: the distribution of patients of the Perinatal Center into the groups of the Robson classification is not much different from that recommended by the author. Among all women, patients of group 1 dominate, and this indicator does not change over time. A slight increase in the share of group 2 (6.7% in 2019 and 9.3% in 2020) was due to an increase in the number of labor inductions. At the same time, the number of patients in groups 1 and 2 in the Perinatal Center is greater than in the world population, which is explained by the hospitalization of primiparous women with reproductive losses in the past. The Perinatal Center also differs from the world population in the proportion of women in group 5: 5.4%, versus 7.6% in the world. The frequency of caesarean section in the Perinatal Center is within the limits typical for most countries (20.5-20.8%). Moreover, the dominant groups in the structure of caesarean sections are group 5 (21-23.3%) and group 2 (21.9-22.9%), which form the reserve for reducing the number of abdominal deliveries. In group 2, certain results have already been achieved in this respect: the frequency of caesarean section here was 67.8% in 2019 and 51.6% in the first quarter of 2020.
This happened due to a change in the leading method of labor induction. Thus, the Robson system is a convenient and accessible tool for assessing the structure of caesarean sections. The analysis showed that, in general, the structure of caesarean sections in the Perinatal Center is close to world data, and the identified deviations have explanations related to the specialization of the Center.
Keywords: cesarean section, Robson system, Kyiv Perinatal Center, labor induction
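The Robson ten-group assignment used above follows mutually exclusive rules over six obstetric variables and can be sketched as a simple classifier. This is a simplified rendering of the standard group definitions, not the Center's audit code.

```python
def robson_group(parity, previous_cs, fetuses, presentation, weeks, onset):
    # Simplified Robson ten-group classifier.
    # presentation: "cephalic" | "breech" | "transverse"
    # onset: "spontaneous" | "induced" | "cs_before_labour"
    if fetuses > 1:
        return 8                       # all multiple pregnancies
    if presentation == "transverse":
        return 9                       # abnormal lies
    if presentation == "breech":
        return 6 if parity == 0 else 7
    if weeks <= 36:
        return 10                      # preterm single cephalic
    if previous_cs:
        return 5                       # term cephalic with previous CS
    if parity == 0:
        return 1 if onset == "spontaneous" else 2
    return 3 if onset == "spontaneous" else 4
```

Tallying groups over a delivery register yields the three indicators analyzed in the study (group share, group CS rate, and group contribution to all CS).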
Procedia PDF Downloads 137
379 Combined Effect of Therapeutic Exercises and Shock Wave versus Therapeutic Exercises and Phonophoresis in Treatment of Shoulder Impingement Syndrome: A Randomized Controlled Trial
Authors: Mohamed M. Mashaly, Ahmed M. F. El Shiwi
Abstract:
Background: Shoulder impingement syndrome is an encroachment on the subacromial tissues, rotator cuff, subacromial bursa, and the long head of the biceps tendon as a result of narrowing of the subacromial space. Activities requiring repetitive or sustained use of the arms overhead often predispose the rotator cuff tendon to injury. Purpose: to compare the combined effect of therapeutic exercises and shockwave therapy versus therapeutic exercises and phonophoresis in the treatment of shoulder impingement syndrome. Methods: thirty patients diagnosed with shoulder impingement syndrome, stage II of the Neer classification, due to mechanical causes were randomly distributed into two equal groups. The first group consisted of 15 patients with a mean age of 45.46 (±8.64) years, who received therapeutic exercises (stretching of the posterior shoulder capsule and strengthening of the shoulder muscles) and shockwave therapy (6000 shocks in total, 2000 per session, 3 sessions, 2 weeks apart, 0.22 mJ/mm²). The second group consisted of 15 patients with a mean age of 46.26 (±8.05) years, who received the same therapeutic exercises and phonophoresis (3 times per week, every other day, for 4 consecutive weeks). Patients were evaluated pre- and post-treatment for shoulder pain severity, shoulder functional disability, and shoulder flexion, abduction, and internal rotation motions. Results: patients in both groups showed significant improvement in all the measured variables. In the between-group comparison, the shockwave group showed significantly greater improvement in all measured variables than the phonophoresis group.
Interpretation/Conclusion: The combination of therapeutic exercises and shockwave therapy was more effective than therapeutic exercises and phonophoresis at decreasing shoulder pain severity and functional disability and at increasing shoulder flexion, abduction, and internal rotation in patients with shoulder impingement syndrome.
Keywords: shoulder impingement syndrome, therapeutic exercises, shockwave, phonophoresis
Procedia PDF Downloads 472
378 Cataloguing Beetle Fauna (Insecta: Coleoptera) of India: Estimating Diversity, Distribution, and Taxonomic Challenges
Authors: Devanshu Gupta, Kailash Chandra, Priyanka Das, Joyjit Ghosh
Abstract:
Beetles, of the insect order Coleoptera, are the most species-rich group of organisms on the planet today, representing about 40% of the total insect diversity of the world. With a considerable range of landform types, including significant mountain ranges, deserts, fertile irrigated plains, and hilly forested areas, India is one of the mega-diverse countries and harbours more than 0.1 million faunal species. Despite this rich biodiversity, efforts to catalogue the extant beetle species/taxa reported from India have been limited. Therefore, in this paper, information on the beetle fauna of India is provided based on the museum collections of the Zoological Survey of India and on taxa extracted from zoological records and published literature. The species are listed with their valid names, synonyms, type localities, type depositories, and their distribution across the states and biogeographic zones of India. The catalogue also incorporates a bibliography of Indian Coleoptera. The exhaustive species inventory we prepared includes distributional records from the Himalaya, Trans-Himalaya, Desert, Semi-Arid, Western Ghats, Deccan Peninsula, Gangetic Plains, Northeast, Islands, and coastal areas of the country. Our study concludes that many species are still known only from their type localities, so those collection localities need to be revisited and resurveyed for the taxonomic evaluation of those species. For species with single-locality records, taxa-specific biodiversity assessments are required to understand their distributional ranges. The primary challenge is the taxonomic identification of species that were described before independence, whose type materials are held in overseas museums.
For such species, taxonomic revisions of the different groups of beetles are required to solve the problems of identification and classification.
Keywords: checklist, taxonomy, museum collections, biogeographic zones
Procedia PDF Downloads 274
377 Changing Pattern of Drug Abuse: An Outpatient Department Based Study from India
Authors: Anshu Gupta, Charu Gupta
Abstract:
Background: Punjab, a border state in India, has achieved notoriety the world over for its drug abuse problem. People from school kids to the elderly are hooked on drugs, and this pattern of substance abuse is prevalent in cities and villages alike. The excess of younger population in India has further aggravated the situation. It is feared that the benefits of India’s economic growth may well be negated by rising substance abuse, especially in this part of the country. It is quite evident that the pattern of substance abuse tends to change over time, which is an impediment to the formulation of effective strategies to tackle this issue. Aim: The purpose of the study was to ascertain the change in the pattern of drug abuse over two consecutive years in an outpatient department (OPD) population. Method: The study population comprised all patients reporting for deaddiction to the psychiatry outpatient department over a period of twelve months in each of two consecutive years. All patients were evaluated using the International Classification of Diseases, 10th revision (ICD-10) criteria for substance abuse/dependence. Results: A considerably high prevalence of substance abuse was present in the study population. In general, there was an increase in prevalence from the first to the second year, especially among the female population. The increase in prevalence of substance abuse appeared to be more prominent among the younger age group of both sexes. A significant increase in intravenous drug abuse was observed. Peer pressure and parental imitation were the major factors fueling substance abuse, while precipitation or fear of withdrawal symptoms was the major factor preventing abstinence. Substance abuse had a significant effect on the health and interpersonal relations of these patients. Summary/Conclusion: Drug abuse and addiction are on the rise throughout India. Changing cultural values, increasing economic stress, and dwindling supportive bonds appear to be leading to the initiation of substance abuse.
The need of the hour is to formulate a comprehensive strategy to bring about an overall reduction in the use of drugs.
Keywords: deaddiction, peer pressure, parental imitation, substance abuse/dependence
Procedia PDF Downloads 204
376 The Impact of the Fitness Center Ownership Structure on the Service Quality Perception in the Fitness in Serbia
Authors: Dragan Zivotic, Mirjana Ilic, Aleksandra Perovic, Predrag Gavrilovic
Abstract:
As with the provision of other services, service quality perception is one of the key factors to which the modern manager must pay attention. Countries in which state regulation is in transition also have specific features in providing fitness services. Identifying the dimensions in which service quality perception differs most significantly between types of fitness centers enables managers to profile their offer according to the wishes and expectations of users. The aim of the paper was to compare the perception of service quality in the field of fitness in Serbia between three categories of fitness centers: privately owned centers, publicly owned centers, and public-private partnership centers. For this research, 350 respondents of both genders (174 men and 176 women) were interviewed, aged between 18 and 68 years and beneficiaries of fitness services for at least 1 year. An administered questionnaire with 100 items provided information about 15 basic areas in which respondents expressed their perception of service quality in the gym. The core sample was composed of 212 service users in private fitness centers, 69 in public fitness centers, and 69 in public-private partnerships; the sub-samples were equal in the representation of women and men, as well as in age and length of use of fitness services. The obtained results were subjected to univariate analysis with the Kruskal-Wallis non-parametric analysis of variance. Significant differences between the analyzed sub-samples were found in all areas except rapid response and quality outcomes. In the multivariate model, the results were processed by backward stepwise discriminant analysis, which extracted 3 areas that maximize the differences between sub-samples: material and technical basis, secondary facilities, and coaches.
By applying the classification function, 93.87% of private center service users, 62.32% of public center service users, and 85.51% of public-private partnership center service users were correctly classified (86.00% overall). These results allow optimizing the allocation of the necessary resources when profiling a fitness center’s offer in order to adjust it optimally to users’ needs and expectations.
Keywords: fitness, quality perception, management, public ownership, private ownership, public-private partnership, discriminant analysis
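The univariate step described above can be sketched with SciPy's Kruskal-Wallis test. The perception scores below are invented placeholders, not the study's data; they only illustrate the mechanics of comparing three ownership-type sub-samples.

```python
# Minimal sketch of a Kruskal-Wallis comparison across three ownership
# types. Scores are hypothetical stand-ins for questionnaire ratings.
from scipy.stats import kruskal

private = [4.5, 4.2, 4.8, 4.6, 4.9, 4.4]
public = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0]
ppp = [4.0, 3.8, 4.1, 3.9, 4.2, 3.7]  # public-private partnership

stat, p = kruskal(private, public, ppp)
print(f"H = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Significant difference in perception between ownership types")
```

The test ranks all observations jointly, so it needs no normality assumption, which is why it suits ordinal questionnaire data.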
Procedia PDF Downloads 293
375 Selective Guest Accommodation in Zn(II) Bimetallic-Organic Coordination Frameworks
Authors: Bukunola K. Oguntade, Gareth M. Watkins
Abstract:
The synthesis and characterization of metal-organic frameworks (MOFs) is an area of coordination chemistry that has grown rapidly in recent years. Worldwide, there have been growing concerns about future energy supplies and their environmental impacts. A good number of MOFs have been tested for the adsorption of small molecules in the vapour phase. An important issue for potential applications of MOFs as gas adsorption and storage materials is the stability of their structure upon sorption; therefore, studying the thermal stability of MOFs upon adsorption is important. The incorporation of two or more transition metals in a coordination polymer is a current challenge for designed synthesis. This work focused on the synthesis, characterization, and small-molecule adsorption properties of three microporous complexes (one zinc monometallic and two bimetallic) involving Cu(II), Zn(II), and 1,2,4,5-benzenetetracarboxylic acid, prepared by the ambient precipitation and solvothermal methods. The complexes were characterized by elemental analysis, infrared spectroscopy, scanning electron microscopy, thermogravimetric analysis, and X-ray powder diffraction. The N2 adsorption isotherms showed the complexes to be of Type III in the IUPAC classification, with very small pores capable only of small-molecule sorption. All the synthesized compounds were observed to contain water as guest. Investigation of their inclusion properties for small molecules in the vapour phase showed water and methanol as the only possible inclusion candidates, with 10.25 H2O in the monometallic complex [Zn4(H2B4C)2.5(OH)3(H2O)]·10H2O, which was, however, not reusable after a complete structural collapse. The ambient-precipitation bimetallic, [CuZnB4C(H2O)2]·5H2O, was found to be reusable and recoverable from structural collapse after adsorption of 5.75 H2O.
In addition, the solvothermally prepared [CuZnB4C(H2O)2.5]·2H2O showed two cycles of rehydration, with 1.75 H2O and 0.75 MeOH inclusion, while its structure remained unaltered upon dehydration and adsorption.
Keywords: adsorption, characterization, copper, metal-organic frameworks, zinc
Procedia PDF Downloads 134
374 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis; however, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodologies: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and participants were then asked to walk on a force plate scanner. After data preprocessing, and because of differences in walking time and foot size, we normalized the samples by time and foot size. Selected force plate variables served as input to a deep neural network (DNN), and the probability of each foot disorder was measured. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (a yes/no classification). We compared the DNN and the SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate, enabling applications in foot disorder diagnosis. The detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, predictions based on peak plantar pressure distribution were more accurate than those based on the center of pressure dataset.
Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
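The SVM step of the pipeline above can be sketched as a yes/no classifier over pressure features. Everything below is an illustrative assumption: the synthetic feature matrix stands in for the normalized force-plate variables, and the RBF kernel is a common default, not necessarily the one the authors used.

```python
# Hedged sketch of a per-disorder yes/no SVM, with synthetic features
# standing in for normalized peak plantar pressure variables.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 200 subjects x 10 features; the "disorder" group gets shifted means
X_healthy = rng.normal(0.0, 1.0, size=(100, 10))
X_disorder = rng.normal(0.8, 1.0, size=(100, 10))
X = np.vstack([X_healthy, X_disorder])
y = np.array([0] * 100 + [1] * 100)  # 1 = disorder present

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

Running one such classifier per disorder mirrors the study's one-vs-rest framing, and scaling inside the pipeline prevents leakage from the test split.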
Procedia PDF Downloads 358
373 Current Deflecting Wall: A Promising Structure for Minimising Siltation in Semi-Enclosed Docks
Authors: A. A. Purohit, A. Basu, K. A. Chavan, M. D. Kudale
Abstract:
Many estuarine harbours in the world face the problem of siltation in docks, channel entrances, etc. The harbours in India are no exception and require maintenance dredging to achieve navigable depths and keep them operable; dredging is thus inevitable and a costly affair. The heavy siltation in docks in well-mixed, tide-dominated estuaries is mainly due to the settlement of cohesive sediments in suspension. As such, a permanent solution for minimising siltation in such docks is needed, one that alters the hydrodynamic flow field responsible for siltation by constructing structures outside the dock. One such dock on the west coast of India, where siltation of about 2.5-3 m/annum prevails, was considered to understand the hydrodynamic flow field responsible for siltation. The dock is situated in a region where a macro semi-diurnal tide (range of about 5 m) prevails. In order to change the flow field responsible for siltation inside the dock, the suitability of a Current Deflecting Wall (CDW) outside the dock was studied; the CDW is intended to minimise the sediment exchange rate and siltation in the dock. A well-calibrated physical tidal model was used to understand the flow field during various phases of the tide for the existing dock in Mumbai harbour. At the harbour entrance, where the tidal flux exchanges in and out of the dock, measurements of water level and current were made to estimate the sediment transport capacity. The distorted-scale model (1:400 horizontal, 1:80 vertical) of the Mumbai area was used to study the tidal flow phenomenon, with tides generated by an automatic tide generator. Hydraulic model studies carried out under the existing condition (without CDW) reveal that, during the initial hours of the flood tide, the flow hugs the dock's breakwater; part of the flow enters the dock and forms a number of eddies of varying sizes inside the basin, while the remainder bypasses the dock entrance.
During ebb, the flow direction reverses, and part of the flow re-enters the dock from outside, creating eddies at its entrance. These eddies do not allow the water/sediment mass to come out and result in the settlement of sediments in the dock, both because of the eddies themselves and because of longer sediment retention. At later hours, the current strength outside the dock entrance reduces and allows the water mass of the dock to come out. In order to improve the flow field inside the dockyard, two CDWs of lengths 300 m and 40 m were proposed outside the dock breakwater, in line with the pier wall at the dock entrance. Model studies reveal that, during flood, the major flow is deflected away from the entrance and no eddies form inside the dock, while during ebb the flow does not re-enter the dock, and the dock starts discharging its sediment flux immediately during the initial hours of ebb. This reduces not only the entry of sediment into the dock, by about 40%, but also the deposition, by about 42%, due to shorter retention. Thus, the CDW is a promising solution to significantly reduce siltation in docks.
Keywords: current deflecting wall, eddies, hydraulic model, macro tide, siltation
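The distorted scale quoted above (1:400 horizontal, 1:80 vertical) implies standard Froude-law relations for velocity and time. The figures below are a back-of-envelope sketch of those textbook relations applied to a 5 m, 12.4 h semi-diurnal tide; they are not values reported by the study.

```python
# Froude scaling for a distorted tidal model: velocity scales with the
# square root of the vertical scale, time with horizontal / sqrt(vertical).
import math

Lh = 400.0  # horizontal length scale (prototype : model)
Lv = 80.0   # vertical length scale

velocity_scale = math.sqrt(Lv)   # Froude law: V ~ sqrt(g * depth)
time_scale = Lh / math.sqrt(Lv)  # T = L / V

# A 5 m prototype tide range and 12.4 h semi-diurnal period in the model:
model_tide_range_cm = 5.0 / Lv * 100
model_tide_period_min = 12.4 * 60 / time_scale

print(f"velocity scale 1:{velocity_scale:.2f}, time scale 1:{time_scale:.1f}")
print(f"model tide: {model_tide_range_cm:.2f} cm range, "
      f"{model_tide_period_min:.1f} min period")
```

A time scale of roughly 1:45 is what lets a full 12.4-hour tidal cycle be reproduced in under 20 minutes of model time, which is why such distorted models are practical for tidal studies.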
Procedia PDF Downloads 298
372 Dietary Micronutrients and Health among Youth in Algeria
Authors: Allioua Meryem
Abstract:
Similar to much of the developing world, Algeria is currently undergoing an epidemiological transition. While malnutrition, undernutrition, and infectious diseases used to be the main causes of poor health, today there is a higher proportion of chronic, non-communicable diseases (NCDs), including cardiovascular disease, diabetes mellitus, cancer, etc. According to World Health Organization (WHO) estimates for Algeria, NCDs accounted for 63% of all deaths in 2010. The objective of this study was to assess the eating habits and anthropometric characteristics of a group of youths aged 15 to 19 years in Tlemcen. The study was conducted on a total of 806 youths enrolled in a descriptive cross-sectional study. Nutritional status was classified using the international IOTF standards: youths were defined as obese if they had a BMI ≥ 95th percentile and as overweight if 85th ≤ BMI < 95th percentile. Waist circumference (WC) was classified by the HD criteria: moderate risk at WC ≥ 90th percentile and high risk at WC ≥ 95th percentile. The dietary assessment was based on a 24-hour dietary recall assisted by food records, and the USDA nutrient database in the Nutrinux® program was used to analyze dietary intake. The nutrient adequacy ratio (NAR) was calculated by dividing daily individual intake by the dietary recommended intake (DRI) for each nutrient. In this population, 9% were overweight, 3% were obese, and 7.5% had abdominal obesity. Snack foods such as chips, cookies, and chocolate were eaten 1-3 times/day, consumption of fried foods during the week was high, and almost half of the youths consumed sugary drinks more than 3 times per week. We observed a decreased intake of energy and protein (P < 0.001, P = 0.003) and of SFA (P = 0.018); the NARs of phosphorus, iron, magnesium, vitamin B6, vitamin E, folate, niacin, and thiamin reflect lower consumption of fruit, vegetables, milk, and milk products.
The youths surveyed have eating habits that put them at risk of developing obesity and chronic disease.
Keywords: food intake, health, anthropometric characteristics, Algeria
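The nutrient adequacy ratio defined above (daily intake divided by the DRI for each nutrient) can be computed in a few lines. The intake and DRI values below are illustrative stand-ins, not the study's data, and the mean adequacy ratio (MAR) is shown as a common companion metric, not one the abstract claims to use.

```python
# Sketch of the NAR computation: intake / DRI per nutrient, with NAR
# capped at 1.0 when averaging into a mean adequacy ratio (MAR).
intakes = {"iron": 9.0, "folate": 250.0, "vitamin_B6": 1.0}  # hypothetical
dri = {"iron": 15.0, "folate": 400.0, "vitamin_B6": 1.3}     # hypothetical

nar = {nutrient: intakes[nutrient] / dri[nutrient] for nutrient in intakes}
mar = sum(min(ratio, 1.0) for ratio in nar.values()) / len(nar)

for nutrient, ratio in nar.items():
    print(f"{nutrient}: NAR = {ratio:.2f}")
print(f"MAR = {mar:.2f}")
```

Capping each NAR at 1.0 before averaging prevents an oversupplied nutrient from masking deficits in others.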
Procedia PDF Downloads 540
371 Field Emission Scanning Microscope Image Analysis for Porosity Characterization of Autoclaved Aerated Concrete
Authors: Venuka Kuruwita Arachchige Don, Mohamed Shaheen, Chris Goodier
Abstract:
Autoclaved aerated concrete (AAC) is known for its light weight, easy handling, high thermal insulation, and extremely porous structure. Investigation of pore behavior in AAC is crucial for characterizing the material, standardizing design and production techniques, enhancing mechanical, durability, and thermal performance, studying the effectiveness of protective measures, and analyzing the effects of weather conditions. The finer details of pores are difficult to observe with acceptable accuracy. High-resolution field emission scanning electron microscope (FESEM) image analysis is a promising technique for investigating the pore behavior and density of AAC, and it is adopted in this study. Mercury intrusion porosimetry and gas pycnometry were employed to characterize the porosity distribution and density parameters. The analysis considered three different densities of AAC blocks and three layers in the altitude direction within each block. A set of procedures was presented to extract and analyze the details of pore shape, size, connectivity, and percentage from FESEM images of AAC, and average pore behavior per unit area was reported. Comparison of the porosity distribution and density parameters revealed significant variations. FESEM imaging offered unparalleled insights into porosity behavior, surpassing the capabilities of the other techniques. The multi-stage analysis provides the porosity percentage occupied by each pore category, the total porosity, the variation of pore distribution with AAC density and layer, the numbers of two-dimensional and three-dimensional pores, the variation of apparent and matrix densities with pore behavior, the variation of pore behavior with aluminum content, and the relationships among shape, diameter, connectivity, and percentage in each pore classification.
Keywords: autoclaved aerated concrete, density, imaging technique, microstructure, porosity behavior
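One elementary step of the image analysis described above, estimating the pore-area percentage of a micrograph, can be sketched by thresholding a grayscale image. The synthetic array and the threshold value below are assumptions standing in for a real FESEM image and a calibrated cutoff; the abstract does not specify the authors' actual procedure.

```python
# Illustrative pore-fraction estimate: pixels darker than a threshold
# are counted as pore area. A fake image stands in for a FESEM scan.
import numpy as np

rng = np.random.default_rng(1)
image = rng.uniform(0.0, 1.0, size=(256, 256))  # fake grayscale image
image[60:120, 60:120] = 0.05                    # a dark "pore" region

threshold = 0.2                  # hypothetical calibrated cutoff
pore_mask = image < threshold
porosity_pct = 100.0 * pore_mask.mean()
print(f"pore area: {porosity_pct:.1f}% of the imaged surface")
```

From the same binary mask, connected-component labeling would yield the per-pore shape, size, and connectivity statistics the study enumerates.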
Procedia PDF Downloads 68
370 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is considerable. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used for the delivery of free-of-charge services; consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of non-cash-generating assets, and IPSAS 26, Impairment of cash-generating assets, have been designed with this specificity in mind. When measuring impairment of assets, it is important to select the relevant methods. For the measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. The value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the cash flows to be received in the future. The article classifies public sector assets as non-cash-generating and cash-generating assets and also deals with the factors that should be considered when evaluating impairment. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on the methods of their selection. The traditional and the expected cash flow approaches for calculating the discounted value are reviewed. The article also discusses the recognition of impairment loss and its reflection in financial reporting.
The article concludes that regardless of the functional purpose of the impaired asset, and whichever method is used for measuring it, the financial reporting should present realistic information regarding the value of the assets. In the theoretical development of the issue, the methods of scientific abstraction, analysis, and synthesis were used, and the research was carried out with a systemic approach, drawing on international accounting standards and the theoretical research and publications of Georgian and foreign scientists.
Keywords: cash-generating assets, non-cash-generating assets, recoverable (usable restorative) value, value in use
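The two discounting approaches mentioned above can be sketched numerically: the traditional approach discounts a single best-estimate cash flow per period, while the expected cash flow approach probability-weights several scenarios. All figures and the 8% rate below are illustrative assumptions, not values from the article or from the IPSAS standards.

```python
# Value in use under the traditional vs. expected cash flow approach.
# All cash flows, probabilities, and the discount rate are hypothetical.
rate = 0.08

# Traditional approach: one most-likely cash flow per year
traditional = [1000.0, 1000.0, 1000.0]
pv_traditional = sum(cf / (1 + rate) ** (t + 1)
                     for t, cf in enumerate(traditional))

# Expected cash flow approach: probability-weighted scenarios per year
scenarios = [  # (cash flow, probability) pairs for each year
    [(800.0, 0.3), (1000.0, 0.5), (1200.0, 0.2)],
    [(800.0, 0.3), (1000.0, 0.5), (1200.0, 0.2)],
    [(800.0, 0.3), (1000.0, 0.5), (1200.0, 0.2)],
]
pv_expected = sum(
    sum(cf * p for cf, p in year) / (1 + rate) ** (t + 1)
    for t, year in enumerate(scenarios)
)
print(f"value in use (traditional): {pv_traditional:.2f}")
print(f"value in use (expected cash flow): {pv_expected:.2f}")
```

With the downside scenario weighted more heavily than the upside, the expected cash flow approach here yields a lower value in use than the single best estimate, which is exactly the kind of difference that motivates choosing between the two methods.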
Procedia PDF Downloads 143
369 Analysis of Aquifer Productivity in the Mbouda Area (West Cameroon)
Authors: Folong Tchoffo Marlyse Fabiola, Anaba Onana Achille Basile
Abstract:
Located in the western region of Cameroon, in the Bamboutos department, the city of Mbouda belongs to the Pan-African basement. The water resources exploited in this region consist of surface water and groundwater from weathered and fractured aquifers within the basement. To study the factors determining aquifer productivity in the Mbouda area, we adopted a methodology based on collecting data from boreholes drilled in the region, identifying the different rock types, analyzing structures, and conducting geophysical surveys in the field. The results allowed us to distinguish two main rock types: metamorphic rocks composed of amphibolites and migmatitic gneisses, and igneous rocks, namely granodiorites and granites. Several types of structures were also observed, including planar structures (foliation and schistosity), folded structures (folds), and brittle structures (fractures and lineaments). A structural synthesis combines these elements into three major phases of deformation: phase D1 is characterized by foliation and schistosity, phase D2 is marked by shear planes, and phase D3 is characterized by open and sealed fractures. The analysis of structures (fractures in outcrops, Landsat lineaments, subsurface structures) shows a predominance of ENE-WSW and WNW-ESE directions. Through electrical surveys and borehole data, we identified the sequence of the different geological formations. Four geo-electric layers were identified, each with a different electrical behavior: conductive, semi-resistive, or resistive; the last conductive layer is considered a potentially aquiferous zone. The flow rates of the boreholes ranged from 2.6 to 12 m³/h, classified as moderate to high according to the CIEH classification. The boreholes were mainly located in basalts, which are mineralogically rich in ferromagnesian minerals; this composition contributes to their high productivity, as such rocks are more likely to be weathered.
The boreholes were positioned along linear structures or at their intersections.
Keywords: Mbouda, Pan-African basement, productivity, West Cameroon
Procedia PDF Downloads 62
368 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analyzing the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web; the value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process, trained on the 500K queries of the MS MARCO dataset, to extract the most relevant text passages and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. However, automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date, so correct answers predicted by the system are often judged incorrect by the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. Any such dataset proves inefficient for questions that have time-varying answers. For illustration, consider the query 'Where will the next Olympics be held?' The gold answer given in the GNQ dataset is 'Tokyo'. Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, that answer was correct. But if the same question is asked in 2022, then the answer is 'Paris, 2024'.
Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
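The core idea, judging a prediction against the reference answer that is valid at the current timestamp, can be sketched as follows. This is our reading of the proposal, not the authors' exact metric; the time-indexed gold table, the date ranges, and the substring match are all illustrative assumptions.

```python
# Sketch of time-aware QA evaluation: a prediction in the top-n list is
# correct if it matches the gold answer valid at evaluation time.
from datetime import date
from typing import List, Optional

# Hypothetical time-indexed gold answers for "Where will the next
# Olympics be held?" -- (valid_from, valid_to, answer)
gold_by_period = [
    (date(2013, 9, 7), date(2020, 7, 23), "tokyo"),
    (date(2020, 7, 24), date(2024, 7, 25), "paris"),
]

def gold_answer(today: date) -> Optional[str]:
    """Return the reference answer valid on the given date, if any."""
    for start, end, answer in gold_by_period:
        if start <= today <= end:
            return answer
    return None

def is_correct(top_n_predictions: List[str], today: date) -> bool:
    """Credit the system if any top-n prediction contains the gold answer."""
    gold = gold_answer(today)
    return gold is not None and any(gold in p.lower() for p in top_n_predictions)

print(is_correct(["Paris, 2024", "Los Angeles"], date(2022, 6, 1)))  # True
print(is_correct(["Tokyo"], date(2022, 6, 1)))                       # False
```

Under this scheme the 2016-era gold answer 'Tokyo' is simply the entry for an earlier validity window, so the same QA pair can be scored correctly in any year.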
Procedia PDF Downloads 101
367 Analysis of Influencing Factors on Infield-Logistics: A Survey of Different Farm Types in Germany
Authors: Michael Mederle, Heinz Bernhardt
Abstract:
The management of machine fleets and autonomous vehicle control will considerably increase efficiency in future agricultural production. Entire process chains in particular, e.g. harvesting complexes with several interacting combine harvesters, grain carts, and removal trucks, provide plenty of optimization potential, and organization and pre-planning make these efficiency reserves accessible. One way to achieve this is to optimize infield path planning. Autonomous machinery especially requires precise specifications about infield logistics to navigate effectively and optimize processes in the fields, individually or in machine complexes. In the past, a lot of theoretical optimization of infield logistics has been done, mainly based on field geometry. However, there are reasons why farmers often do not apply the infield strategy suggested by mathematical route planning tools. To make computational optimization more useful for farmers, this study focuses on these influencing factors through expert interviews. As a result, practice-oriented navigation not only to the field but also within the field will become possible. The survey study is intended to cover the entire range of German agriculture: rural mixed farms with simple technology are considered, as well as large agricultural cooperatives that farm thousands of hectares using track guidance and various other electronic assistance systems. First results show that farm managers using guidance systems increasingly attune their infield logistics to direction-giving obstacles such as power lines; in consequence, they can avoid inefficient boom flips during plant protection with the sprayer. Livestock farmers rather focus on the application of organic manure, with its specific requirements concerning road conditions, landscape terrain, and field access points.
The cultivation of sugar beets makes great demands on infield patterns because of particularities such as the row-crop system and high logistics demands. Furthermore, several machines working in the same field simultaneously influence each other, regardless of whether they are of the same type. Specific infield strategies are always based on the interaction of several different influences and decision criteria; single working steps like tillage, seeding, plant protection, or harvest mostly cannot be considered individually, and the entire production process has to be taken into consideration to determine the right infield logistics. One long-term objective of this examination is to integrate the obtained influences on infield strategies as decision criteria into an infield navigation tool. In this way, path planning will become more practical for farmers, which is a basic requirement for automatic vehicle control and increased process efficiency.
Keywords: autonomous vehicle control, infield logistics, path planning, process optimizing
Procedia PDF Downloads 233
366 Understanding the Information in Principal Component Analysis of Raman Spectroscopic Data during Healing of Subcritical Calvarial Defects
Authors: Rafay Ahmed, Condon Lau
Abstract:
Bone healing is a complex and sequential process involving changes at the molecular level. Raman spectroscopy is a promising technique for studying bone mineral and matrix environments simultaneously. In this study, subcritical calvarial defects are used to study bone composition during healing without disturbing the fracture; the model allowed monitoring of natural bone healing while avoiding mechanical harm to the callus. Calvarial defects were created with a 1 mm burr drill in the parietal bones of Sprague-Dawley rats (n=8) to serve as in vivo defects. After 7 days, the animals were euthanized and their skulls harvested. One additional defect per sample was then created on the opposite parietal bone using the same calvarial defect procedure, to serve as a control defect. Raman spectroscopy (785 nm) was used to investigate bone parameters on three different skull surfaces: in vivo defects, control defects, and normal surface. Principal component analysis (PCA) was utilized for data analysis and interpretation of the Raman spectra and helped in the classification of the groups. PCA was able to distinguish in vivo defects from the normal surface and control defects. PC1 shows the major variation at 958 cm⁻¹, which corresponds to the ʋ1 phosphate mineral band; PC2 shows the major variation at 1448 cm⁻¹, the characteristic band of CH2 deformation, which corresponds to collagens. The Raman parameters, namely the mineral-to-matrix ratio and crystallinity, were found to be significantly decreased in the in vivo defects compared to the normal surface and controls. Scanning electron microscope and optical microscope images show the formation of a newly generated matrix by means of bony bridges of collagen. Optical profilometry shows that surface roughness increased by 30% from the controls to the in vivo defects after 7 days. These results agree with the Raman assessment parameters and confirm new collagen formation during healing.
Keywords: Raman spectroscopy, principal component analysis, calvarial defects, tissue characterization
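The PCA step described above can be sketched on synthetic spectra: one group is given a weaker 958 cm⁻¹ phosphate band, and the first principal component's loading then peaks at that band. The spectra, peak widths, and group sizes below are invented for illustration and do not reproduce the study's data.

```python
# Hedged sketch of PCA on synthetic "Raman spectra": PC1 loadings
# localize the band (958 cm^-1) that separates the two groups.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavenumbers = np.arange(800, 1800)  # cm^-1 axis

def spectrum(phosphate_height):
    """One noisy spectrum with a phosphate peak and a fixed CH2 peak."""
    phosphate = phosphate_height * np.exp(-((wavenumbers - 958) ** 2) / 50.0)
    ch2 = 0.5 * np.exp(-((wavenumbers - 1448) ** 2) / 80.0)
    return phosphate + ch2 + rng.normal(0, 0.01, wavenumbers.size)

normal = np.array([spectrum(1.0) for _ in range(10)])
defect = np.array([spectrum(0.4) for _ in range(10)])  # less mineral
X = np.vstack([normal, defect])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
band = wavenumbers[np.argmax(np.abs(pca.components_[0]))]
print(f"PC1 loading peaks at {band} cm^-1")  # expect near 958
```

Inspecting the loadings this way is what lets a PCA result be read chemically, mapping an abstract component back to a named vibrational band.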
Procedia PDF Downloads 223
365 The Role of ICTs in Improving the Quality of Public Spaces in Large Cities of the Third World
Authors: Ayat Ayman Abdelaziz Ibrahim Amayem, Hassan Abdel-Salam, Zeyad El-Sayad
Abstract:
Nowadays, ICTs have spread through everyday life in an unprecedented way, yet great attention is paid to the technology while its social aspect is ignored. With the pervasive spread of the internet, smartphone applications, and digital social networking, people are becoming socially connected through virtual spaces instead of meeting in physical public spaces. This paper therefore aims to find ways of implementing ICTs in public spaces so that they regain their status as attractive places for people, encourage meetings in real life, and create sustainable, lively city centers. One example of an urban space in the city center of Alexandria is selected for the study. Alexandria is a large metropolitan city undergoing rapid transformation, and improving the quality of its public spaces will have great effects on the well-being of the whole city. The major roles that ICTs can play in the public space are: culture and art, education, planning and design, games and entertainment, and information and communication. Based on this classification, various examples and proposals of ICT interventions in public spaces are presented and analyzed with the goal of encouraging good old-fashioned social interaction by creating the New Social Public Place of this Digital Era. The paper adopts methods such as a questionnaire to evaluate people's willingness to accept the idea of using ICTs in public spaces, their needs, and their proposals for an attractive place; observation to understand people's behavior and their movement through the space; and, finally, an experimental design proposal for the selected urban space.
Accordingly, this study will help to identify design principles that can be adopted in the design of future public spaces to meet the needs of the digital era's users, reconciling new concepts of social life with the rules of place-making. Keywords: Alexandria sustainable city center, digital place-making, ICTs, social interaction, social networking, urban places
Procedia PDF Downloads 420
364 Scalable Performance Testing: Facilitating the Assessment of Application Performance Under Substantial Loads and Mitigating the Risk of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools such as JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (a file storage system), and security groups offers several key benefits. Building a performance test framework with this approach optimizes resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services such as EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master, which consolidates them into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application. Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master-slave using cloud services
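The master-slave orchestration pattern the abstract describes can be sketched in miniature. The toy sketch below is not JMeter itself: it uses Python threads as stand-in "slaves" with simulated latencies, and the function names and the 1% error rate are assumptions for illustration. It shows only the pattern: the master splits a target request count evenly across slaves, each slave reports raw per-request results, and the master consolidates them into aggregate metrics.

```python
from concurrent.futures import ThreadPoolExecutor
import random
import statistics

def run_slave(slave_id, n_requests):
    """Each 'slave' executes its share of the load and reports raw results.
    Latencies are simulated here; a real slave would hit the target system."""
    results = []
    for _ in range(n_requests):
        latency_ms = random.uniform(20, 120)  # simulated response time
        error = random.random() < 0.01        # simulated ~1% error rate
        results.append((latency_ms, error))
    return slave_id, results

def master(total_requests, n_slaves):
    """Master distributes the workload evenly across slaves, then
    consolidates the slave reports into aggregate metrics."""
    share, rem = divmod(total_requests, n_slaves)
    plan = [share + (1 if i < rem else 0) for i in range(n_slaves)]
    with ThreadPoolExecutor(max_workers=n_slaves) as pool:
        reports = list(pool.map(run_slave, range(n_slaves), plan))
    latencies = [lat for _, res in reports for lat, _ in res]
    errors = sum(err for _, res in reports for _, err in res)
    return {
        "requests_sent": len(latencies),
        "errors": errors,
        "avg_response_ms": statistics.mean(latencies),
        "throughput_per_slave": [len(res) for _, res in reports],
    }

report = master(total_requests=10_000, n_slaves=4)
print(report["requests_sent"], report["errors"])
```

In actual JMeter, the equivalent of `master()` is the non-GUI remote start, e.g. `jmeter -n -t plan.jmx -R slave1,slave2 -l results.jtl`, where the listed remote hosts run `jmeter-server` and stream their sample results back for aggregation.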
Procedia PDF Downloads 27
363 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data set and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-Lib, pandas-ta).
The user also supplies data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the roadmap for future development in FreqAI. Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
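The two core ideas in this abstract, retraining on a sliding window of recent data and rejecting prediction points that fall outside the training parameter space, can be sketched without any of FreqAI's machinery. The sketch below is an illustrative toy, not FreqAI's implementation: the lagged-return features, the least-squares regressor (standing in for CatBoost/LightGBM/etc.), and the per-feature z-score outlier guard with threshold `k` are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_features(prices, window=5):
    """Toy feature engineering: lagged returns as the feature set.
    (A FreqAI-style deployment would seed this from indicator libraries.)"""
    returns = np.diff(prices) / prices[:-1]
    X = np.column_stack(
        [returns[i:len(returns) - window + i] for i in range(window)]
    )
    y = returns[window:]  # next-step return as the prediction target
    return X, y

def fit(X, y):
    # Least squares stands in for the pluggable regression back-end.
    Xb = np.column_stack([X, np.ones(len(X))])  # add bias column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def predict_with_outlier_guard(coef, X_train, x_new, k=3.0):
    """Reject prediction points outside the training parameter space
    (> k standard deviations from the training mean in any feature)."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0) + 1e-12
    if np.any(np.abs((x_new - mu) / sigma) > k):
        return None  # outlier: the model never saw this region
    return float(np.dot(np.append(x_new, 1.0), coef))

# Adaptive loop: retrain only on a sliding window of recent data.
prices = 100 + np.cumsum(rng.normal(0, 0.5, 400))
X, y = make_features(prices)
window = 200
X_train, y_train = X[-window:], y[-window:]
coef = fit(X_train, y_train)

pred_in = predict_with_outlier_guard(coef, X_train, X_train[-1])
pred_out = predict_with_outlier_guard(coef, X_train, X_train[-1] + 100.0)
print(pred_in, pred_out)
```

An in-distribution point yields a numeric prediction while the artificially shifted point is rejected as an outlier; in a live deployment the loop would refit on each new window as fresh candles arrive.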
Procedia PDF Downloads 89