Search results for: combined biomarker
819 Sounds of Power: An Ethnoorganological Approach to Understanding Colonial Music Culture in the Peruvian Andes
Authors: Natascha Reich
Abstract:
In colonial Peru, the Spanish crown relied on religious orders, most notably the Dominicans, Franciscans, and Jesuits, to accelerate the processes of colonization. The dissemination of Christian art, architecture, and music, and above all the agency of indigenous people in their production, played a key role in facilitating the acceptance of the new religious and political system. Current research on Peruvian colonial music culture and its role as a vehicle for colonization focuses on practices in urban centers. The lack of (written) primary sources seems to make rural areas a less attractive research territory for musicologists. This paper advocates a more inclusive approach. By investigating seventeenth-century pipe organs as material remains of Franciscan missionary music culture, it shows how reactions to colonial forces and Christianization in rural Andean locations could follow tendencies different from those in urban areas. Indigenous musicians in cities tried to 'fit' into the European system in order to be accepted by the ruling Spanish elite. By contrast, the indigenous-built pipe organs in the rural Peruvian Colca Valley show distinctly native-Andean influences. This paper argues that this syncretism can be interpreted as hybridity in Homi K. Bhabha’s sense, as a means for the colonized to undermine the power of the colonizer and to advance reactionary politics. Not only will it show the necessity of considering rural Peruvian music history in modern scholarship for arriving at a more complete picture of colonial culture, but it will also evidence the advantages of a mixed-methodology approach. Historical organology, combined with concepts from ethnomusicology and post-colonial studies, proves to be a useful tool in the absence or scarcity of written primary sources.
Keywords: cultural hybridity, music as reactionary politics, Latin American pipe organs, Peruvian colonial music
Procedia PDF Downloads 164
818 Effects of Cannabis and Cocaine on Driving Related Tasks of Perception, Cognition, and Action
Authors: Michelle V. Tomczak, Reyhaneh Bakhtiari, Aaron Granley, Anthony Singhal
Abstract:
Objective: Cannabis and cocaine are associated with a range of mental and physical effects that can impair aspects of human behavior. Driving is a complex cognitive behavior that is an essential part of everyday life and can be broken down into many subcomponents, each of which can uniquely impact road safety. With the growing movement of jurisdictions to legalize cannabis, there is an increased focus on impairment and driving. The purpose of this study was to identify driving-related cognitive-performance deficits that are impacted by recreational drug use. Design and Methods: With the assistance of law enforcement agencies, we recruited over 300 participants under the influence of various drugs, including cannabis and cocaine. These individuals performed a battery of computer-based tasks scientifically proven to be related to on-road driving performance and designed to test response speed, memory processes, perceptual-motor skills, and decision making. Data from a control group of healthy non-drug-using adults were collected as well. Results: Compared to controls, the drug group showed deficits in all tasks. The data also showed clear differences between the cannabis and cocaine groups, where cannabis users were faster and performed better on some aspects of the decision-making and perceptual-motor tasks. Memory performance was better in the cocaine group for simple tasks but not more complex tasks. Finally, the participants who consumed both drugs performed most similarly to the cannabis group. Conclusions: Our results show distinct and combined effects of cannabis and cocaine on human performance relating to driving. These differential effects are likely related to the unique effects of each drug on the human brain and how they distinctly contribute to mental states. Our results have important implications for road safety associated with driver impairment.
Keywords: driving, cognitive impairment, recreational drug use, cannabis and cocaine
Procedia PDF Downloads 126
817 Poly-ε-Caprolactone Nanofibers with Synthetic Growth Factor Enriched Liposomes as Controlled Drug Delivery System
Authors: Vera Sovkova, Andrea Mickova, Matej Buzgo, Karolina Vocetkova, Eva Filova, Evzen Amler
Abstract:
PCL (poly-ε-caprolactone) nanofibrous scaffolds with adhered liposomes were prepared and tested as a possible drug delivery system for various synthetic growth factors. TGFβ, bFGF, and IGF-I have been shown to increase hMSC (human mesenchymal stem cell) proliferation and to induce hMSC differentiation. Functionalized PCL nanofibers were prepared with synthetic growth factors encapsulated in liposomes adhered to them in three different concentrations. Other samples contained PCL nanofibers with adhered free synthetic growth factors. Medium free of synthetic growth factors served as a control. The interaction of the liposomes with the PCL nanofibers was visualized by SEM, and the release kinetics were determined by ELISA testing. The potential of liposomes immobilized on the biodegradable scaffolds as a delivery system for synthetic growth factors, and as a suitable system for MSC adhesion, proliferation, and differentiation in vitro, was evaluated by MTS assay, dsDNA quantification, confocal microscopy, flow cytometry, and real-time PCR. The results showed that the growth factors adhered to the PCL nanofibers stimulated cell proliferation mainly up to day 11, after which their effect diminished. By contrast, the release of the lowest concentration of growth factors from liposomes resulted in gradual proliferation of MSCs throughout the experiment. Moreover, liposomes, as well as free growth factors, stimulated type II collagen production, which was confirmed by immunohistochemical staining using a monoclonal antibody against type II collagen. The results of this study indicate that growth factor-enriched liposomes adhered to the surface of PCL nanofibers could be useful as a drug delivery instrument for applications on short timescales, combined with nanofiber scaffolds to promote local and persistent delivery while mimicking the local microenvironment.
This work was supported by project LO1508 from the Ministry of Education, Youth and Sports of the Czech Republic.
Keywords: drug delivery, growth factors, hMSC, liposomes, nanofibres
Procedia PDF Downloads 289
816 Arabic Lexicon Learning to Analyze Sentiment in Microblogs
Authors: Mahmoud B. Rokaya
Abstract:
The study of opinion mining and sentiment analysis includes analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media, social networks, reviews, forum discussions, microblogs, and Twitter leads to a parallel growth in the field of sentiment analysis. The field of sentiment analysis tries to develop effective tools to make it possible to capture the trends of people. There are two approaches in the field: lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon which includes sentiment words and phrases with assigned numeric scores. These scores reveal whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientations. Creating lexicons manually is hard. This brings the need for adaptive automated methods for generating a lexicon. The proposed method generates dynamic lexicons based on the corpus and then classifies text using these lexicons. In the proposed method, different approaches are combined to generate lexicons from text. The proposed method classifies the tweets into 5 classes instead of +ve or –ve classes. The sentiment classification problem is written as an optimization problem; finding optimal sentiment lexicons is the goal of the optimization process. The solution was produced based on mathematical programming approaches to find the best lexicon to classify texts. A genetic algorithm was written to find the optimal lexicon. Then, extraction of a meta-level feature was done based on the optimal lexicon. The experiments were conducted on several datasets. Results, in terms of accuracy, recall, and F-measure, outperformed the state-of-the-art methods proposed in the literature on some of the datasets.
A better understanding of the Arabic language and culture of Arab Twitter users, and of the sentiment orientation of words in different contexts, can be achieved based on the sentiment lexicons proposed by the algorithm.
Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation
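The genetic-algorithm search described above can be illustrated with a minimal toy sketch. Everything here — the five-tweet corpus, the clamped rounded-sum classifier, and the GA parameters — is an illustrative assumption, not the authors' implementation:

```python
import random

random.seed(42)

# Hypothetical toy corpus: each tweet is a bag of words with a known
# 5-class sentiment label (0 = very negative ... 4 = very positive).
TWEETS = [
    (["good", "great"], 4), (["good"], 3), (["ok"], 2),
    (["bad"], 1), (["bad", "awful"], 0),
]
VOCAB = sorted({w for words, _ in TWEETS for w in words})

def classify(lexicon, words):
    # Sum the word scores and clamp the rounded sum onto the 5 classes.
    score = sum(lexicon[w] for w in words)
    return max(0, min(4, round(score)))

def fitness(lexicon):
    # Objective: number of correctly classified tweets.
    return sum(classify(lexicon, ws) == label for ws, label in TWEETS)

def evolve(pop_size=30, generations=60):
    pop = [{w: random.uniform(-4, 4) for w in VOCAB} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {w: random.choice((a[w], b[w])) for w in VOCAB}  # crossover
            if random.random() < 0.3:                                # mutation
                child[random.choice(VOCAB)] = random.uniform(-4, 4)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_lexicon = evolve()
```

The abstract's meta-level feature extraction and multi-dataset evaluation are not sketched here.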
Procedia PDF Downloads 188
815 Cost Analysis of Neglected Tropical Disease in Nigeria: Implication for Programme Control and Elimination
Authors: Lawong Damian Bernsah
Abstract:
Neglected Tropical Diseases (NTDs) are most predominant among poor and rural populations and are endemic in 149 countries. These diseases are highly prevalent, infecting 1.4 billion people worldwide. There are 17 neglected tropical diseases recognized by WHO that together constitute the fourth largest health and economic burden of all communicable diseases. Five of these 17 diseases are considered in the cost analysis of this paper: lymphatic filariasis, onchocerciasis, trachoma, schistosomiasis, and soil-transmitted helminth infections. WHO has proposed a roadmap for eradication and elimination by 2020, and treatments have been donated through the London Declaration by pharmaceutical manufacturers. The paper estimates the cost of the NTD control and elimination programme for each NTD and in total for Nigeria. This is necessary as it forms the basis upon which programme budgets and expenditure could be set. Again, given the opportunity cost that resources for NTDs face, it is necessary to estimate the cost so as to provide a basis for comparison. The cost of the NTD control and elimination programme is estimated using the population at risk for each NTD and for the total. The population at risk is obtained from the national master plan for 2015-2020, while the cost per person was taken from similar studies conducted in similar settings and ranges from US$0.1 to US$0.5 for Mass Administration of Medicine (MAM) and between US$1 and US$1.5 for each NTD. The combined cost for all the NTDs was estimated to be US$634.88 million for the period 2015-2020 and US$1.9 billion for each NTD disease for the same period. For the purpose of sensitivity analysis and for robustness, the cost per person was varied, and the estimates remained high.
Given that health expenditure for Nigeria (% of GDP) averaged 3.5% over the period 1995-2014, it is very clear that efforts have to be made to improve allocation to the health sector in general, which it is hoped would trickle down to NTD control and elimination. Thus, the government and donor partners would need to step up budgetary allocation and also be aware of the costs of the NTD control and elimination programme, since these resources have alternative uses.
Keywords: Neglected Tropical Disease, Cost Analysis, Neglected Tropical Disease Programme Control and Elimination, Cost per Person
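The costing logic described above (population at risk multiplied by a unit cost per person, varied across the cited range for sensitivity analysis) can be sketched as below. The population figures are hypothetical placeholders, not the master-plan values:

```python
# Hypothetical populations at risk (persons); NOT the master-plan values.
population_at_risk = {
    "lymphatic_filariasis": 120e6,
    "onchocerciasis": 50e6,
    "schistosomiasis": 60e6,
}
YEARS = 6  # programme period 2015-2020 inclusive

def programme_cost(pop, unit_cost_usd, years=YEARS):
    # Total cost for one NTD over the programme period.
    return pop * unit_cost_usd * years

# Sensitivity analysis over the cited MAM unit-cost range (US$0.1-0.5):
low = sum(programme_cost(p, 0.1) for p in population_at_risk.values())
high = sum(programme_cost(p, 0.5) for p in population_at_risk.values())
```

Varying the unit cost bound-to-bound, as the paper does, turns a single point estimate into a low-high band for budgeting.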
Procedia PDF Downloads 273
814 Cleaning of Scientific References in Large Patent Databases Using Rule-Based Scoring and Clustering
Authors: Emiel Caron
Abstract:
Patent databases contain patent-related data, organized in a relational data model, and are used to produce various patent statistics. These databases store raw data about scientific references cited by patents. For example, Patstat holds references to tens of millions of scientific journal publications and conference proceedings. These references might be used to connect patent databases with bibliographic databases, e.g., to study the relation between science, technology, and innovation in various domains. Problematic in such studies is the low data quality of the references, i.e., they are often ambiguous, unstructured, and incomplete. Moreover, a complete bibliographic reference is stored in only one attribute. Therefore, a computerized cleaning and disambiguation method for large patent databases is developed in this work. The method uses rule-based scoring and clustering. The rules are based on bibliographic metadata, retrieved from the raw data by regular expressions, and are transparent and adaptable. The rules, in combination with string similarity measures, are used to detect pairs of records that are potential duplicates. Due to the scoring, different rules can be combined to join scientific references, i.e., the rules reinforce each other. The scores are based on expert knowledge and initial method evaluation. After the scoring, pairs of scientific references that score above a certain threshold are clustered by means of a single-linkage clustering algorithm to form connected components. The method is designed to disambiguate all the scientific references in the Patstat database. The performance evaluation of the clustering method, on a large golden set with highly cited papers, shows on average 99% precision and 95% recall. The method is therefore accurate but careful, i.e., it weighs precision over recall. Consequently, separate clusters of high precision are sometimes formed when there is not enough evidence for connecting scientific references, e.g., in the case of missing year and journal information for a reference. The clusters produced by the method can be used to directly link the Patstat database with bibliographic databases such as the Web of Science or Scopus.
Keywords: clustering, data cleaning, data disambiguation, data mining, patent analysis, scientometrics
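The pairing-and-clustering core of such a method might look like the following sketch. The rules, weights, and threshold here are illustrative assumptions, not the Patstat configuration; single-linkage clustering over above-threshold pairs is implemented as connected components via union-find:

```python
from difflib import SequenceMatcher

# Toy reference records; a real pipeline would first parse these fields
# out of the single raw-reference attribute with regular expressions.
refs = [
    {"id": 0, "title": "deep learning for nlp", "year": "2015", "journal": "jmlr"},
    {"id": 1, "title": "deep learning for n.l.p", "year": "2015", "journal": "jmlr"},
    {"id": 2, "title": "graph clustering methods", "year": "2010", "journal": "kdd"},
]

def pair_score(a, b):
    # Transparent, adaptable rules; each match adds evidence, so the
    # rules reinforce each other (weights here are illustrative).
    score = 0.0
    if a["year"] == b["year"]:
        score += 1.0
    if a["journal"] == b["journal"]:
        score += 1.0
    score += 2.0 * SequenceMatcher(None, a["title"], b["title"]).ratio()
    return score

THRESHOLD = 3.5  # illustrative; tuned from expert knowledge in practice

def clusters(records):
    # Union-find: above-threshold pairs become edges; the connected
    # components are the single-linkage clusters.
    parent = {r["id"]: r["id"] for r in records}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            if pair_score(a, b) >= THRESHOLD:
                parent[find(a["id"])] = find(b["id"])
    groups = {}
    for r in records:
        groups.setdefault(find(r["id"]), []).append(r["id"])
    return sorted(sorted(g) for g in groups.values())

result = clusters(refs)
```

Note how the threshold realizes the precision-over-recall behaviour described above: two references with similar titles but no year/journal evidence simply never reach it.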
Procedia PDF Downloads 194
813 Designing a Model for Measuring the Components of Good Governance in the Iranian Higher Education System
Authors: Maria Ghorbanian, Mohammad Ghahramani, Mahmood Abolghasemi
Abstract:
Universities and institutions of higher education in Iran, like other higher education institutions in the world, have a weighty mission: to educate students based on the needs of the country. Taking on such a serious responsibility requires a good governance system for planning, formulating executive plans, evaluating, and finally modifying them in accordance with current conditions and the challenges ahead. In this regard, the present study was conducted with the aim of identifying the components of good governance in the Iranian higher education system by survey method and with a quantitative approach. In order to collect data, a researcher-made questionnaire was used, which includes two parts: personal and professional characteristics (5 questions) and the three components of good governance in the Iranian higher education system: good management and leadership (8 items), continuous and effective evaluation (of university performance, finance, and university appointments) (8 items), and civic responsibility and sustainable development (7 items). These variables were measured and coded on a five-level Likert scale from "Very Low = 1" to "Very High = 5". First, the validity and reliability of the research model were examined. In order to calculate the reliability of the questionnaire, two methods were used: Cronbach's alpha and composite reliability. The Fornell-Larcker criterion was also used to assess discriminant validity. The statistical population of this study included all faculty members of public universities in Tehran (N = 4429). The sample size was estimated to be 340 using Cochran's formula. This sample was drawn using random sampling with proportional allocation. The data were analyzed by the structural equation method with the least-squares approach.
The results showed that the component of civic responsibility and sustainable development, with a factor loading of 0.827, is the most important element of good governance.
Keywords: good governance, higher education, sustainable development
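As a small illustration of the reliability step mentioned above, Cronbach's alpha can be computed from the item variances and the variance of the respondent totals. The Likert responses below are toy data, not the study's:

```python
def cronbach_alpha(items):
    # items: list of per-item response lists, all the same length.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(totals));
    # population variance is used throughout (the n/(n-1) factor cancels).
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

responses = [  # 3 items x 4 respondents, 5-point Likert (toy data)
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
alpha = cronbach_alpha(responses)
```

Composite reliability and the Fornell-Larcker check additionally need the factor loadings from the fitted model, so they are not sketched here.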
Procedia PDF Downloads 171
812 Action Research for School Development
Authors: Beate Weyland
Abstract:
The interdisciplinary laboratory EDEN (Educational Environments with Nature), founded in 2020 at the Faculty of Education of the Free University of Bolzano, is working on a research path initiated in 2012 on the relationship between pedagogy and architecture in the design process of school buildings. Between 2016 and 2018, an advisory support activity for schools emerged, which combined the need to qualify the physical spaces of the school with the need to update teaching practices and develop school organization, with the aim of improving pupils' and teachers' sense of well-being. The goal of accompanying the development of school communities through research-training paths concerns the process of designing together pedagogical-didactic and architectural environments in which to stage the educational relationship, involving professionals from education, educational research, architecture and design, and local administration. Between 2019 and 2024, more than 30 schools and educational communities throughout Italy entered into research-training agreements with the university, focusing increasingly on the need to create new spaces and teaching methods capable of imagining educational spaces as places of well-being where cultural development can be fostered. The paper will focus on the presentation of the research path and on the mixed methods used to support schools and educational communities: identification of the research question, development of the research objective, experimentation, and data collection for analysis and reflection. Schools and educational communities are involved in a participative and active manner. The quality of the action-research work is enriched by a special focus on the relationship with plants and nature in general.
Plants are seen as mediators of processes that disrupt traditional didactics and invite teachers, students, parents, and administrators to think about the quality of learning spaces and of relationships based on well-being. The contribution is characterized by a particular focus on the research methodologies and tools developed together with teachers to answer the issues raised and to measure the impact of the actions undertaken.
Keywords: school development, learning space, wellbeing, plants and nature
Procedia PDF Downloads 36
811 Tibial Plateau Fractures During Covid-19 In A Trauma Unit. Impact of Lockdown and The Pressures on the Healthcare Provider
Authors: R. Gwynn, P. Panwalkar, K. Veravalli , M. Tofighi, R. Clement, A. Mofidi
Abstract:
The aim of this study was to assess the impact of Covid-19 and lockdown on the incidence, injury pattern, and treatment of tibial plateau fractures in a combined rural and urban population in Wales. Methods: A retrospective study was performed to identify tibial plateau fractures in the 15-month period of Covid-19 lockdown and the 15-month period immediately before lockdown. Patient demographics, injury mechanism, injury severity (based on the Schatzker classification), associated injuries, treatment methods, and outcomes of fractures in the Covid-19 period were studied. Results: The incidence of tibial plateau fracture was 9 per 100000 during Covid-19 and 8.5 per 100000 before; both were similar to previous studies. The average age was 52, and the female-to-male ratio was 1:1 in both the control and study groups. High-energy injury was seen in only 20% of the patients, versus 35% in the control group (χ²=12, p<0.025). 14% of the Covid-19 population sustained other injuries, as opposed to 16% in the control group (χ²=0.09, p>0.95). Lower-severity isolated lateral condyle fractures (Schatzker 1-3) were seen in 40% of fractures; this was 60% in the control population. Higher-severity bicondylar and shaft fractures (Schatzker 5-6) were seen in 60% of the Covid-19 group and 35% of the control group (χ²=7.8, p<0.02). Treatment mode was not impacted by Covid-19. The complication rate was low in spite of the higher number of complex fractures and the impact of the Covid-19 pandemic. Conclusion: The associated injuries were similar in spite of a significantly lower-energy mechanism of injury. There were unexpectedly worse tibial plateau fractures, based on the Schatzker classification, in the Covid-19 period as compared to the control group. This was especially relevant for medial condyle and shaft fractures. This was postulated to be caused by a reduction in bone density due to lack of vitamin D and reduced activity.
The treatment mode and outcome of care for tibial plateau fractures were not impacted by Covid-19.
Keywords: Covid-19, knee, tibial plateau fracture, trauma
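The group comparisons above rest on chi-squared tests of proportions. A minimal sketch of the 2x2 Pearson statistic follows; the counts are hypothetical (the abstract reports percentages only, so 100 patients per group is an assumption for illustration):

```python
def chi2_2x2(a, b, c, d):
    # Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]],
    # using the shortcut  n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# e.g. high-energy injuries: 20 of a hypothetical 100 (lockdown)
# versus 35 of a hypothetical 100 (control)
stat = chi2_2x2(20, 80, 35, 65)
```

With 1 degree of freedom, a statistic above 3.84 is significant at the 0.05 level, matching the direction of the abstract's reported result.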
Procedia PDF Downloads 125
810 Water Harvest and Recycling with Principles of Permaculture in Rural Buildings in Southeastern Anatolia Region, Turkey
Authors: Muhammed Gündoğan
Abstract:
Permaculture is an important source of science and experience that can ensure the integration of sustainable architecture with nature. Since the past, many practices have been applied in rural areas for generations on the principle of benefiting from the self-renewal potential of nature. This culture, which has been transferred from generation to generation alongside the architectural disciplines, has the potential to significantly improve the sustainability of rural areas and is an important guide with its nature-based solution proposals. Şanlıurfa has arid and semi-arid climate characteristics. Although it has substantial agricultural potential, water is limited, especially in rural areas. In the region, rainwater harvesting practices such as artificial water canals and cisterns have been used for a long time. However, these solutions remained mostly at the urban scale, and their reflections at the building scale were restricted and inadequate. Impermeable surfaces are required for water harvesting, but harvesting at ground level is not possible as rural buildings are mostly surrounded by cultivated land. Therefore, existing structures are important in terms of applicability. In this context, considering the typology of Traditional Şanlıurfa Houses, the aim of the project was to create a proposal for the limited potable and utility water, which is a serious problem, especially for rural buildings in Şanlıurfa. The project proposal provided roof systems that can work integrated with the structural form of Traditional Şanlıurfa Houses, rainwater collection systems in the inner courtyard, and greywater recycling. While the average annual precipitation was 453.7 kg/m² between 1929 and 2012, this value was measured as 622.7 kg/m² in 2012. Greywater was used to produce natural fertilizers and compost for small-scale fruit and vegetable gardens, and the scheme was combined with the principles of permaculture to make it a lifestyle.
As a result, it has been estimated that a total of 976.4 m³ of water can be saved annually within the scope of the project, comprising an average of 158.8 m³ of rainwater recycling and 817.6 m³ of greywater recycling.
Keywords: rural, traditional residential building, permaculture, rainwater harvesting, greywater recycling
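The quoted savings decompose as rainwater plus greywater, and a roof-catchment estimate follows the usual rule that 1 mm of rain (1 kg/m²) on 1 m² of catchment yields about 1 litre. The roof area and runoff coefficient below are hypothetical, not values from the project:

```python
# Quoted project figures: annual savings = rainwater + greywater volumes.
rainwater_m3 = 158.8
greywater_m3 = 817.6
total_m3 = rainwater_m3 + greywater_m3

def roof_harvest_m3(area_m2, rainfall_mm, runoff_coeff=0.9):
    # 1 mm of rain on 1 m2 of catchment ~ 1 litre; the runoff
    # coefficient accounts for evaporation and first-flush losses.
    return area_m2 * rainfall_mm * runoff_coeff / 1000.0

# e.g. a hypothetical 200 m2 roof in the wet year 2012 (622.7 kg/m2 ~ mm)
harvest_2012 = roof_harvest_m3(200, 622.7)
```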
Procedia PDF Downloads 131
809 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience
Authors: Nkwenti Mbelli Njah
Abstract:
This research was motivated by a project requested by AXA on the topic of pensions payable under the workers' compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when there is less than 30% disability and the pension amount per year is less than six times the minimal national salary. The law defines that the mathematical provisions for compulsorily recoverable pensions must be calculated by applying the following bases: mortality table TD88/90 and a rate of interest of 5.25% (possibly with a management rate). Managing pensions which are not compulsorily recoverable is a more complex task because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the amount of payments, discounted to reflect the mortality effect, for all pensioners (this task is monitored monthly in AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of the workers' compensation pensioners of both the Portuguese market and the AXA portfolio. Not only is past mortality modeled; projections about future mortality are also made for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model was split into two parts: a stochastic model for population mortality, which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox Proportional, Brass Linear, and Workgroup PLT). The one-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute different life table functions as well as the not compulsorily recoverable reserves for each of the models required for the pensioners, their spouses, and children under 21.
The results obtained are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) that is currently being used by AXA, to see the impact on this reserve if AXA adopted the dynamic tables.
Keywords: compulsorily recoverable, life table functions, relational models, worker's compensation pensioners
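The step from one-year death probabilities to life table functions and discounted pension values can be sketched as follows. The q_x values are a toy mortality curve, not TD88/90 or the fitted portfolio tables; only the 5.25% discount rate comes from the abstract:

```python
def life_table(qx, radix=100_000):
    # Survivorship l_x from one-year death probabilities q_x:
    # l_{x+1} = l_x * (1 - q_x), starting from the radix.
    lx = [float(radix)]
    for q in qx:
        lx.append(lx[-1] * (1.0 - q))
    return lx

def annuity_epv(qx, rate=0.0525):
    # Expected present value of 1 per year paid at the start of each
    # year while alive, discounted at the legal 5.25% rate.
    lx = life_table(qx)
    v = 1.0 / (1.0 + rate)
    return sum((lx[t] / lx[0]) * v ** t for t in range(len(qx) + 1))

qx = [0.01, 0.02, 0.05, 0.10]  # toy q_x for 4 ages
lx = life_table(qx)
```

A reserve comparison like the paper's would evaluate such annuity values under both the static TD 73/77 probabilities and the dynamic projected ones.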
Procedia PDF Downloads 164
808 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers; the outputs were then combined to generate a single persistent-water-bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency, and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to modelling the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
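The multi-temporal combination step might, for example, keep a pixel as persistent water only when at least k of the N classified dates call it water. This is a minimal majority-vote sketch; the study's exact compositing rule is not specified here:

```python
def persistent_water(masks, k):
    # masks: list of equal-sized binary grids (1 = water, 0 = other).
    # A pixel survives into the composite if it is water on >= k dates.
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[r][c] for m in masks) >= k else 0
             for c in range(cols)] for r in range(rows)]

# Three toy classified dates over a tiny 2x3 scene:
date1 = [[1, 1, 0],
         [0, 1, 0]]
date2 = [[1, 0, 0],
         [0, 1, 1]]
date3 = [[1, 1, 0],
         [0, 1, 0]]
persistent = persistent_water([date1, date2, date3], k=2)
```

The composite then feeds the thinning step that reduces each water polygon to its centre line before vectorisation.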
Procedia PDF Downloads 139
807 Community-Based Reference Interval of Selected Clinical Chemistry Parameters Among Apparently Healthy Adolescents in Mekelle City, Tigrai, Northern Ethiopia
Authors: Getachew Belay Kassahun
Abstract:
Background: Locally established clinical laboratory reference intervals (RIs) are required to interpret laboratory test results for screening, diagnosis, and prognosis. The objective of this study was to establish reference intervals for clinical chemistry parameters among apparently healthy adolescents aged between 12 and 17 years in Mekelle, Tigrai, in the northern part of Ethiopia. Methods: A community-based cross-sectional study was employed from December 2018 to March 2019 in Mekelle City among 172 males and 172 females, based on a multi-stage sampling technique. Blood samples were tested for fasting blood sugar (FBS), alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), creatinine, urea, total protein, albumin (ALB), and direct and total bilirubin (BIL.D and BIL.T) using an A25 BioSystems clinical chemistry analyzer. Results were analyzed using SPSS version 23 software and based on the Clinical and Laboratory Standards Institute (CLSI)/International Federation of Clinical Chemistry (IFCC) C28-A3 guideline, which defines the reference interval as the central 95% range between the 2.5th and 97.5th percentiles. The Mann-Whitney U test, descriptive statistics, and box-and-whisker plots were the statistical tools used for analysis. Results: This study observed statistically significant differences between males and females in the ALP, ALT, AST, urea, and creatinine reference intervals. The established reference intervals for males and females, respectively, were: ALP (U/L) 79.48-492.12 versus 63.56-253.34, ALT (U/L) 4.54-23.69 versus 5.1-20.03, AST (U/L) 15.7-39.1 versus 13.3-28.5, urea (mg/dL) 9.33-24.99 versus 7.43-23.11, and creatinine (mg/dL) 0.393-0.957 versus 0.301-0.846. The combined RIs were: total protein (g/dL) 6.08-7.85, ALB (g/dL) 4.42-5.46, FBS (mg/dL) 65-110, BIL.D (mg/dL) 0.033-0.532, and BIL.T (mg/dL) 0.106-0.812. Conclusions: The results showed marked differences between the sexes and from company-derived values for the selected clinical chemistry parameters.
Thus, the use of age- and sex-specific, locally established reference intervals for clinical chemistry parameters is recommended.
Keywords: reference interval, adolescent, clinical chemistry, Ethiopia
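The C28-A3 nonparametric reference interval is simply the central 95% band of the healthy-reference sample, bounded by its 2.5th and 97.5th percentiles. A minimal sketch with linear-interpolation percentiles follows (toy values, not the study data):

```python
def reference_interval(values, lower_pct=2.5, upper_pct=97.5):
    # Nonparametric RI per CLSI C28-A3: the 2.5th and 97.5th
    # percentiles of the reference sample, via linear interpolation.
    xs = sorted(values)
    def percentile(p):
        k = (len(xs) - 1) * p / 100.0
        f = int(k)
        return xs[f] + (k - f) * (xs[min(f + 1, len(xs) - 1)] - xs[f])
    return percentile(lower_pct), percentile(upper_pct)

# Toy reference sample: 41 hypothetical fasting glucose values (mg/dL)
lo, hi = reference_interval(list(range(60, 101)))
```

In practice the guideline also recommends a minimum reference sample size (120 per partition) before the percentile bounds are considered stable.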
Procedia PDF Downloads 79
806 Fusion of Finger Inner Knuckle Print and Hand Geometry Features to Enhance the Performance of Biometric Verification System
Authors: M. L. Anitha, K. A. Radhakrishna Rao
Abstract:
With the advent of modern computing technology, there is an increased demand for developing recognition systems capable of verifying the identity of individuals. Recognition systems are required by several civilian and commercial applications for providing access to secured resources. Traditional recognition systems, which are based on physical identities, are not sufficiently reliable to satisfy security requirements, owing to advances in forgery and identity impersonation methods. Recognizing individuals based on their unique physiological characteristics, known as biometric traits, is a reliable technique, since these traits are not transferable and cannot be stolen or lost. Since the performance of a biometric-based recognition system depends on the particular trait that is utilized, the present work proposes a fusion approach which combines the inner knuckle print (IKP) trait of the middle, ring, and index fingers with the geometrical features of the hand. The hand image captured from a digital camera is preprocessed to find the finger IKP as the region of interest (ROI) and the hand geometry features. Geometrical features are represented as the distances between different key points, and IKP features are extracted by applying a local binary pattern descriptor to the IKP ROI. Decision-level AND fusion was adopted, which has shown improvement in the performance of the combined scheme. The proposed approach is tested on the database collected at our institute. The proposed approach is of significance since both hand geometry and IKP features can be extracted from the palm region of the hand. The fusion of these features yields a false acceptance rate of 0.75% and a false rejection rate of 0.86% for the verification tests conducted, which is lower than the results obtained using the individual traits.
The results obtained confirm the usefulness of proposed approach and suitability of the selected features for developing biometric based recognition system based on features from palmar region of hand.Keywords: biometrics, hand geometry features, inner knuckle print, recognition
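The decision-level AND rule described in this abstract can be sketched as follows. This is an illustrative sketch, not the authors' code; the matcher names, scores, and thresholds are hypothetical.

```python
# Illustrative sketch of decision-level AND fusion of two biometric
# matchers (IKP and hand geometry). Scores and thresholds are hypothetical.

def decide(score, threshold):
    """Accept the claimed identity when the match score reaches the threshold."""
    return score >= threshold

def and_fusion(ikp_score, geometry_score, ikp_thr=0.6, geo_thr=0.5):
    """AND-rule fusion: accept only if BOTH matchers accept.

    AND fusion lowers the false acceptance rate (an impostor must fool
    both traits) at the cost of a somewhat higher false rejection rate.
    """
    return decide(ikp_score, ikp_thr) and decide(geometry_score, geo_thr)

# A genuine user scoring well on both traits is accepted;
# an impostor who spoofs only one trait is rejected.
print(and_fusion(0.82, 0.74))  # True
print(and_fusion(0.82, 0.31))  # False
```

The OR rule would make the opposite trade-off, lowering false rejections while admitting more impostors.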
Procedia PDF Downloads 220
805 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes and variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limitations of personal computers. High-performance, integrative software that enables real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of large-scale time-lapse data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP can process data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 157
804 Liquefaction Potential Assessment Using Screw Driving Testing and Microtremor Data: A Case Study in the Philippines
Authors: Arturo Daag
Abstract:
The Philippine Institute of Volcanology and Seismology (PHIVOLCS) is enhancing its liquefaction hazard map towards a detailed probabilistic approach using SDS and geophysical data. Target sites for liquefaction assessment are public schools in Metro Manila. Since the target sites are in a highly urbanized setting, the objective of the project is to combine non-destructive geotechnical studies using Screw Driving Sounding (SDS) with geophysical data such as refraction microtremor array (ReMi), three-component microtremor horizontal-to-vertical spectral ratio (HVSR), and ground penetrating radar (GPR) surveys. Initial tests were conducted in areas of the Province of Pampanga impacted by liquefaction during the Mw 6.1 Central Luzon earthquake of April 22, 2019. Numerous liquefaction events were documented in areas underlain by Quaternary alluvium and mostly covered by recent lahar deposits. SDS-estimated values showed a good correlation with actual SPT values obtained from available borehole data, confirming that SDS can be an alternative tool for liquefaction assessment that is more efficient in terms of cost and time than SPT and CPT. Borehole drilling can have limited access in highly urbanized areas, so non-destructive geophysical equipment was used to extend and extrapolate the SPT borehole data. Three-component microtremor measurements yield a one-dimensional seismic shear-wave velocity model of the upper 30 meters of the profile (Vs30). For the ReMi surveys, a 12-geophone array with 6- to 8-meter spacing was used. From the microtremor data, the factor of safety was computed as the quotient of the Cyclic Resistance Ratio (CRR) and the Cyclic Stress Ratio (CSR). Complementary GPR surveys were used to infer subsurface structures and groundwater conditions.
Keywords: screw drive testing, microtremor, ground penetrating RADAR, liquefaction
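The screening criterion behind the factor of safety mentioned above can be sketched in a few lines. This is a minimal illustration of the FS = CRR/CSR quotient only; the numerical values are hypothetical, not from the PHIVOLCS survey.

```python
# Minimal sketch of the liquefaction screening criterion: the factor of
# safety (FS) is the Cyclic Resistance Ratio (CRR) divided by the Cyclic
# Stress Ratio (CSR). FS below 1 flags a potentially liquefiable layer.
# The CRR/CSR values below are hypothetical.

def factor_of_safety(crr, csr):
    """FS = CRR / CSR for a given soil layer."""
    return crr / csr

def is_liquefiable(crr, csr):
    """A layer is flagged as liquefiable when FS < 1."""
    return factor_of_safety(crr, csr) < 1.0

print(factor_of_safety(0.18, 0.24))  # 0.75 -> flagged as liquefiable
print(is_liquefiable(0.30, 0.24))    # False: resistance exceeds demand
```

In practice CRR is derived from penetration resistance (SPT/SDS) or shear-wave velocity, and CSR from the design earthquake loading.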
Procedia PDF Downloads 202
803 Housing Precarity and Pathways: Lived Experiences Among Bangladeshi Migrants in Dublin
Authors: Mohammad Altaf Hossain
Abstract:
A growing body of literature in urban studies has shown that urban precarity is a lived experience for low-income groups in the cities of the Global South. This does not mean that cities in the Global North, home to advanced capitalist economies, have avoided the adverse realities of urban precarity. As a multifaceted condition, it creates other associated forms of precariousness in people's lives, for example, economic deprivation, mental stress, and housing precarity. The interrelations between urbanity and precarity are ubiquitous in developed and developing countries alike. People, mainly low-income manual labourers, face uncertainties in every aspect of life. By analysing qualitative data and embracing structure-agency interaction, this paper presents how Bangladeshi migrants experience housing precarity in Dublin. Continued population growth and political-economy factors such as labour market inequality, financialisation of the private rental sector, and cuts to government funding for social housing provision combine to produce a crisis of housing supply, affordability, and access in the city. As a result, low-income people practice informality in securing jobs and housing. The macro-structural components of this analysis include Irish housing policy, the European labour market, immigration policy, and the financialised housing market. The micro-structural components of South Asian communities' experiences include social networks and social class. Access to social networks and practices of informality play a significant role in enabling migrants to negotiate urban precarity, including housing crises and income insecurity. In some cases, the collective agency of ethnic diaspora communities plays a vital role in negotiating structural constraints.
Keywords: housing precarity, housing pathways, migration, agency, Dublin
Procedia PDF Downloads 26
802 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model
Authors: Anshika Kankane, Dongshik Kang
Abstract:
Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sector. A functional, cost-effective, and automatic approach is needed to address this problem. Computer vision combined with a deep learning model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The proposed model detects seven categories of litter using a manually constructed debris dataset, with Mask R-CNN for instance segmentation and a shape-matching network called HOGShape; beaches can then be cleaned in time by clean-up organizations alerted through the system's warning notifications. The dataset was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained with LIBSVM is used, along with multiple template matching between HOG maps of images and HOG maps of templates, to improve the predicted masks obtained via Mask R-CNN training. The system is intended to alert clean-up organizations in a timely manner with warning notifications based on live recorded beach debris data. The suggested network improves misclassified debris masks for objects with different illuminations, shapes, and viewpoints, and for occluded litter with poor visibility.
Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching
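The histogram-of-oriented-gradients descriptor at the core of the shape-matching stage can be illustrated for a single image cell. This is a schematic NumPy sketch, not the paper's HOGShape network; the patch and bin count are illustrative.

```python
import numpy as np

# Schematic single-cell HOG descriptor: a magnitude-weighted histogram of
# gradient orientations (unsigned, 0-180 degrees), the kind of feature
# compared between debris masks and shape templates. Illustrative only.

def hog_cell(patch, n_bins=9):
    """Orientation histogram of gradients over one cell, L2-normalized."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)                 # image gradients
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A patch of horizontal stripes has only vertical gradients, so all the
# energy lands in the bin containing 90 degrees (bin 4 of 9).
stripes = np.tile(np.arange(8).reshape(-1, 1), (1, 8))
h = hog_cell(stripes)
print(h.argmax())  # 4
```

A full HOG descriptor tiles the image into such cells and concatenates block-normalized histograms; template matching then compares these maps.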
Procedia PDF Downloads 106
801 Ultrasound Therapy: Amplitude Modulation Technique for Tissue Ablation by Acoustic Cavitation
Authors: Fares A. Mayia, Mahmoud A. Yamany, Mushabbab A. Asiri
Abstract:
In recent years, non-invasive Focused Ultrasound (FU) has been utilized for generating bubbles (cavities) to ablate target tissue by mechanical fractionation. Intensities >10 kW/cm² are required to generate the inertial cavities. The generation, rapid growth, and collapse of these inertial cavities cause tissue fractionation and the process is called Histotripsy. The ability to fractionate tissue from outside the body has many clinical applications including the destruction of the tumor mass. The process of tissue fractionation leaves a void at the treated site, where all the affected tissue is liquefied to particles at sub-micron size. The liquefied tissue will eventually be absorbed by the body. Histotripsy is a promising non-invasive treatment modality. This paper presents a technique for generating inertial cavities at lower intensities (< 1 kW/cm²). The technique (patent pending) is based on amplitude modulation (AM), whereby a low frequency signal modulates the amplitude of a higher frequency FU wave. Cavitation threshold is lower at low frequencies; the intensity required to generate cavitation in water at 10 kHz is two orders of magnitude lower than the intensity at 1 MHz. The Amplitude Modulation technique can operate in both continuous wave (CW) and pulse wave (PW) modes, and the percentage modulation (modulation index) can be varied from 0 % (thermal effect) to 100 % (cavitation effect), thus allowing a range of ablating effects from Hyperthermia to Histotripsy. Furthermore, changing the frequency of the modulating signal allows controlling the size of the generated cavities. Results from in vitro work demonstrate the efficacy of the new technique in fractionating soft tissue and solid calcium carbonate (Chalk) material. 
When combined with MR or ultrasound imaging, the technique will provide a precise treatment modality for ablating diseased tissue without affecting the surrounding healthy tissue.
Keywords: focused ultrasound therapy, histotripsy, inertial cavitation, mechanical tissue ablation
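The amplitude-modulation scheme described above can be written down directly: a low-frequency signal at f_m modulates the envelope of the focused-ultrasound carrier at f_c. The sketch below is illustrative; the frequencies and modulation index are example values, not the patented parameters.

```python
import numpy as np

# Sketch of the AM waveform: s(t) = [1 + m*sin(2*pi*f_m*t)] * sin(2*pi*f_c*t).
# m = 0 gives a pure carrier (thermal / hyperthermia regime);
# m = 1 gives 100 % modulation (cavitation / histotripsy regime).
# Frequencies and m are illustrative values only.

def am_waveform(t, f_c=1.0e6, f_m=10.0e3, m=1.0):
    """Low-frequency tone f_m modulating a high-frequency FU carrier f_c."""
    return (1.0 + m * np.sin(2 * np.pi * f_m * t)) * np.sin(2 * np.pi * f_c * t)

t = np.linspace(0, 1e-4, 100_000)   # 0.1 ms window = one modulation period
peak_unmodulated = np.abs(am_waveform(t, m=0.0)).max()
peak_modulated = np.abs(am_waveform(t, m=1.0)).max()
print(round(peak_unmodulated))  # 1  (carrier amplitude alone)
print(round(peak_modulated))    # 2  (envelope doubles at full modulation)
```

Varying m between 0 and 1 sweeps the ablation regime from thermal to cavitational, and changing f_m would, per the abstract, control the size of the generated cavities.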
Procedia PDF Downloads 319
800 Minimization of the Abrasion Effect of Fiber Reinforced Polymer Matrix on Stainless Steel Injection Nozzle through the Application of Laser Hardening Technique
Authors: Amessalu Atenafu Gelaw, Nele Rath
Abstract:
Currently, laser hardening is becoming one of the most efficient and effective hardening techniques due to its significant advantages. The way heat is generated, the absence of cooling media, the self-quenching property, low distortion due to localized heat input, environmental friendliness, and short operation times are among the main reasons to adopt this technology. Today, a variety of injection machines are used in the plastic, textile, electrical, and mechanical industries. Owing to the fast growth of composite technology, fiber-reinforced polymer matrix composites are becoming an optional solution in these industries. Because of the abrasive nature of fiber-reinforced polymer matrix composites on injection components, many parts wear out before the end of their design life. Niko, a company specialized in injection molded products, suffers from the short lifetime of the injection nozzles of its molds, due to the use of fiber-reinforced and, therefore, more abrasive polymer matrices. To prolong the lifetime of these molds, hardening susceptible components like the injection nozzles was a must. In this paper, the laser hardening process is investigated on Unimax, a type of stainless steel. The investigation to obtain optimal results for the nozzle case was performed in three steps. First, the optimal parameters for the maximum possible hardenability of the nozzle material were investigated on a flat sample, using experimental testing as well as thermal simulation. Next, the effect of an inclination on the maximum temperature was analyzed, both by experimental testing and by validation through simulation. Finally, the data were combined and applied to the nozzle. This paper describes possible strategies and methods for laser hardening the nozzle to reach a hardness of at least 720 HV for the material investigated.
It has been proven that the nozzle can be laser hardened to over 900 HV, with even higher results possible when more precise positioning of the laser can be assured.
Keywords: absorptivity, fiber reinforced matrix, laser hardening, Nd:YAG laser
Procedia PDF Downloads 156
799 Sources and Potential Ecological Risks of Heavy Metals in the Sediment Samples From Coastal Area in Ondo, Southwest Nigeria
Authors: Ogundele Lasun Tunde, Ayeku Oluwagbemiga Patrick
Abstract:
Heavy metals are released into aquatic sediments from both natural and anthropogenic sources and are considered a worldwide issue due to their deleterious ecological risks and disruption of food chains. In this study, sediment samples were collected at three major sites (Awoye, Abereke and Ayetoro) along the Ondo coastal area using a Van Veen grab sampler. The concentrations of As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, V and Zn were determined by Atomic Absorption Spectroscopy (AAS). The combined concentration data were subjected to the Positive Matrix Factorization (PMF) receptor approach for source identification and apportionment. The risks posed by heavy metals in the sediment were estimated by potential and integrated ecological risk indices. Among the measured heavy metals, Fe had average concentrations of 20.38 ± 2.86, 23.56 ± 4.16 and 25.32 ± 4.83 µg/g at the Abereke, Awoye and Ayetoro sites, respectively. The PMF analysis identified four sources of heavy metals in the sediments. The resolved sources and their percentage contributions were oil exploration (39%), industrial waste/sludge (35%), detrital processes (18%) and Mn sources (8%). Oil exploration activities and industrial wastes are the major sources contributing heavy metals to the coastal sediments. The major pollutants posing ecological risks to the local aquatic ecosystem are As, Pb, Cr and Cd (40 < Eᵢ ≤ 80), classifying the sites as moderate risk. The integrated risk values of Awoye, Abereke and Ayetoro are 231.2, 234.0 and 236.4, respectively, suggesting that the study areas have a moderate ecological risk. The study showed the suitability of the PMF receptor model for source identification of heavy metals in sediments.
Intensive anthropogenic activities and natural sources can discharge large amounts of heavy metals into the study area, which may increase the heavy metal contents of the sediments and further contribute to the associated ecological risk, thus affecting the local aquatic ecosystem.
Keywords: positive matrix factorization, sediments, heavy metals, sources, ecological risks
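The receptor-model idea behind PMF can be illustrated with a toy non-negative factorization: a concentrations matrix X (samples × metals) is approximated as G·F, with non-negative source contributions G and source profiles F. The NumPy-only sketch below uses plain Lee-Seung multiplicative updates; real PMF additionally weights residuals by measurement uncertainty, which this illustration omits. All data are synthetic.

```python
import numpy as np

# Toy illustration of receptor-model factorization in the spirit of PMF:
# X (samples x metals) ~= G @ F with non-negative G (contributions) and
# F (source profiles). Uncertainty weighting, essential to real PMF, is
# omitted here; this is purely illustrative.

rng = np.random.default_rng(0)

def nmf(X, n_sources=2, n_iter=2000, eps=1e-9):
    n, m = X.shape
    G = rng.random((n, n_sources)) + 0.1
    F = rng.random((n_sources, m)) + 0.1
    for _ in range(n_iter):  # Lee-Seung multiplicative updates
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# Synthetic data built from two known "sources" (hypothetical profiles):
true_G = rng.random((40, 2))
true_F = np.array([[5.0, 1.0, 0.1],
                   [0.2, 2.0, 4.0]])
X = true_G @ true_F
G, F = nmf(X)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(rel_err < 0.05)  # True: two factors reproduce the rank-2 data closely
```

Percentage contributions like those quoted above (39%, 35%, ...) are obtained by summing each source's reconstructed contribution over all samples and normalizing.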
Procedia PDF Downloads 21
798 Comparison Of Virtual Non-Contrast To True Non-Contrast Images Using Dual Layer Spectral Computed Tomography
Authors: O’Day Luke
Abstract:
Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to acquiring a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. The aortic attenuation was, however, somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets remains a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies.
This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast acquisitions during multiphase CT, with potential for dose reduction without loss of diagnostic information.
Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison
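The paired ROI comparison reported above reduces to simple summary statistics over TNC−VNC differences. A minimal sketch, with entirely made-up HU values (the study's 693 real ROIs are not reproduced here):

```python
# Sketch of the TNC-vs-VNC comparison: paired attenuation values (HU) per
# ROI, summarized as the mean difference and the share of ROI pairs that
# agree within a 15 HU tolerance. All numbers are hypothetical.

def compare(tnc_hu, vnc_hu, tolerance=15):
    diffs = [t - v for t, v in zip(tnc_hu, vnc_hu)]
    mean_diff = sum(diffs) / len(diffs)
    within = sum(abs(d) <= tolerance for d in diffs) / len(diffs)
    return mean_diff, within

tnc = [55, 40, 8, 32, 50, -90]   # hypothetical liver/muscle/fluid/... ROIs
vnc = [50, 44, 5, 30, 47, -95]
mean_diff, frac_within = compare(tnc, vnc)
print(frac_within)  # 1.0 -> every ROI pair here agrees within 15 HU
```

In the study itself, this kind of summary (mean ± SD difference, fraction within 15 HU) is what supports treating VNC as a stand-in for TNC in soft tissues.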
Procedia PDF Downloads 141
797 A New Binder Mineral for Cement Stabilized Road Pavements Soils
Authors: Aydın Kavak, Özkan Coruk, Adnan Aydıner
Abstract:
Long-term performance of pavement structures is significantly impacted by the stability of the underlying soils. In-situ subgrades often do not provide the support required to achieve acceptable performance under traffic loading and environmental demands. NovoCrete® is a powder binder-mineral for cement-stabilized road pavement soils. NovoCrete® combined with Portland cement at optimum water content increases crystalline formations during the hydration process, resulting in higher strength, neutralized pH levels, and water impermeability. These changes in soil properties can transform existing unsuitable in-situ materials into suitable fill materials. The main features of NovoCrete® are applicability to all types of soil and reduced premature cracking, improving soil properties and creating base and subbase course layers with high bearing capacity while reducing hazardous materials. It can also be used for the stabilization of recyclable aggregates, old asphalt pavement aggregate, etc. There are many applications in Germany, Turkey, India, and elsewhere. In this paper, several field applications in Turkey are discussed. In these applications, 120-180 kg of cement is used per m³ of soil, with 2% NovoCrete® binder added for stabilization. The results of a plate loading test at a road construction site show only 1 mm of deformation under 7 kg/cm² loading. The modulus of subgrade reaction increased from 611 MN/m³ to 3673 MN/m³, and the soaked CBR values for stabilized soils increased from 10-20% to 150-200%. According to these data, weak subgrade soil can be used as a base or subbase after modification. The potential reduction in the need for quarried materials will help conserve natural resources.
The use of on-site or nearby materials in fills will significantly reduce transportation costs and provide both economic and environmental benefits.
Keywords: soil, stabilization, cement, binder, Novocrete, additive
Procedia PDF Downloads 221
796 The Integrated Methodological Development of Reliability, Risk and Condition-Based Maintenance in the Improvement of the Thermal Power Plant Availability
Authors: Henry Pariaman, Iwa Garniwa, Isti Surjandari, Bambang Sugiarto
Abstract:
Availability of a complex system such as a thermal power plant is strongly influenced by the reliability of spare parts and by maintenance management policies. Reliability-centered maintenance (RCM) is an established method of analysis and the main reference for maintenance planning. This method considers the consequences of failure, but does not address the further risks associated with failures, such as downtime, loss of production, or high maintenance costs. Risk-based maintenance (RBM) provides strategies to minimize the risks posed by failure and to select maintenance tasks with regard to cost effectiveness. Condition-based maintenance (CBM), meanwhile, focuses on condition monitoring, which allows maintenance or other action to be planned and scheduled to avoid the risk of failure ahead of time-based maintenance. RCM, RBM, and CBM, applied alone or as combined RCM-RBM or RCM-CBM, are maintenance techniques used in thermal power plants. Implementing these three techniques in an integrated manner will increase the availability of thermal power plants compared with using the techniques individually or in combinations of two. This study uses reliability-, risk- and condition-based maintenance in an integrated manner to increase the availability of thermal power plants. The method generates a Priority Maintenance Index (MPI), computed as the Risk Priority Number (RPN) multiplied by the Risk Index (RI), and Failure Defense Tasks (FDT), which can include condition-monitoring and condition-assessment tasks in addition to maintenance tasks. Both MPI and FDT, obtained from the development of functional trees, failure mode and effects analysis, fault-tree analysis, and risk analysis (risk assessment and risk evaluation), were then used to develop and implement maintenance, monitoring and condition-assessment plans and schedules, and ultimately to perform availability analysis.
The results of this study indicate that reliability-, risk- and condition-based maintenance methods, applied in an integrated manner, can increase the availability of thermal power plants.
Keywords: integrated maintenance techniques, availability, thermal power plant, MPI, FDT
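The MPI = RPN × RI prioritization described above can be sketched directly. This is an illustration of the arithmetic only; the severity/occurrence/detection scales and the example failure modes are hypothetical, not from the study.

```python
# Sketch of the prioritization index: MPI = RPN x RI, where RPN is the
# conventional severity x occurrence x detection product from FMEA.
# Scales (1-10) and component values below are hypothetical.

def rpn(severity, occurrence, detection):
    """Risk Priority Number from failure mode and effects analysis."""
    return severity * occurrence * detection

def mpi(severity, occurrence, detection, risk_index):
    """Priority Maintenance Index: RPN weighted by the risk index."""
    return rpn(severity, occurrence, detection) * risk_index

# Two illustrative failure modes of a boiler feed pump:
bearing_wear = mpi(severity=7, occurrence=5, detection=4, risk_index=3)
seal_leak = mpi(severity=4, occurrence=6, detection=2, risk_index=2)
print(bearing_wear)  # 420 -> maintained / monitored first
print(seal_leak)     # 96
```

Ranking failure modes by MPI rather than RPN alone is what lets the risk dimension (downtime, production loss, cost) reorder the maintenance schedule.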
Procedia PDF Downloads 794
795 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics
Authors: Titus A. Beu
Abstract:
Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out for its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints for efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects crucial for the realistic modeling of DNA-PEI polyplexes, such as the treatment of electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation as a function of PEI chain size and protonation pattern.
The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles rather than on changes of DNA-strand curvature. The gained insights are expected to be of significant help in designing effective gene-delivery applications.
Keywords: DNA condensation, gene-delivery, polyethylene-imine, molecular dynamics
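The Boltzmann-inversion step used to derive the CG potentials can be sketched in a few lines: a bead-bead distance distribution g(r) measured from the all-atom run is inverted to an effective potential U(r) = −kT ln g(r). The Gaussian g(r) below is synthetic; in practice g(r) comes from AA trajectory histograms and the resulting potentials are refined iteratively.

```python
import numpy as np

# Sketch of Boltzmann inversion for CG force-field derivation:
# U(r) = -kT * ln g(r), shifted so the minimum sits at zero.
# The Gaussian g(r) below is synthetic, standing in for an AA histogram.

kT = 2.494  # kJ/mol at ~300 K

def boltzmann_invert(r, g):
    g = np.clip(g, 1e-12, None)   # avoid log(0) where g(r) vanishes
    u = -kT * np.log(g)
    return u - u.min()            # shift so the potential minimum is zero

r = np.linspace(0.2, 1.0, 81)                      # bead-bead distance, nm
g = np.exp(-((r - 0.47) ** 2) / (2 * 0.05 ** 2))   # peaked at the bond length
u = boltzmann_invert(r, g)
print(round(float(r[np.argmin(u)]), 2))  # 0.47: minimum at the g(r) peak
```

A Gaussian g(r) inverts to a harmonic potential, which is why bonded CG terms derived this way are often fitted as springs; non-bonded terms require iterative refinement (e.g. iterative Boltzmann inversion) because bead correlations are not independent.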
Procedia PDF Downloads 119
794 Agricultural Organized Areas Approach for Resilience to Droughts, Nutrient Cycle and Rural and Wild Fires
Authors: Diogo Pereira, Maria Moura, Joana Campos, João Nunes
Abstract:
As the war in Ukraine highlights the European Economic Area's vulnerability and external dependence for feed and food, agriculture gains significant importance. Transformative change is necessary to reach a sustainable and resilient agricultural sector. Agriculture is an important driver of the bioeconomy, of the equilibrium and survival of society, and of resilience to rural fires. The pressures of (1) water stress, (2) the nutrient cycle, and (3) socio-demographic evolution towards 70% of the population living in urban systems and the aging of the rural population, combined with climate change, exacerbate the problem and paradigm of rural and wildfires, especially in Portugal. The Portuguese territory is characterized by (1) 28% marginal land, (2) soil quality inappropriate for agricultural activity over 70% of the territory, (3) micro-smallholdings of less than 1 ha per proprietor, with mainly familiar and traditional agriculture in the North and Centre regions, and (4) the areas most vulnerable to rural fires lying in these same regions. The most important difference between the South and the North and Centre of Portugal, with regard to rural and wildfires, is agricultural activity, which is higher in the South. In Portugal, rural and wildfires represent an average annual economic loss of around 800 to 1000 million euros. The WinBio model is an agri-environmental metabolism design with the capacity to create a new agri-food metabolism through Agricultural Organized Areas, a public-private partnership. This partnership seeks to grow agricultural activity in regions with (1) abandoned territory, (2) micro-smallholdings, (3) water and nutrient management needs, and (4) low agri-food literacy.
It also aims to support the planning and monitoring of resource-use efficiency and the sustainability of territories, using agriculture as a barrier against rural and wildfires in order to protect the rural population.
Keywords: agricultural organized areas, residues, climate change, drought, nutrients, rural and wild fires
Procedia PDF Downloads 78
793 Combined Tarsal Coalition Resection and Arthroereisis in Treatment of Symptomatic Rigid Flat Foot in Pediatric Population
Authors: Michael Zaidman, Naum Simanovsky
Abstract:
Introduction. Symptomatic tarsal coalition with rigid flat foot often demands an operative solution. An isolated coalition resection does not guarantee pain relief; correction of a co-existing foot deformity may be required. The objective of this study was to analyze the results of combining tarsal coalition resection with arthroereisis. Patients and methods. We retrospectively reviewed the medical records and radiographs of children operatively treated in our institution for symptomatic calcaneonavicular or talocalcaneal coalition between 2019 and 2022. Eight patients (twelve feet), 4 boys and 4 girls with a mean age of 11.2 years, were included in the study. In six patients (10 feet) a calcaneonavicular coalition was diagnosed; two patients (two feet) had a talocalcaneal coalition. To quantify the degree of foot deformity, we used the calcaneal pitch angle, the lateral talar-first metatarsal (Meary's) angle, and the talonavicular coverage angle. The clinical results were assessed using the American Orthopaedic Foot and Ankle Society (AOFAS) Ankle Hindfoot Score. Results. The mean follow-up was 28 months. The mean talonavicular coverage angle improved from 17.75º preoperatively to 5.4º postoperatively. The calcaneal pitch angle improved from a mean of 6.8º to 16.4º. The mean Meary's angle improved from -11.3º preoperatively to 2.8º. The mean AOFAS score improved from 54.7 preoperatively to 93.1 points postoperatively. In nine of twelve feet, the overall clinical outcome judged by the AOFAS scale was excellent (90-100 points); in three feet it was good (80-90 points). Six patients (ten feet) clearly improved their subtalar range of motion. Conclusion.
For symptomatic stiff or rigid flat feet associated with tarsal coalition, the combination of coalition resection and arthroereisis leads to normalization of radiographic parameters and to clinical and functional improvement with good patient satisfaction, and is likely to be more effective than the isolated procedures.
Keywords: rigid flat foot, tarsal coalition resection, arthroereisis, outcome
Procedia PDF Downloads 64
792 Investigating the Motion of a Viscous Droplet in Natural Convection Using the Level Set Method
Authors: Isadora Bugarin, Taygoara F. de Oliveira
Abstract:
Binary fluids and emulsions in general are present in a vast range of industrial, medical, and scientific applications, showing complex behaviors that define the flow dynamics and the system operation. However, the literature describing these fluids in non-isothermal models is still limited. The present work brings a detailed investigation of droplet migration due to natural convection in a square enclosure, aiming to clarify the effects of drop viscosity on the flow dynamics by showing how distinct viscosity ratios (droplet/ambient fluid) influence the drop motion and the final movement pattern reached in the stationary regime. The analysis considered distinct combinations of Rayleigh number, drop initial position, and viscosity ratio. The Navier-Stokes and energy equations were solved, under the Boussinesq approximation for laminar flow, using the finite-difference method combined with the level set method for the binary-flow solution. Previous results collected by the authors showed that the Rayleigh number and the drop initial position drastically affect the motion pattern of the droplet. For Ra ≥ 10⁴, two very marked behaviors were observed, depending on the initial position: the drop travels either a helical path towards the center or a cyclic circular path resulting in a closed cycle in the stationary regime. Varying the viscosity ratio significantly altered this pattern, exposing a large influence on the droplet path, capable of modifying the flow's behavior. The effects of viscosity on the flow's unsteady Nusselt number were also analyzed. Among the relevant contributions of this work is the potential use of the flow's initial conditions as a mechanism to control droplet migration inside the enclosure.
Keywords: binary fluids, droplet motion, level set method, natural convection, viscosity
Procedia PDF Downloads 119
791 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. 
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
Procedia PDF Downloads 127
790 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity can be an efficient signaling scheme in that it improves the network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus the overall performance of the GNSS network could degrade severely. To tackle this problem, several modified cooperative signaling schemes have been proposed; however, all of them are difficult to implement due to the signal decoding required at the relay nodes. Although the implementation at the relay nodes could be simplified to some degree by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to move the relay nodes' operations to the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, thus obviating the need for complex operations such as signal decoding, time reversal, and conjugation at the relay nodes.
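The abstract does not disclose the paper's actual combining rule, so the following is only a generic toy illustration of the underlying idea: in a distributed Alamouti-style scheme, the conjugation and sign arrangement can be precomputed at the source so that each relay merely forwards its block, while the receiver still obtains full two-branch diversity with the standard combiner. The QPSK symbols, channel coefficients, and noise-free assumption are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK data symbols for a pair of transmission slots
s = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)
s1, s2 = s

# Flat-fading coefficients on the two relay paths (assumed known at receiver)
h1, h2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)

# Source pre-arranges an Alamouti-style pair, so the relays only forward:
#   slot 1: relay 1 carries s1,        relay 2 carries s2
#   slot 2: relay 1 carries -conj(s2), relay 2 carries conj(s1)
r1 = h1 * s1 + h2 * s2                         # received in slot 1
r2 = h1 * (-np.conj(s2)) + h2 * np.conj(s1)    # received in slot 2

# Standard Alamouti combining at the receiver recovers both symbols
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g

print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # True (exact, noise-free)
```

Because the conjugation and negation happen before transmission, the relay-side processing reduces to amplify-and-forward, which is the kind of complexity shift toward the source that the proposed scheme targets.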
The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
Procedia PDF Downloads 280