Search results for: automated driving
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1740

300 Optical Coherence Tomography in Parkinson’s Disease: A Potential in-vivo Retinal α-Synuclein Biomarker in Parkinson’s Disease

Authors: Jessica Chorostecki, Aashka Shah, Fen Bao, Ginny Bao, Edwin George, Navid Seraji-Bozorgzad, Veronica Gorden, Christina Caon, Elliot Frohman

Abstract:

Background: Parkinson’s Disease (PD) is a neurodegenerative disorder associated with the loss of dopaminergic cells and the presence of α-synuclein (AS) aggregation in Lewy bodies. Both dopaminergic cells and AS are found in the retina. Optical coherence tomography (OCT) allows high-resolution in-vivo examination of retinal structure injury in neurodegenerative disorders, including PD. Methods: We performed a cross-sectional OCT study in patients with definite PD and healthy controls (HC) using a Spectral Domain OCT (SD-OCT) platform to measure the peripapillary retinal nerve fiber layer (pRNFL) thickness and total macular volume (TMV). We performed intra-retinal segmentation with fully automated segmentation software to measure the volume of the RNFL, ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), and the outer nuclear layer (ONL). Segmentation was performed blinded to the clinical status of the study participants. Results: 101 eyes from 52 PD patients (mean age 65.8 years) and 46 eyes from 24 HC subjects (mean age 64.1 years) were included in the study. The mean pRNFL thickness was not significantly different (96.95 μm vs 94.42 μm, p=0.07), but the TMV was significantly lower in PD than in HC (8.33 mm³ vs 8.58 mm³, p=0.0002). Intra-retinal segmentation showed no significant difference in RNFL volume between the PD and HC groups (0.95 mm³ vs 0.92 mm³, p=0.454). However, GCL, IPL, INL, and ONL volumes were significantly reduced in PD compared to HC. In contrast, the volume of the OPL was significantly increased in PD compared to HC. Conclusions: Our finding of the enlarged OPL corresponds with mRNA expression studies showing localization of AS in the OPL across vertebrate species and with autopsy studies demonstrating AS aggregation in the deeper layers of the retina in PD. We propose that enlargement of the OPL may represent a potential biomarker of AS aggregation in PD. Longitudinal studies in larger cohorts are warranted to confirm our observations, which may have significant implications for disease monitoring and therapeutic development.

Keywords: Optical Coherence Tomography, biomarker, Parkinson's disease, alpha-synuclein, retina

Procedia PDF Downloads 414
299 Assessment and Optimisation of Building Services Electrical Loads for Off-Grid or Hybrid Operation

Authors: Desmond Young

Abstract:

In building services electrical design, a key element of any project is assessing the electrical load requirements. This needs to be done early in the design process to allow the selection of the infrastructure required to meet the electrical needs of the type of building. The building type defines the type of assessment made and the values applied in defining the maximum demand for the building, and ultimately the size of supply or infrastructure required and the application to be made to the distribution network operator or, alternatively, to an independent network operator. The fact that this assessment must be undertaken early in the design process limits the type of assessment that can be used, as different methods require different types of information, and sometimes this information is not available until the later stages of a project. A common method applied in the earlier design stages of a project, typically during stages 1, 2, and 3, is the use of benchmarks. Some of the benchmarks applied may be excessive relative to the loads that exist in a modern installation, because they are based on information that no longer corresponds to actual equipment loads. This includes lighting and small power loads, where the use of more efficient equipment and lighting has reduced the maximum demand required. The electrical load is also used to assess the heat generated by the equipment; together with the heat gains from other sources, this feeds into the sizing of the infrastructure required to cool the building, so any overestimation of the loads inflates the design load for the heating and ventilation systems. Finally, with new policies driving the industry to decarbonise buildings, a prime example being the recently introduced London Plan, loads are potentially going to increase. With the advent of the pandemic and changes to working practices, and the adoption of electric heating and vehicles, a better understanding of the loads that should be applied will help ensure that infrastructure is neither oversized, at a cost to the client, nor undersized, to the detriment of the building. More accurate benchmarks and methods will also allow assessments to be made for the incorporation of energy storage and renewable technologies as these become more common in new or refurbished buildings.
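
To make the benchmark method concrete, the sketch below estimates a maximum demand from floor area and assumed benchmark densities with an after-diversity factor (cf. the ADMD keyword). It is a minimal illustration only; the densities, diversity factor, and power factor are hypothetical placeholders, not figures from the paper.

```python
# Hypothetical sketch of an early-stage electrical load assessment using
# benchmark load densities. All figures are placeholders, not values
# taken from the paper.

BENCHMARKS_W_PER_M2 = {       # assumed benchmark densities (W/m^2)
    "lighting": 8.0,          # LED-era lighting allowance
    "small_power": 15.0,      # desks, IT equipment
    "hvac": 30.0,             # mechanical cooling/ventilation
}

def maximum_demand_kva(floor_area_m2: float,
                       diversity: float = 0.8,
                       power_factor: float = 0.95) -> float:
    """Estimate maximum demand (kVA) from floor area and benchmarks.

    diversity    -- assumed after-diversity factor (cf. ADMD)
    power_factor -- assumed overall power factor
    """
    connected_load_w = floor_area_m2 * sum(BENCHMARKS_W_PER_M2.values())
    demand_w = connected_load_w * diversity     # apply diversity
    return demand_w / power_factor / 1000.0     # W -> kVA

if __name__ == "__main__":
    area = 2500.0  # m^2, hypothetical office building
    print(f"Estimated maximum demand: {maximum_demand_kva(area):.1f} kVA")
```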

Keywords: energy, ADMD, electrical load assessment, energy benchmarks

Procedia PDF Downloads 84
298 The Association of Anthropometric Measurements, Blood Pressure Measurements, and Lipid Profiles with Mental Health Symptoms in University Students

Authors: Ammaarah Gamieldien

Abstract:

Depression is a very common and serious mental illness that has a significant social and economic impact on sufferers worldwide. This study aimed to investigate the association of body mass index (BMI), blood pressure, and lipid profiles with mental health symptoms in university students. Secondary objectives included the associations among the variables themselves (BMI, blood pressure, and lipids), as they are key factors in cardiometabolic disease. Sixty-three (63) students participated in the study. Thirty-two (32) were assigned to the control group (minimal to mild depressive symptoms), while 31 were assigned to the depressive group (moderate to severe depressive symptoms). The Montgomery-Åsberg Depression Rating Scale (MADRS) and the Beck Depression Inventory (BDI) were used to assess depressive scores. Anthropometric measurements such as weight (kg), height (m), waist circumference (WC), and hip circumference were taken. Body mass index (BMI) and ratios such as the waist-to-hip ratio (WHR) and waist-to-height ratio (WtHR) were also calculated. Blood pressure was measured using an automated AfriMedics blood pressure machine, while lipids were measured using a CardioChek Plus analyzer. Data were analyzed in SPSS. There were no significant associations between anthropometric measurements and depressive scores (p > 0.05). There were no significant correlations between lipid profiles and depression using Spearman’s rho correlation (p > 0.05). However, total cholesterol and LDL-C were negatively associated with depression, and triglycerides were positively associated with depression, using a point-biserial correlation (p < 0.05). Overall, there were no significant associations between blood pressure measurements and depression (p > 0.05), although there was a significant moderate positive correlation between systolic blood pressure and MADRS scores in males (p < 0.05). Depressive scores correlated positively and strongly with the time participants took to fall asleep. There were also significant associations with regard to the secondary objectives. This study indicates the importance of determining the prevalence of depression among university students in South Africa. If the prevalence and factors associated with depression are addressed, depressive symptoms in university students may be improved.
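
As a pointer to how the reported statistics can be reproduced, the sketch below runs the point-biserial and Spearman correlations named in the abstract on synthetic data; the group sizes match the study, but the lipid values are simulated, not the study’s data.

```python
# Minimal sketch of the point-biserial correlation used in the abstract:
# associating a dichotomous group label (control vs. depressive) with a
# continuous lipid measure. Data below are synthetic, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group = np.repeat([0, 1], [32, 31])              # 0 = control, 1 = depressive
total_chol = rng.normal(4.8, 0.8, size=63) - 0.3 * group  # synthetic mmol/L

r_pb, p_value = stats.pointbiserialr(group, total_chol)
print(f"point-biserial r = {r_pb:.3f}, p = {p_value:.4f}")

# For non-normal data the abstract also uses Spearman's rho:
rho, p_s = stats.spearmanr(group, total_chol)
print(f"Spearman rho = {rho:.3f}, p = {p_s:.4f}")
```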

Keywords: depression, blood pressure, body mass index, lipid profiles, mental health symptoms

Procedia PDF Downloads 40
297 Synthesis of Microencapsulated Phase Change Material for Adhesives with Thermoregulating Properties

Authors: Christin Koch, Andreas Winkel, Martin Kahlmeyer, Stefan Böhm

Abstract:

Due to environmental regulations on greenhouse gas emissions and the depletion of fossil fuels, there is an increasing interest in electric vehicles. To maximize their driving range, batteries with high storage capacities are needed. In most electric cars, rechargeable lithium-ion batteries are used because of their high energy density. However, it has to be taken into account that these batteries generate a large amount of heat during the charge and discharge processes. This leads to a decrease in lifetime and to damage to the battery cells when the temperature exceeds the defined operating range. To ensure efficient performance of the battery cells, reliable thermal management is required. Currently, cooling is achieved by heat sinks (e.g., cooling plates) bonded to the battery cells with a thermally conductive adhesive (TCA) that directs the heat away from the components. Especially when large amounts of heat have to be dissipated spontaneously due to peak loads, the principle of heat conduction is not sufficient, so attention must be paid to the mechanism of heat storage. An efficient method to store thermal energy is the use of phase change materials (PCM). Through an isothermal phase change, PCM can briefly absorb or release thermal energy at a constant temperature. If the phase change takes place in the transition from solid to liquid, heat is stored during melting and released to the surroundings during solidification upon cooling. The presented work demonstrates the great potential of thermally conductive adhesives filled with microencapsulated PCM to limit peak temperatures in battery systems. Encapsulation of the PCM avoids aging effects (e.g., migration) and chemical reactions between the PCM and the adhesive matrix components. In this study, microencapsulation was carried out by in situ polymerization. The microencapsulated PCM was characterized by FT-IR spectroscopy, and its thermal properties were measured by DSC and the laser flash method. The mechanical properties, electrical and thermal conductivity, and adhesive toughness of the TCA/PCM composite were also investigated.

Keywords: phase change material, microencapsulation, adhesive bonding, thermal management

Procedia PDF Downloads 51
296 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System

Authors: Abisha Isaac Mohanlal

Abstract:

Amidst all the suspicion and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all areas of modern-day legal services. Its impact has been felt largely by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon fade. But as far as the Judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring Natural Intelligence over Artificial Intelligence. Since judicial decision-making involves many unstructured and rather unprecedented situations that have no single correct answer, and looming questions of legal interpretation arise in most cases, discretion and Emotional Intelligence play an unavoidable role. Added to that, there are several ethical, moral, and policy issues to be confronted before permitting the intrusion of Artificial Intelligence into the judicial system. As of today, the human judge is the unrivalled master of most judicial systems around the globe. Yet, scientists of Artificial Intelligence claim that robot judges can replace human judges irrespective of how daunting the complexity of issues is and how sophisticated the required cognitive competence is. They go on to contend that even if the system is too rigid to allow robot judges to substitute for human judges in the near future, Artificial Intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that Artificial Intelligence has to overcome in order to successfully invade the human-dominated judicial sphere, and by critically evaluating the potential differences it would make in the system of justice delivery, the author argues that the penetration of Artificial Intelligence into the Judiciary could surely be enhancive and reparative, if not fully transformative.

Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery

Procedia PDF Downloads 203
295 Preventive Effects of Motorcycle Helmets on Clinical Outcomes in Motorcycle Crashes

Authors: Seung Chul Lee, Jooyeong Kim, Ki Ok Ahn, Juok Park

Abstract:

Background: Injuries caused by motorcycle crashes are a major public health burden, leading to high mortality and functional disability. The risk of death among motorcyclists is 30 times greater than that among car drivers, with head injuries the leading cause of death. The motorcycle helmet is crucial protective equipment for motorcyclists. Aims: This study aimed to measure the protective effect of motorcycle helmet use on intracranial injury and mortality and to compare the preventive effect between drivers and passengers. Methods: This is a cross-sectional study based on the Emergency Department (ED)-based Injury In-depth Surveillance (EDIIS) database from 23 EDs in Korea. All trauma patients injured in motorcycle crashes between January 1, 2013 and December 31, 2016 were eligible, excluding cases with unknown helmet use or outcomes. The primary and secondary outcomes were intracranial injury and in-hospital mortality. We calculated adjusted odds ratios (AORs) of helmet use for the study outcomes after adjusting for potential confounders. Using interaction models, we compared the protective effect of helmet use on outcomes across driving status (driver vs. passenger). Results: Among 17,791 eligible patients, 10,668 (60.0%) were wearing helmets at the time of the crash, 2,128 (12.0%) had intracranial injuries, and 331 (1.9%) died in hospital. 16,381 (92.1%) patients were drivers and 1,410 (7.9%) were passengers; 62.6% of drivers and 29.1% of passengers were wearing helmets at the time of the crash. Compared to the un-helmeted group, the helmeted group was less likely to have an intracranial injury (8.0% vs. 17.9%, AOR: 0.43 (0.39-0.48)) or in-hospital mortality (1.0% vs. 3.2%, AOR: 0.29 (0.22-0.37)). In the interaction model, AORs (95% CIs) of helmet use for intracranial injury were 0.42 (0.38-0.47) in drivers and 0.61 (0.41-0.90) in passengers, respectively. There was a significant preventive effect of helmet use on in-hospital mortality in drivers (AOR: 0.26 (0.21-0.34)). Discussion and conclusions: Wearing helmets in motorcycle crashes reduced intracranial injuries and in-hospital mortality. The preventive effect of motorcycle helmet use on intracranial injury was stronger in drivers than in passengers, and the preventive effect on in-hospital mortality was significant in drivers but not in passengers. Public health efforts to increase motorcycle helmet use are needed to reduce the health burden of injuries caused by motorcycle crashes.
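
For readers unfamiliar with adjusted odds ratios, the sketch below shows one standard way such AORs are obtained: a logistic regression of the outcome on helmet use plus confounders, with the coefficient exponentiated. The data are simulated and the confounder set is an assumption; this is not the EDIIS analysis itself.

```python
# Hedged sketch of how adjusted odds ratios (AORs) like those above can
# be computed: logistic regression of intracranial injury on helmet use
# plus confounders. Data are simulated, not the EDIIS registry.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
helmet = rng.binomial(1, 0.6, n)
age = rng.normal(40, 15, n)
driver = rng.binomial(1, 0.92, n)
# Simulated outcome: helmet use lowers the log-odds of intracranial injury.
logit = -2.0 - 0.85 * helmet + 0.01 * (age - 40) + 0.1 * driver
injury = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([helmet, age, driver]))
fit = sm.Logit(injury, X).fit(disp=False)

aor = np.exp(fit.params[1])          # adjusted OR for helmet use
ci = np.exp(fit.conf_int()[1])       # 95% CI on the OR scale
print(f"AOR (helmet): {aor:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```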

Keywords: intracranial injury, helmet, mortality, motorcycle crashes

Procedia PDF Downloads 163
294 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process

Authors: A. Assad, T. Zayed

Abstract:

Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects both the current condition of water assets and their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Including criticality in computing the performance index will serve as a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic along with the Analytical Network Process (ANP) was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, on a 0-1 scale, that quantifies the severity of the consequence of failure for each pipeline. A novel contribution of this approach is that it accounts both for the interdependency between criteria factors and for the inherent uncertainties in calculating criticality. The practical value of the current study is represented by the automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where highly efficient use of materials and time resources is required.
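
The final aggregation step can be illustrated with a short sketch: criteria weights (assumed here to come from the fuzzy-ANP stage) are combined with normalized single-attribute utilities in an additive MAUT model to produce the 0-1 criticality index. All weights, ranges, and pipe attributes below are invented for illustration.

```python
# Illustrative sketch of the aggregation step: weights assumed to come
# from a fuzzy-ANP analysis are combined with normalized attribute
# utilities via an additive MAUT model, yielding a 0-1 criticality
# index. Weights and pipe data are invented.
import numpy as np

# Assumed consequence-of-failure criteria weights (sum to 1)
weights = {"economic": 0.40, "social": 0.35, "environmental": 0.25}

def utility(value, worst, best):
    """Linear single-attribute utility, scaled to [0, 1]."""
    return np.clip((value - worst) / (best - worst), 0.0, 1.0)

def criticality_index(pipe):
    """Additive MAUT aggregation: higher index = more critical pipe."""
    u = {
        "economic": utility(pipe["repair_cost"], 0, 500_000),      # $
        "social": utility(pipe["customers_affected"], 0, 10_000),  # count
        "environmental": utility(pipe["discharge_m3"], 0, 1_000),  # m^3
    }
    return sum(weights[c] * u[c] for c in weights)

pipe = {"repair_cost": 180_000, "customers_affected": 6_500,
        "discharge_m3": 250}
print(f"Criticality index: {criticality_index(pipe):.2f}")  # 0-1 scale
```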

Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process

Procedia PDF Downloads 129
293 Impacts of Climate Change on Water Resources Management in the Mahi River Basin of India

Authors: Y. B. Sharma, K. B. Biswas

Abstract:

This research project examines a 5000 cal yr BP sediment core record to reveal the consequences of human impact and climate variability on the tropical dry forests of the Mahi river basin, western India. To date, there has been little research assessing the impact of climate variability and human activity on the vegetation dynamics of this region. There has also been little work linking changes in vegetation cover to documented changes in the basin hydrology over the past 100 years, although it is assumed that the two are closely linked. The key objective of this research project, therefore, is to understand the driving mechanisms responsible for the abrupt changes in the Mahi river basin detailed in historical documentation and their impact on water resource management. The Mahi river basin is located in western India (22° 11’-24° 35’ N, 72° 46’-74° 52’ E). The Mahi river rises in the Malwa Plateau, Madhya Pradesh, near Moripara and flows through the uplands and alluvial plains of Rajasthan and Gujarat provinces before draining into the Gulf of Cambay. Palaeoecological procedures (sedimentology, geochemical analysis, C and N isotopes, and fossil pollen evidence) have been applied to sedimentary sequences collected from lakes in the Mahi basin. These techniques make it possible to reconstruct soil erosion, nutrient cycling, vegetation changes, and climatic variability over the last 5000 years. Historical documentation detailing changes in demography, climate, and landscape use over the past 100 years in this region will also be collated for comparison with the most recent palaeoecological records. The results provide a detailed record of vegetation change, soil erosion, changes in aridity, and rainfall patterns in the region over the past 5000 years. This research therefore aims to determine the drivers of change and natural variability in the basin. Such information is essential for its current and future management, including restoration.

Keywords: human impact, climate variability, vegetation cover, hydrology, water resource management, Mahi river basin, sedimentology, geochemistry, fossil pollen, nutrient cycling, vegetation changes, palaeoecology, aridity, rainfall, drivers of change

Procedia PDF Downloads 349
292 Digital Immunity System for Healthcare Data Security

Authors: Nihar Bheda

Abstract:

Protecting digital assets such as networks, systems, and data from advanced cyber threats is the aim of Digital Immunity Systems (DIS), which are a subset of cybersecurity. With features like continuous monitoring, coordinated reactions, and long-term adaptation, DIS seeks to mimic biological immunity. This minimizes downtime by automatically identifying and eliminating threats. Traditional security measures, such as firewalls and antivirus software, are insufficient for enterprises, such as healthcare providers, given the rapid evolution of cyber threats. The number of medical record breaches that have occurred in recent years is proof that attackers are finding healthcare data to be an increasingly valuable target. However, obstacles to enhancing security include outdated systems, financial limitations, and a lack of knowledge. DIS is an advancement in cyber defenses designed specifically for healthcare settings. Protection akin to an "immune system" is produced by core capabilities such as anomaly detection, access controls, and policy enforcement. Coordination of responses across IT infrastructure to contain attacks is made possible by automation and orchestration. Massive amounts of data are analyzed by AI and machine learning to find new threats. After an incident, self-healing enables services to resume quickly. The implementation of DIS is consistent with the healthcare industry's urgent requirement for resilient data security in light of evolving risks and strict guidelines. With resilient systems, it can help organizations lower business risk, minimize the effects of breaches, and preserve patient care continuity. DIS will be essential for protecting a variety of environments, including cloud computing and the Internet of medical devices, as healthcare providers quickly adopt new technologies. DIS lowers traditional security overhead for IT departments and offers automated protection, even though it requires an initial investment. In the near future, DIS may prove to be essential for small clinics, blood banks, imaging centers, large hospitals, and other healthcare organizations. Cyber resilience can become attainable for the whole healthcare ecosystem with customized DIS implementations.
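
As one concrete example of the anomaly-detection capability mentioned above, the sketch below flags unusual access patterns in synthetic audit-log features with an isolation forest. The features, thresholds, and response are assumptions for illustration, not a description of any particular DIS product.

```python
# Minimal sketch of DIS-style anomaly detection: an isolation forest
# flagging unusual access patterns in synthetic audit-log features.
# Feature choices and thresholds are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per session: [records accessed, off-hours fraction, distinct patients]
normal = rng.normal([20, 0.1, 15], [5, 0.05, 4], size=(1000, 3))
attack = rng.normal([400, 0.9, 300], [50, 0.05, 40], size=(5, 3))  # bulk export
sessions = np.vstack([normal, attack])

detector = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
flags = detector.predict(sessions)            # -1 = anomalous, +1 = normal
print(f"Flagged sessions: {np.where(flags == -1)[0]}")
# A DIS would route flagged sessions to automated containment (e.g.,
# revoking the session token) rather than just logging them.
```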

Keywords: digital immunity system, cybersecurity, healthcare data, emerging technology

Procedia PDF Downloads 39
291 Graduate School of Biotechnology and Bioengineering/ YuanZe University

Authors: Sankhanil Das, Arunava Dasgupta, Keya Mitra

Abstract:

This paper investigates the relationship between natural ecological systems and modern urban morphology. Over the years, ecological conditions represented by natural resources such as landforms, water systems, urban geography, and land cover have been significant driving factors in how settlements have formed, expanded, and functioned. These have played a pivotal role in the formation of community character and the cultural identity of urban spaces, and have steered cultural behavior within these settings. Such cultural behaviors have been instrumental in transforming mere spaces into places with meaning and symbolism. The natural process of city formation is principally founded upon the idea of balance and harmony, mostly in a subconscious manner. Reimagining such processes of natural evolution, this paper systematically builds a development model that generates a balance between environment and development, with specific focus on the urban-rural fringe areas of the temple town of Puri in eastern India. Puri represents a unique cross-section of ecological landscape, cultural practices, and religious symbolism, with a very rich history and a vibrant heritage. While the city centre gets more and more crowded with tourists and pilgrims and the businesses that serve them, the original residents of Puri relocate towards the urban peripheral areas for better living conditions, gradually converting agricultural land to non-agricultural uses. This rapid spread into the rural hinterland is devoid of any connection with the rich cultural identity of Puri. The past four decades of ‘development’ have come at the cost of 810 hectares of ecological lake systems in the region. Invaluable ecological resources at urban-rural edges are often viewed as hindrances to development and conceptualized as detracting from the image of the city. This paper attempts to understand the language of development over the years in relation to existing natural resources through topo-analysis and proposes a sustainable approach to development using different planning tools, with ecological resources as the pivotal factor of development.

Keywords: livability, sustainable development, urbanization, urban-rural edge

Procedia PDF Downloads 165
290 Adsorption: A Decision Maker in the Photocatalytic Degradation of Phenol on Co-Catalysts Doped TiO₂

Authors: Dileep Maarisetty, Janaki Komandur, Saroj S. Baral

Abstract:

In the current work, the photocatalytic degradation of phenol was carried out under both UV and visible light to find the slowest step limiting the rate of the photo-degradation process. Characterization experiments such as XRD, SEM, FT-IR, TEM, XPS, UV-DRS, PL, BET, UPS, ESR, and zeta potential measurements were conducted to assess the credibility of the catalysts in boosting photocatalytic activity. To explore the synergy, TiO₂ was doped with graphene and alumina. The orbital hybridization with alumina doping (mediated by graphene) resulted in higher electron transfer from the conduction band of TiO₂ to the alumina surface, where oxygen reduction reactions (ORR) occur. Besides, the doping of alumina and graphene introduced defects into the Ti lattice and helped improve the adsorptive properties of the modified photo-catalyst. Results showed that these defects promoted oxygen reduction reactions (ORR) on the catalyst’s surface. ORR activity produces reactive oxygen species (ROS), which oxidize the phenol molecules adsorbed on the surface of the photo-catalysts, thereby driving the photocatalytic reactions. Since mass transfer is considered the rate-limiting step, various mathematical models were applied to the experimental data to probe the best fit. By varying the parameters, it was found that intra-particle diffusion was the slowest step in the degradation process. The Lagergren model gave the best R² values, indicating the nature of the rate kinetics. Similarly, different adsorption isotherms were employed, and it was found that the Langmuir isotherm fits best, with a tremendous increase in the uptake capacity (mg/g) of TiO₂-rGO-Al₂O₃ compared to undoped TiO₂. This further assisted in higher adsorption of phenol molecules. From the experimental results, kinetic modelling, and adsorption isotherms, it is concluded that, apart from the changes in surface, optoelectronic, and morphological properties that enhanced the photocatalytic activity, intra-particle diffusion within the catalyst’s pores serves as the rate-limiting step deciding the fate of the photo-catalytic degradation of phenol.
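
The two models named in the abstract can be fitted with ordinary nonlinear least squares; the sketch below fits the Lagergren pseudo-first-order kinetic model and the Langmuir isotherm to synthetic data. The functional forms are the standard ones; the data points and parameter values are invented.

```python
# Sketch of the kinetic and isotherm fits named in the abstract: the
# Lagergren pseudo-first-order model and the Langmuir isotherm, fitted
# with nonlinear least squares. The data points are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def lagergren(t, qe, k1):
    """Pseudo-first-order uptake: q(t) = qe * (1 - exp(-k1 * t))."""
    return qe * (1.0 - np.exp(-k1 * t))

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Synthetic kinetic data (t in min, q in mg/g) and equilibrium data
t = np.array([0, 5, 10, 20, 40, 60, 90, 120], dtype=float)
q_t = lagergren(t, 12.0, 0.05) + np.random.default_rng(3).normal(0, 0.2, t.size)
ce = np.array([2, 5, 10, 20, 40, 80], dtype=float)         # mg/L
q_e = langmuir(ce, 15.0, 0.08) + np.random.default_rng(4).normal(0, 0.3, ce.size)

(qe_fit, k1_fit), _ = curve_fit(lagergren, t, q_t, p0=[10, 0.01])
(qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, q_e, p0=[10, 0.1])
print(f"Lagergren: qe = {qe_fit:.2f} mg/g, k1 = {k1_fit:.3f} 1/min")
print(f"Langmuir:  qmax = {qmax_fit:.2f} mg/g, KL = {kl_fit:.3f} L/mg")
```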

Keywords: ORR, phenol degradation, photo-catalyst, rate kinetics

Procedia PDF Downloads 123
289 The Regulation of Reputational Information in the Sharing Economy

Authors: Emre Bayamlıoğlu

Abstract:

This paper aims to provide an account of the legal and regulatory aspects of algorithmic reputation systems, with special emphasis on the sharing economy (e.g., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely the host platform, the individual sharers/service providers, and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from the Airbnb platform, a pioneering success in the sharing economy. The following section focuses on the issue of governance and control of reputational information. It first analyzes the legal consequences of algorithmic filtering systems that detect undesired comments and how a delicate balance could be struck between competing interests such as freedom of speech, privacy, and the integrity of commercial reputation. The third section deals with the problem of manipulation by users. Indeed, many sharing economy businesses employ techniques of data mining and natural language processing to verify the consistency of feedback, while software agents referred to as "bots" are employed by users to "produce" fake reputation values. Such automated techniques are deceptive, with significant negative effects that undermine the trust upon which the reputational system is built. The fourth section explores concerns regarding data mobility, data ownership, and privacy. Reputational information provided by consumers in the form of textual comments may be regarded as writing eligible for copyright protection, and algorithmic reputational systems also contain personal data pertaining to both the individual entrepreneurs and the consumers. The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust and provides an evaluation of the above legal arguments from the perspective of the public interest in the integrity of reputational information. The paper concludes with guidelines and design principles for algorithmic reputation systems that address the legal implications raised above.

Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy

Procedia PDF Downloads 446
288 Prompting and Encouraging Community Hydration through Education: A Realist Review and Evaluation Exploring Hydration in a Population at Risk of Frailty

Authors: Mark Davies, Carolyn Wallace, Christina Lloydwin, Tom Powell

Abstract:

Background: Frailty is increasingly recognized as a public health problem within an aging population. It is often characterized as an accumulation of clinical symptoms with progressive decline. We contend that dehydration is potentially the missing link driving the cycle of frailty; it contributes to malnutrition and cognitive decline and is a risk factor for other conditions. Frailty may also impact on fluid intake in cognitively intact older adults, indicating the cyclical nature of dehydration contributing to increasing frailty. Aim: To examine the relationships between fluid, hydration, and frailty in older adults in order to determine what works, for whom, how, why, and in what circumstances. Methods: A Realist Synthesis was first undertaken with n=50 studies, leading to the development of a Refined Programme Theory (RPT) articulating what hydration interventions work, for whom, to what degree, in what contexts, and how & why. Within the subsequent evaluation, the RPT was further confirmed/refuted/refined following semi-structured interviews with n=8 participants (healthcare professionals and patients). The RAMESES Quality Standards were followed throughout the study. Results: The Refined Programme Theory (RPT) highlighted three factors that result in optimized hydration for frail older people, i.e., Developing an Understanding Around Hydration, Empowering Participation, and System Reconfiguration. Our RPT indicates that hydration interventions work by developing an understanding of the importance of hydration, mitigating physical & cognitive barriers, increasing the agency of the patient, using a prompting process to reinforce drinking behavior, and routinizing hydration as a dimension of overall care. Conclusion: The study indicates that a greater understanding of the importance of hydration is required for all parties. Patients also require physical and psychological support if they are to be active agents in meeting their hydration needs. At a wider ‘system’ level, organizations must work in an integrated manner introducing processes that enable continuing professional development (CPD), encourage ongoing holistic assessment, and routinize hydration support.

Keywords: frailty, dehydration, older adults, realist review, realist evaluation

Procedia PDF Downloads 51
287 Institutional Quality and Tax Compliance: A Cross-Country Regression Evidence

Authors: Debi Konukcu Onal, Tarkan Cavusoglu

Abstract:

In modern societies, the costs of public goods and services are shared through taxes paid by citizens. However, taxation has always been a frictional issue, as tax obligations are perceived as a financial burden on taxpayers rather than as a contribution that fulfills the redistribution, regulation, and stabilization functions of the welfare state. The tax compliance literature has evolved into discussing why people still pay taxes in systems with low costs of legal enforcement. Related empirical and theoretical works show that a wide range of socially oriented behavioral factors can stimulate voluntary compliance, as well as subversive effects. These behavioral motivations are argued to be driven by the self-enforcing rules of informal institutions, either independently or through interactions with the legal orders set by formal institutions. The main focus of this study is to investigate empirically whether institutional particularities have a significant role in explaining cross-country differences in tax noncompliance levels. Part of the controversy about the driving forces behind tax noncompliance may be attributed to the lack of empirical evidence. Thus, this study aims to fill this gap through regression estimates, which help to trace the link between institutional quality and noncompliance on a cross-country basis. The tax evasion estimates of Buehn and Schneider are used as the proxy measure of tax noncompliance. Institutional quality is quantified by three different indicators (percentile ranks of the Worldwide Governance Indicators, ratings of the International Country Risk Guide, and the country ratings of Freedom in the World). Robust Least Squares and Threshold Regression estimates based on a sample of Organization for Economic Co-operation and Development (OECD) countries imply that tax compliance increases with institutional quality. Moreover, a threshold-based asymmetry is detected in the effect of institutional quality on tax noncompliance. That is, the negative effects of tax burdens on compliance are found to be more pronounced in countries with institutional quality below a certain threshold. These findings are robust to all alternative indicators of institutional quality, supporting a significant interaction of societal values with individual taxpayer decisions.
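
The threshold-regression idea can be sketched as follows: the slope of noncompliance on the tax burden is allowed to differ above and below an institutional-quality threshold, which is chosen by minimizing the pooled sum of squared residuals over a grid. The data and the single-threshold, single-regressor setup below are simulated simplifications of the estimators used in the study.

```python
# Illustrative threshold regression: the burden effect on evasion
# differs across institutional-quality regimes; the split point is
# chosen by grid search over the SSR. All data are simulated.
import numpy as np

rng = np.random.default_rng(7)
n = 200
quality = rng.uniform(0, 1, n)              # institutional-quality index
burden = rng.uniform(10, 50, n)             # tax burden, % of income
# Simulated truth: the burden effect is stronger below quality = 0.5
slope = np.where(quality < 0.5, 0.6, 0.15)
evasion = 5 + slope * burden + rng.normal(0, 2, n)

def regime_ssr(mask):
    """OLS of evasion on burden within one regime; return its SSR."""
    X = np.column_stack([np.ones(mask.sum()), burden[mask]])
    beta, *_ = np.linalg.lstsq(X, evasion[mask], rcond=None)
    resid = evasion[mask] - X @ beta
    return float(resid @ resid)

grid = np.quantile(quality, np.linspace(0.15, 0.85, 50))
best_tau = min(grid, key=lambda tau: regime_ssr(quality < tau)
                                     + regime_ssr(quality >= tau))
print(f"Estimated institutional-quality threshold: {best_tau:.2f}")
```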

Keywords: institutional quality, OECD economies, tax compliance, tax evasion

Procedia PDF Downloads 109
286 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, and especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data, whereas machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are as follows. Ultrasound: automated segmentation of cardiac chambers across five common views, with consequent quantification of chamber volumes/mass, ascertainment of ejection fraction, and determination of longitudinal strain through speckle tracking; determining the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identifying myocardial infarction; distinguishing between athlete’s heart and hypertrophic cardiomyopathy, as well as between restrictive cardiomyopathy and constrictive pericarditis; predicting all-cause mortality. CT: reducing radiation doses; calculating the calcium score; diagnosing coronary artery disease (CAD); predicting all-cause 5-year mortality; predicting major cardiovascular events in patients with suspected CAD. MRI: segmenting cardiac structures and infarct tissue; calculating cardiac mass and function parameters; distinguishing between patients with myocardial infarction and control subjects, which could potentially reduce costs since it would preclude the need for gadolinium-enhanced CMR; predicting 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classifying normal and abnormal myocardium in CAD; detecting locations with abnormal myocardium; predicting cardiac death; here ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI emerges as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, and nuclear imaging studies.

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 53
285 Sea of Light: A Game-Based Approach for Evidence-Centered Assessment of Collaborative Problem Solving

Authors: Svenja Pieritz, Jakab Pilaszanovich

Abstract:

Collaborative Problem Solving (CPS) is recognized as one of the most important skills of the 21st century, with a potential impact on education, job selection, and collaborative systems design. CPS has therefore been adopted in several standardized tests, including the Programme for International Student Assessment (PISA) in 2015. A significant challenge in evaluating CPS is the underlying interplay of cognitive and social skills, which requires a more holistic assessment. However, the majority of existing tests use questionnaire-based assessment, which oversimplifies this interplay and undermines ecological validity. Two major difficulties were identified: firstly, the creation of a controllable, real-time environment allowing natural behaviors and communication between at least two people; secondly, the development of an appropriate method to collect and synthesize both cognitive and social metrics of collaboration. This paper proposes a more holistic and automated approach to the assessment of CPS. To address these two difficulties, a multiplayer problem-solving game called Sea of Light was developed: an environment allowing students to deploy a variety of measurable collaborative strategies. This controlled environment enables researchers to monitor behavior through the analysis of game actions and chat. The corresponding statistical model is a combined approach of Natural Language Processing (NLP) and Bayesian network analysis. Social exchanges via the in-game chat are analyzed through NLP and fed into the Bayesian network along with other game actions. This Bayesian network synthesizes evidence to track and update different subdimensions of CPS. Major findings focus on the correlations between the evidence collected through in-game actions, the participants’ chat features, and the CPS self-evaluation metrics. These results give an indication of which game mechanics can best describe CPS evaluation. Overall, Sea of Light gives test administrators control over different problem-solving scenarios and difficulties while keeping the student engaged. It enables a more complete assessment based on complex, socio-cognitive information on actions and communication. This tool permits further investigation of the effects of group constellations and personality in collaborative problem-solving.
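
To illustrate the evidence-synthesis step, the toy sketch below uses a two-node naive Bayesian update: a latent CPS variable is revised from one chat-derived NLP feature and one game action. The structure and all conditional probabilities are invented; the actual model tracks several CPS subdimensions.

```python
# Toy version of the Bayesian-network evidence synthesis: one latent
# CPS node, two observed evidence nodes (a chat-derived NLP feature
# and a game action). All probabilities are invented for illustration.
import numpy as np

# P(CPS): prior over latent skill level {low, high}
prior = np.array([0.5, 0.5])

# P(evidence | CPS) for two binary observables
#   rows: CPS = low, high ; columns: evidence = absent, present
p_chat_coord = np.array([[0.7, 0.3],    # "coordinating" language in chat
                         [0.2, 0.8]])
p_shared_act = np.array([[0.6, 0.4],    # a cooperative in-game action
                         [0.1, 0.9]])

def update(prior, likelihood, observed):
    """One Bayesian update: posterior = normalize(prior * P(obs | CPS))."""
    post = prior * likelihood[:, observed]
    return post / post.sum()

belief = update(prior, p_chat_coord, observed=1)   # chat shows coordination
belief = update(belief, p_shared_act, observed=1)  # cooperative action seen
print(f"P(CPS = high | evidence) = {belief[1]:.2f}")
```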

Keywords: bayesian network, collaborative problem solving, game-based assessment, natural language processing

Procedia PDF Downloads 113
284 Adaption of the Design Thinking Method for Production Planning in the Meat Industry Using Machine Learning Algorithms

Authors: Alica Höpken, Hergen Pargmann

Abstract:

The resource-efficient planning of the complex production processes in the meat industry and the reduction of food waste are permanent challenges. The complexity of the production planning process occurs in every part of the supply chain, from agriculture to the end consumer, and arises from long and uncertain planning phases. Uncertainties such as stochastic yields, fluctuations in demand, and resource variability are part of this process. In the meat industry, waste mainly relates to incorrect storage, technical causes in production, or overproduction. The large amount of food waste along the complex supply chain in the meat industry has so far not been reduced by simple solutions, and resource-efficient production planning by conventional methods is therefore currently only partially feasible. Intelligent, automated production planning can, in principle, be realized through the application of machine learning algorithms such as reinforcement learning. By applying an adapted design thinking method, machine learning methods (especially reinforcement learning algorithms) are used for the complex production planning process in the meat industry; the adaptation makes the method concrete for this application area. A resource-efficient production planning process is thereby made available, and the complex processes can be planned efficiently, since this standardized approach offers new possibilities for tackling the complexity and the high time consumption. It represents a tool to support efficient production planning in the meat industry. This paper shows an elegant adaptation of the design thinking method to applying reinforcement learning for a resource-efficient production planning process in the meat industry. Subsequently, the steps necessary to introduce machine learning algorithms into the production planning of the food industry are determined. This is achieved based on a case study that is part of the research project "REIF - Resource Efficient, Economic and Intelligent Food Chain" supported by the German Federal Ministry for Economic Affairs and Climate Action and the German Aerospace Center. Through this structured approach, significantly better planning results are achieved, which would be too complex or too time-consuming to obtain using conventional methods.
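
A minimal tabular Q-learning loop conveys the reinforcement-learning idea: an agent learns how much to produce given current inventory, trading off spoilage against stockouts. The toy MDP below (demand distribution, rewards, state space) is an assumption for illustration, not the project’s planning model.

```python
# Minimal tabular Q-learning sketch: choose a production quantity given
# inventory; penalize spoilage (overproduction) and stockouts. The MDP
# is a toy, not the REIF planning model.
import numpy as np

rng = np.random.default_rng(0)
MAX_INV, ACTIONS = 10, 6            # inventory levels 0..10, produce 0..5
Q = np.zeros((MAX_INV + 1, ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(inv, produce):
    demand = rng.integers(0, 5)                 # stochastic demand
    stock = min(inv + produce, MAX_INV)
    sold = min(stock, demand)
    spoiled = max(stock - demand - 6, 0)        # perishable surplus spoils
    reward = 3 * sold - 2 * spoiled - 4 * max(demand - stock, 0)
    return stock - sold - spoiled, reward       # next inventory, reward

inv = 0
for _ in range(50_000):
    a = rng.integers(ACTIONS) if rng.random() < eps else int(Q[inv].argmax())
    nxt, r = step(inv, a)
    Q[inv, a] += alpha * (r + gamma * Q[nxt].max() - Q[inv, a])  # TD update
    inv = nxt

print("Learned production policy by inventory level:", Q.argmax(axis=1))
```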

Keywords: change management, design thinking method, machine learning, meat industry, reinforcement learning, resource-efficient production planning

Procedia PDF Downloads 107
283 Accessible Mobile Augmented Reality App for Art Social Learning Based on Technology Acceptance Model

Authors: Covadonga Rodrigo, Felipe Alvarez Arrieta, Ana Garcia Serrano

Abstract:

Mobile augmented reality technologies have become very popular in the educational field in recent years. Researchers have studied how these technologies improve student engagement and understanding of the learning process, but few studies have addressed the accessibility of these new technologies as applied to the digital humanities. The goal of our research is to develop an accessible mobile application with embedded augmented reality main characters drawn from the artwork and gamification events accompanied by multi-sensory activities. The mobile app conducts a learning itinerary around the artistic work, driving the user experience in and out of the museum. The learning design follows an inquiry-based methodology, with social learning conducted through interaction with social networks. The software application is being designed following a user-centered approach and the universal design for learning (UDL) principles, to assure the best level of accessibility for all. The mobile augmented reality application starts by recognizing a marker from a masterpiece in a museum using the camera of the mobile device. The augmented reality information (history, author, 3D images, audio, quizzes) is shown through virtual main characters that come out of the artwork. To comply with the UDL principles, we use a version of the technology acceptance model (TAM) to study ease of use and perceived usefulness, extended by the authors with specific indicators for measuring accessibility issues. Following a rapid prototyping method of development, the first app has recently been produced, fulfilling the EN 301549 standard and the W3C accessibility guidelines for mobile development. A TAM-based web questionnaire with 214 participants with different kinds of disabilities was previously conducted to gather information and feedback on user preferences regarding the artistic work in the Museo del Prado, the level of acceptance of technology innovations, and the ease of use of mobile elements. Preliminary results show that people with disabilities felt very comfortable using mobile apps and internet connections. The augmented reality elements seem to offer a highly engaging and motivating added value for the students.

Keywords: H.5.1 (multimedia information systems), artificial, augmented and virtual realities, evaluation/methodology

Procedia PDF Downloads 112
282 AIR SAFE: An Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms

Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita

Abstract:

Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of airborne viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people’s well-being at home or in offices and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) an automated and cost-effective monitoring network; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures that ensure the occupants’ wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
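
The predict-then-act loop can be sketched in a few lines: a simple autoregressive forecast of indoor CO₂ followed by a rule that triggers ventilation when the prediction exceeds a comfort threshold. The AR model and the 1000 ppm threshold are illustrative assumptions, not the AIR SAFE algorithm itself.

```python
# Hedged sketch of a predict-then-act loop: least-squares AR forecast
# of indoor CO2, then a threshold rule that triggers ventilation. The
# model and the 1000 ppm threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
# Synthetic CO2 trace (ppm), drifting upward in an occupied room
co2 = 600 + np.cumsum(rng.normal(4, 3, 120))

def ar_forecast(series, lags=5, horizon=10):
    """Fit a least-squares AR(lags) model and roll it forward."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y,
                               rcond=None)
    window = list(series[-lags:])                # oldest -> newest
    preds = []
    for _ in range(horizon):
        nxt = coef[0] + np.dot(coef[1:], window)
        preds.append(nxt)
        window = window[1:] + [nxt]
    return np.array(preds)

forecast = ar_forecast(co2)                      # next 10 time steps
if forecast.max() > 1000.0:                      # assumed comfort threshold
    print("Predicted CO2 exceeds 1000 ppm -> open window / boost ventilation")
else:
    print("Air quality within limits; no action")
```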

Keywords: air quality, internet of things, artificial intelligence, smart home

Procedia PDF Downloads 63
281 Liver Regeneration of Small in situ Injury

Authors: Ziwei Song, Junjun Fan, Jeremy Teo, Yang Yu, Yukun Ma, Jie Yan, Shupei Mo, Lisa Tucker-Kellogg, Peter So, Hanry Yu

Abstract:

The liver is the center of detoxification and is exposed to toxic metabolites all the time. It is highly regenerative after injury, with the ability to restore itself even after 70% partial hepatectomy. Most previous studies have used hepatectomy as the injury model for studying liver regeneration, so there is limited understanding of small-scale liver injury, which can be caused by either low-dose drug consumption or routine hepatocyte metabolism. Although these small in situ injuries do not cause immediate symptoms, repeated injuries lead to aberrant wound healing in the liver. Therefore, the cellular dynamics during liver regeneration are critical for our understanding of the liver regeneration mechanism. We aim to study the regeneration of small-scale in situ liver injury in transgenic mice labeling actin (Lifeact-GFP). Previous studies have used tissue sections and liver biopsies, which lack real-time information. In order to trace every individual hepatocyte during the regeneration process, we have developed and optimized an intravital imaging system that allows in vivo imaging of the mouse liver for 5 consecutive days, allowing real-time cellular tracking and quantification of hepatocytes. We used femtosecond-laser ablation to create a controlled and repeatable liver injury model that mimics real-life small in situ liver injury. This injury model is the first of its kind for in vivo liver studies. We found that small in situ liver injury is repaired by the coordination of hypertrophy and migration of hepatocytes. Hypertrophy is only transient in the initial phase, while migration is the main driving force completing the regeneration process. From the cellular aspect, the Akt/mTOR pathway is activated immediately after injury, which leads to transient hepatocyte hypertrophy. From the mechano-sensing aspect, the actin cable formed at the apical surface of wound-proximal hepatocytes provides mechanical tension for hepatocyte migration. This study provides important information on both the chemical and mechanical signals that promote liver regeneration after small in situ injury. We conclude that hypertrophy and migration play dominant roles at different stages of liver regeneration.

Keywords: hepatocyte, hypertrophy, intravital imaging, liver regeneration, migration

Procedia PDF Downloads 187
280 Improved Technology Portfolio Management via Sustainability Analysis

Authors: Ali Al-Shehri, Abdulaziz Al-Qasim, Abdulkarim Sofi, Ali Yousef

Abstract:

The oil and gas industry has played a major role in improving the prosperity of mankind and driving the world economy. According to International Energy Agency (IEA) and Energy Information Administration (EIA) estimates, the world will continue to rely heavily on hydrocarbons for decades to come. This growing energy demand mandates taking sustainability measures to prolong the availability of reliable and affordable energy sources and to lower their environmental impact. Unlike those of any other industry, oil and gas upstream operations are energy-intensive and scattered over large zonal areas. These challenging conditions require unique sustainability solutions. In recent years there has been a concerted effort by the oil and gas industry to develop and deploy innovative technologies to maximize efficiency, reduce the carbon footprint, reduce CO2 emissions, and optimize resource and material consumption. In the past, research and development (R&D) in the exploration and production sector was primarily driven by maximizing profit through higher hydrocarbon recovery and new discoveries. Environmentally friendly and sustainable technologies are increasingly being deployed to balance sustainability and profitability. Analyzing technology and its sustainability impact is increasingly being used in corporate decision-making for improved portfolio management and for allocating valuable resources toward technology R&D. This paper articulates and discusses a novel workflow to identify strategic sustainable technologies for improved portfolio management by addressing existing and future upstream challenges. It uses a systematic approach that relies on sustainability key performance indicators (KPIs), including the energy efficiency quotient, carbon footprint, and CO2 emissions. The paper provides examples of various technologies, including CCS, reducing water cuts, automation, using renewables, energy efficiency, etc. The use of 4IR technologies such as Artificial Intelligence, Machine Learning, and Data Analytics is also discussed. Overlapping technologies, areas of collaboration, and synergistic relationships are identified. The unique sustainability analyses enable improved decision-making on technology portfolio management.

Keywords: sustainability, oil& gas, technology portfolio, key performance indicator

Procedia PDF Downloads 164
279 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield, and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured over time at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time it takes the sample to attain its peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange, and watermelon. Thermal conductivity values of 0.593, 0.598, and 0.586 W/m·°C and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷, and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange, and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, as it measured the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p > 0.05) between the literature standards and the averages estimated with the developed instrument for each sample investigated.
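
The conductivity calculation follows directly from the line-heat-source relation k = q / (4π · slope), where the slope is that of temperature rise versus ln(t) and q is the power per unit length (16.33 W/m above). The sketch below applies it to synthetic probe data; only q, the sampling schedule, and the banana conductivity value are taken from the abstract.

```python
# Worked sketch of the line-heat-source calculation: thermal
# conductivity from the slope of temperature rise vs. ln(time),
# k = q / (4*pi*slope). Probe data here are synthetic.
import numpy as np

Q_W_PER_M = 16.33                       # applied line power (from abstract)
t = np.arange(4.0, 244.0, 4.0)          # sampling every 4 s for 240 s

# Synthetic probe temperatures following the ideal line-source response
k_true = 0.593                          # e.g., banana juice, W/(m*degC)
slope_true = Q_W_PER_M / (4.0 * np.pi * k_true)
temp_rise = (slope_true * np.log(t)
             + np.random.default_rng(2).normal(0, 0.02, t.size))

# Linear fit of temperature rise against ln(t); slope -> conductivity
slope, _ = np.polyfit(np.log(t), temp_rise, 1)
k_est = Q_W_PER_M / (4.0 * np.pi * slope)
print(f"Estimated thermal conductivity: {k_est:.3f} W/(m*degC)")
```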

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 330
278 Logistics and Supply Chain Management Using Smart Contracts on Blockchain

Authors: Armen Grigoryan, Milena Arakelyan

Abstract:

Smart logistics is still a complicated idea. It can be used to market products to a large number of customers or to acquire raw materials of the highest quality at the lowest cost across geographically dispersed areas. The use of smart contracts in logistics and supply chain management has the potential to revolutionize the way goods are tracked, transported, and managed. Smart contracts are computer programs written in a blockchain programming language (such as Solidity, Rust, or Vyper) that execute themselves once predetermined conditions are met. They can be used to automate and streamline many of the traditionally manual processes in logistics and supply chain management, including the tracking and movement of goods, inventory management, and the facilitation of payments and settlements between different parties in the supply chain. Logistics, the transport of products between parties, is a core activity for companies, but its scale can lead to delays and defaults in the delivery of goods, among other issues. Moreover, large distributors require many workers to meet all the needs of their stores. All of this can contribute to long delays in order processing and increases the likelihood of losing orders. To address this problem, companies have automated their procedures, contributing to significant growth in the number of businesses and distributors in the logistics sector. Blockchain technology and smart-contract-based legal agreements therefore appear to be suitable concepts for redesigning and optimizing collaborative business processes and supply chains. The main purpose of this paper is to examine the scope of blockchain technology and smart contracts in the field of logistics and supply chain management. The study addresses the research question of how, and to what extent, smart contracts and blockchain technology can facilitate and improve the implementation of collaborative business structures for sustainable entrepreneurial activities in smart supply chains. The intention is to provide a comprehensive overview of existing research on the use of smart contracts in logistics and supply chain management and to identify gaps and limitations in current knowledge of the topic. The review summarizes and evaluates the key findings and themes that emerge from the research and suggests potential directions for future work on the use of smart contracts in logistics and supply chain management.
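To make the self-execution idea concrete, the sketch below simulates, in plain Python, the kind of rule a shipment-tracking smart contract might encode: payment locked in escrow is released automatically once delivery is confirmed. A deployable contract would be written in a blockchain language such as Solidity or Vyper, as the abstract notes; this class only mimics the logic, and all names and values are illustrative assumptions.

```python
# Simulation of self-executing escrow logic for a shipment-tracking smart
# contract. A real contract would run on-chain (e.g., in Solidity or Vyper);
# this Python class only mimics the rule "release payment once delivery is
# confirmed". All names and values are illustrative assumptions.

class ShipmentContract:
    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.funds_escrowed = False
        self.delivered = False
        self.settled = False

    def deposit(self, amount):
        """Buyer locks the agreed payment in escrow."""
        if amount != self.price:
            raise ValueError("deposit must equal the agreed price")
        self.funds_escrowed = True

    def confirm_delivery(self):
        """Carrier (or oracle) marks the goods as delivered; settlement is
        triggered automatically once the predetermined conditions hold."""
        self.delivered = True
        self._try_settle()

    def _try_settle(self):
        if self.funds_escrowed and self.delivered and not self.settled:
            self.settled = True
            print(f"Released {self.price} units to {self.seller}")

contract = ShipmentContract("buyer_warehouse", "seller_factory", 1000)
contract.deposit(1000)
contract.confirm_delivery()  # -> Released 1000 units to seller_factory
```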

Keywords: smart contracts, smart logistics, smart supply chain management, blockchain and smart contracts in logistics, smart contracts for controlling supply chain management

Procedia PDF Downloads 71
277 A Survey and Analysis on Inflammatory Pain Detection and Standard Protocol Selection Using Medical Infrared Thermography from Image Processing View Point

Authors: Mrinal Kanti Bhowmik, Shawli Bardhan Jr., Debotosh Bhattacharjee

Abstract:

Human skin, being above absolute zero, emits infrared radiation whose intensity depends on the body temperature. Differences in the infrared radiation from the skin surface reflect abnormality in the human body. Detecting and forecasting these temperature variations across the skin surface is the main objective of using Medical Infrared Thermography (MIT) as a diagnostic tool for pain detection. MIT is a non-invasive imaging technique that records and monitors the temperature distribution of the body by receiving the infrared radiation emitted from the skin and representing it as a thermogram. Thermogram intensity measures the inflammation at the skin surface associated with pain in the human body, and analysis of thermograms provides automated detection of anomalies in suspicious pain regions through a series of image processing steps. The paper presents a rigorous survey of the processing and analysis of thermograms based on previous work published in the area of infrared thermal imaging for detecting inflammatory pain diseases such as arthritis, spondylosis, and shoulder impingement. The study also summarizes, in tabular form, the performance of thermogram processing alongside thermogram acquisition protocols, thermal camera specifications, and the types of pain detected by thermography, giving a clear structural view of past work. The major contribution of the paper is a new thermogram acquisition standard for inflammatory pain detection in the human body, intended to enhance the performance rate. The FLIR T650sc infrared camera, with high sensitivity and resolution, is adopted to increase the accuracy of thermogram acquisition and analysis. The survey of previous research highlights that intensity-distribution-based comparison of comparable, symmetric regions of interest, together with statistical analysis, yields adequate results in identifying and detecting physiological disorders related to inflammatory diseases.
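The symmetric region-of-interest comparison the survey highlights can be prototyped in a few lines: extract intensity distributions from mirrored ROIs of a thermogram and test whether they differ statistically. The image, ROI coordinates, and significance threshold below are illustrative assumptions, not the survey's protocol.

```python
# Sketch of symmetric-ROI comparison on a thermogram, assuming the image is
# already loaded as a 2D array of temperature values. The synthetic data,
# ROI coordinates, and threshold are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
thermogram = rng.normal(33.0, 0.3, size=(240, 320))  # synthetic skin temps, °C
thermogram[100:140, 200:240] += 1.2                  # simulated inflamed region

def roi(img, top, left, h, w):
    """Flatten a rectangular region of interest into a 1D intensity sample."""
    return img[top:top + h, left:left + w].ravel()

# Mirrored regions of interest on the left and right sides of the body part
left_roi = roi(thermogram, 100, 80, 40, 40)
right_roi = roi(thermogram, 100, 200, 40, 40)

# Compare the two intensity distributions; a significant difference flags a
# possible inflammatory region on one side.
t_stat, p_value = stats.ttest_ind(left_roi, right_roi, equal_var=False)
print(f"mean left = {left_roi.mean():.2f} °C, mean right = {right_roi.mean():.2f} °C")
print(f"p = {p_value:.3g} -> {'asymmetry detected' if p_value < 0.05 else 'symmetric'}")
```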

Keywords: acquisition protocol, inflammatory pain detection, medical infrared thermography (MIT), statistical analysis

Procedia PDF Downloads 323
276 Effective Use of X-Box Kinect in Rehabilitation Centers of Riyadh

Authors: Reem Alshiha, Tanzila Saba

Abstract:

Physical rehabilitation is the process of helping people recover and return to activities that have been interrupted by external factors such as car accidents, strokes, old age, chronic disease, and sports injuries. Hiring a personal nurse or driving the patient to and from the hospital can be costly and time-consuming, and there are other factors to take into account such as forgetfulness, boredom, and lack of motivation. To solve this dilemma, experts have developed rehabilitation software to be used with the Microsoft Kinect, helping patients and their families carry out rehabilitation at home. In-home rehabilitation software is becoming more and more popular, since it is more convenient for all parties involved with the patient. In contrast to costly market-based systems with no portability, Microsoft's Kinect is a portable motion sensor that reads body movements and interprets them. New software development has made rehabilitation games available for home use, for the convenience of the patient, saving its users (rehabilitation patients) both time and money. Many software packages can be used with the Kinect for rehabilitation; the software chosen for this research is Kinectotherapy. Kinectotherapy was deployed for rehabilitation patients in Riyadh clinics to test its acceptance by patients and their physicians. In this study, the Kinect was used because it is affordable, portable, and easy to access, in contrast to expensive market-based motion sensors. This paper explores the importance of in-home rehabilitation using the Kinect with Kinectotherapy software. The software targets both upper and lower limbs, but this research focuses mainly on upper-limb functionality. In-home rehabilitation is applicable to all patients with motor disability who retain some self-reliance; the targeted subjects are patients with minor motor impairment who are somewhat independent in their mobility. The presented work is the first to consider in-home rehabilitation with real-time feedback to the patient and physician, and it proposes the implementation of in-home rehabilitation in Riyadh, Saudi Arabia. The findings show that most patients are interested in, and motivated to use, the in-home rehabilitation system in the future. The main value of the software application lies in three factors: it improves patient engagement through stimulating rehabilitation, provides a low-cost rehabilitation tool, and reduces the need for expensive one-to-one clinical contact. Rehabilitation is a crucial treatment that can improve the quality of life, confidence, and self-esteem of the patient.
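As a hint of how Kinect-style skeleton data can drive exercise feedback of the kind described, the sketch below computes an elbow flexion angle from three tracked joint positions and compares it to a target range. The joint coordinates, target range, and feedback messages are invented for illustration and do not come from Kinectotherapy; a real system would read joints from the Kinect SDK's skeleton stream.

```python
# Sketch of upper-limb exercise feedback from Kinect-style skeleton data.
# The joint coordinates and target range are illustrative assumptions; a
# real system would read them from the Kinect SDK's skeleton stream.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Hypothetical 3D positions (meters) for shoulder, elbow, and wrist joints
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.05, 1.1, 0.1), (0.3, 1.15, 0.2)

flexion = joint_angle(shoulder, elbow, wrist)
target_low, target_high = 40.0, 90.0  # assumed therapeutic range
if target_low <= flexion <= target_high:
    print(f"Elbow flexion {flexion:.0f}°: within target range, good rep!")
else:
    print(f"Elbow flexion {flexion:.0f}°: adjust your arm position")
```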

Keywords: x-box, rehabilitation, physical therapy, rehabilitation software, kinect

Procedia PDF Downloads 321
275 A Unified Approach for Digital Forensics Analysis

Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles

Abstract:

Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints that follow from it, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT) devices, databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. It is also increasingly essential to compare and correlate evidence across data sources efficiently and effectively, so that an investigator can answer high-level questions of the data in a timely manner without trawling through it and performing the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to work on cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used. For example, in a child abduction case, an investigation team might have evidence from a range of sources including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.
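One way to picture the common-language idea behind such a tool is to normalize heterogeneous records into a shared schema and correlate them on entity and time. The record formats, field names, and time window below are illustrative assumptions, not the U-FAT design.

```python
# Sketch of cross-source evidence correlation on a shared schema. Record
# formats, field names, and the time window are illustrative assumptions,
# not the actual U-FAT design.
from datetime import datetime, timedelta

# Heterogeneous sources normalized to (source, entity, timestamp, detail)
events = [
    ("cell_tower", "suspect_phone", datetime(2023, 5, 1, 14, 2), "tower #41"),
    ("cctv",       "suspect_phone", datetime(2023, 5, 1, 14, 5), "camera 7 sighting"),
    ("isp_log",    "suspect_phone", datetime(2023, 5, 1, 18, 40), "home IP login"),
]

def correlate(events, entity, window=timedelta(minutes=10)):
    """Group an entity's events from different sources that fall within a
    common time window -- a simple stand-in for automated correlation."""
    hits = sorted((e for e in events if e[1] == entity), key=lambda e: e[2])
    if not hits:
        return []
    clusters, current = [], [hits[0]]
    for ev in hits[1:]:
        if ev[2] - current[-1][2] <= window:
            current.append(ev)
        else:
            clusters.append(current)
            current = [ev]
    clusters.append(current)
    # Keep only clusters that span more than one evidence source
    return [c for c in clusters if len({e[0] for e in c}) > 1]

for cluster in correlate(events, "suspect_phone"):
    print("correlated:", [(e[0], e[3]) for e in cluster])
```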

Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool

Procedia PDF Downloads 167
274 Aromatic Medicinal Plant Classification Using Deep Learning

Authors: Tsega Asresa Mengistu, Getahun Tigistu

Abstract:

Computer vision is an artificial intelligence subfield that allows computers and systems to retrieve meaning from digital images. It is applied in fields such as self-driving cars, video surveillance, agriculture, quality control, healthcare, construction, the military, and everyday life. Aromatic and medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, and other natural health products for therapeutic and aromatic culinary purposes, and herbal industries depend on these special plants. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but also earn valuable foreign exchange as exported industrial raw materials. However, Ethiopia lacks technologies for the classification and identification of aromatic and medicinal plants. Manual identification of plants is a tedious, labor-intensive, and lengthy process, and it is still difficult for farmers, industry personnel, academics, and pharmacists to identify plant parts and usages before ingredient extraction. To solve this problem, this research takes a deep learning approach to the efficient identification of aromatic and medicinal plants using a convolutional neural network. The objective of the proposed study is to identify aromatic and medicinal plant parts and usages using computer vision technology; accordingly, the research develops a model for the automatic classification of aromatic and medicinal plants. Morphological characteristics are still the most important tools for the identification of plants, and leaves are the most widely used parts, alongside the root, flower, fruit, latex, and bark. The study was conducted on aromatic and medicinal plants available at the Ethiopian Institute of Agricultural Research center, using an experimental research design built on convolutional neural networks and transfer learning. The models employ rectified linear unit (ReLU) activations in the hidden layers and a sigmoid activation in the output layer. Finally, classification accuracies of 66.4% with a plain convolutional neural network, 67.3% with MobileNet, and 64% with a VGG (Visual Geometry Group) network were obtained.
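A minimal sketch of this kind of transfer-learning classifier follows, assuming a MobileNet backbone with a ReLU hidden layer and a sigmoid output as the abstract describes. The dataset directory, image size, and class count are placeholder assumptions, not details from the study.

```python
# Minimal transfer-learning sketch: frozen MobileNet backbone, ReLU hidden
# layer, sigmoid output, as described in the abstract. The dataset path,
# image size, and class count are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5          # assumed number of plant classes
IMG_SIZE = (224, 224)

base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False   # freeze pretrained features for transfer learning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),             # ReLU hidden layer
    layers.Dense(NUM_CLASSES, activation="sigmoid"),  # sigmoid output layer
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would use a labeled image dataset, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "plants/", image_size=IMG_SIZE, label_mode="categorical")
# model.fit(train_ds, epochs=10)
```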

Keywords: aromatic and medicinal plants, computer vision, deep convolutional neural network

Procedia PDF Downloads 397
273 The Effects of Globalization on Health: A Case of Kenyatta National Hospital Healthcare Services

Authors: S. Ithai, A. Oloo

Abstract:

The emergence of globalization has cultivated an international consensus that without economic development, a country is very unlikely to realize social or political development. It is equally important to note that the economic effect on social development automatically influences the country's healthcare services, as healthcare systems are improved and adopted. For decades before the 1980s, the colonial administration and the Governments of Kenya pursued the goal of providing free healthcare services to citizens, with minimal success; as the population increased, this endeavor became almost a mirage. The challenge called for a change of strategy with the introduction of cost sharing, which also could not guarantee sustainable healthcare services in the country owing to the increased number of poor people and growing poverty. A multisectoral approach to healthcare provision, with collaboration and the adoption of all dimensions of globalization, offers a ray of hope not only for economic, political, and social development but also for equitable and reliable healthcare systems in Kenya, and specifically for referral healthcare services at Kenyatta National Hospital (KNH). With the advent of globalization, KNH has made positive strides that have guaranteed patients reliable healthcare services, including increased donor funding, collaboration, training, and research, as well as enhanced relations with international partners. During this period the hospital has increased its number of local doctors and nurses and enhanced the transfer of skills, innovations, and technologies, which are driving forces behind quality and efficient healthcare services. The period has also brought challenges for the hospital, including increased competition and the attraction of qualified nurses and doctors to international markets, issues that have made the hospital spend more resources on research and development in order to stay afloat. This paper reveals the link between globalization and healthcare and its influence on institutional policy choices. However, the process is not expected to take place automatically, without institutional initiatives, if KNH is to reap the benefits of globalization. KNH needs to make use of its existing infrastructure, human resources, and donor confidence, opportunities that are indeed important in propelling KNH toward Vision 2030 and achieving the desired Millennium Development Goals (MDGs).

Keywords: globalization, Kenyatta National Hospital, native, healthcare

Procedia PDF Downloads 317
272 Development of a 3D Model of Real Estate Properties in Fort Bonifacio, Taguig City, Philippines Using Geographic Information Systems

Authors: Lyka Selene Magnayi, Marcos Vinas, Roseanne Ramos

Abstract:

As the real estate industry continues to grow in the Philippines, Geographic Information Systems (GIS) provide advantages in generating spatial databases for efficient delivery of information and services. The real estate sector not only provides qualitative data about real estate properties but also utilizes various spatial aspects of these properties for applications such as hazard mapping and assessment. In this study, a three-dimensional (3D) model and a spatial database of real estate properties in Fort Bonifacio, Taguig City are developed using GIS and SketchUp. Spatial datasets include political boundaries, buildings, the road network, a digital terrain model (DTM) derived from an Interferometric Synthetic Aperture Radar (IFSAR) image, Google Earth satellite imagery, and hazard maps. Multiple model layers were created based on property listings by a partner real estate company, covering both existing and future property buildings. Actual building dimensions, facades, and floorplans are incorporated in the 3D models for geovisualization. Hazard model layers are determined through spatial overlays, and different hazard scenarios are also presented in the models. Animated maps and walkthrough videos were created for company presentation and evaluation. The models were evaluated through client surveys scoring the appropriateness, information content, and design of the 3D models. Survey results show very satisfactory ratings, with the highest average evaluation score at 9.21 out of 10. The output maps and videos obtained passing rates based on the criteria and standards set by the intended users at the partner real estate company. The methodologies presented in this study were found useful and offer remarkable advantages to the real estate industry. This work may be extended to automated mapping and the creation of online spatial databases for better storage of and access to real property listings, and to an interactive platform using web-based GIS.
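The hazard spatial-overlay step can be prototyped with standard GIS libraries: intersect building footprints with hazard polygons and tag each building with a hazard level. The geometries, coordinate reference system, and attribute names below are invented placeholders, not the study's datasets.

```python
# Sketch of a hazard spatial overlay, assuming building footprints and a
# flood-hazard polygon as GeoDataFrames. Geometries, CRS, and attribute
# names are invented placeholders, not the study's datasets.
import geopandas as gpd
from shapely.geometry import Polygon

buildings = gpd.GeoDataFrame(
    {"name": ["Tower A", "Tower B"]},
    geometry=[
        Polygon([(0, 0), (0, 10), (10, 10), (10, 0)]),
        Polygon([(30, 30), (30, 40), (40, 40), (40, 30)]),
    ],
    crs="EPSG:32651",  # assumed projected CRS covering Metro Manila
)

flood_zone = gpd.GeoDataFrame(
    {"hazard_level": ["high"]},
    geometry=[Polygon([(-5, -5), (-5, 15), (15, 15), (15, -5)])],
    crs="EPSG:32651",
)

# Spatial overlay: which building footprints fall inside the hazard polygon?
exposed = gpd.overlay(buildings, flood_zone, how="intersection")
print(exposed[["name", "hazard_level"]])
# Buildings absent from 'exposed' (here, Tower B) lie outside the flood zone.
```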

Keywords: geovisualization, geographic information systems, GIS, real estate, spatial database, three-dimensional model

Procedia PDF Downloads 143
271 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy

Authors: Paul R Armstrong

Abstract:

Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool in which the remaining 90% are hybrid siblings. This places time constraints on DH production, and manual sorting is often inaccurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices such as NMR analyzers have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform for accurately identifying DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel in maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares (PLS) regression model for oil content, and a categorical reference model of 1 (DH kernel) or 2 (hybrid kernel), which were then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model, developed from all crosses, to predict oil content for sorting each induction cross; the second developed a specific model from a single induction cross, using approximately fifty DH and one hundred hybrid kernels. The second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and kernels selected for the calibration set were manually referenced by traditional commercial methods using the coloration of the tip cap and germ areas. The generalized PLS oil model statistics were R2 = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting with this model extracted 55% to 85% of the haploid kernels from the four induction crosses. The second method of generating a model for each cross yielded model statistics ranging from R2 = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were cross-specific. In summary, the first, generalized oil-model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of a sorting model developed from a single cross; the penalty for the second method is that a PLS model must be developed for each individual cross. In conclusion, both methods could find useful application in sorting DH from hybrid kernels.
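A PLS calibration of the kind described can be sketched with scikit-learn: fit spectra against reference oil values, then threshold predictions to flag likely haploids. The synthetic spectra, component count, and "low oil implies DH" cutoff below are illustrative assumptions, not the study's calibration.

```python
# Sketch of a PLS calibration for kernel oil content from NIR spectra,
# assuming a matrix of spectra and reference oil values. The synthetic data,
# component count, and sorting threshold are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_kernels, n_wavelengths = 150, 200

# Synthetic spectra whose shape varies weakly with oil content (2.7-19.3%)
oil = rng.uniform(2.7, 19.3, n_kernels)
basis = np.sin(np.linspace(0, 3 * np.pi, n_wavelengths))
spectra = oil[:, None] * basis + rng.normal(0, 0.5, (n_kernels, n_wavelengths))

pls = PLSRegression(n_components=10)
pls.fit(spectra, oil)

predicted = pls.predict(spectra).ravel()
rmse = np.sqrt(np.mean((predicted - oil) ** 2))
print(f"calibration RMSE: {rmse:.2f}% oil")

# Sorting rule: the induction crosses leave DH kernels with lower oil than
# their hybrid siblings, so kernels below an assumed cutoff are flagged.
cutoff = 8.0
print(f"{np.sum(predicted < cutoff)} kernels flagged as putative haploids")
```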

Keywords: NIR, haploids, maize, sorting

Procedia PDF Downloads 282