Search results for: train accident
187 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies.
The progression of society goes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: computational analysis, gendered grammar, misogynistic language, neural networks
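As an illustrative sketch only (not the authors' code or data), the kind of corpus analysis the abstract describes can be approximated by counting how often descriptive adjectives co-occur with gendered tokens within a small window; the token lists, adjectives, and window size below are all assumptions.

```python
from collections import Counter

# Hypothetical gendered-token lexicon; a real study would use a far larger list.
GENDERED = {"she": "female", "her": "female", "woman": "female",
            "he": "male", "his": "male", "man": "male"}

def adjective_cooccurrence(tokens, adjectives, window=3):
    """Count adjective co-occurrences with gendered tokens within +/- window."""
    counts = {"female": Counter(), "male": Counter()}
    for i, tok in enumerate(tokens):
        gender = GENDERED.get(tok.lower())
        if gender is None:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for neighbor in tokens[lo:hi]:
            if neighbor.lower() in adjectives:
                counts[gender][neighbor.lower()] += 1
    return counts

# Toy sentence standing in for a book-length corpus.
tokens = "she was hysterical and he was brilliant ; she seemed shrill".split()
counts = adjective_cooccurrence(tokens, {"hysterical", "brilliant", "shrill"})
```

On real corpora, comparing the two counters (e.g., via a ratio or a log-odds score) surfaces the asymmetries in descriptive language that the paper reports.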
Procedia PDF Downloads 119
186 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes
Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini
Abstract:
Background: There is growing evidence that Cerebral Vascular Accident (CVA) can be a consequence of COVID-19 infection. Understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of Virtual Reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed right globus pallidus, thalamus, and internal capsule ischemic stroke. Conventional rehabilitation was started two weeks later, with VR included. This game-based VR technology, developed for stroke patients, was based on upper extremity exercises and functions. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency. Mild lower lip dynamic asymmetry was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed based on upper extremity physiotherapy exercises for post-stroke patients to increase the active, voluntary movement of the upper extremity joints and improve their function. The conventional program was initiated with active exercises: shoulder sanding for joint ROM, shoulder walking, shoulder wheel, and combination movements of the shoulder, elbow, and wrist joints, alternating flexion-extension and pronation-supination movements, and pegboard and Purdue Pegboard exercises. Fine-movement training included smart gloves, biofeedback, finger ladder, and writing. The difficulty of the game increased at each stage of the practice with progress in patient performance. Outcome: After 6 weeks of treatment, gait and speech were normal, and upper extremity strength had improved to near-normal status. No adverse effects were noted.
Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases provides new approaches to improve therapeutic outcomes and prognosis, as well as an increased satisfaction rate among patients.
Keywords: COVID-19, stroke, virtual reality, rehabilitation
Procedia PDF Downloads 141
185 Overcoming Barriers to Improve HIV Education and Public Health Outcomes in the Democratic Republic of Congo
Authors: Danielle A. Walker, Kyle L. Johnson, Tara B. Thomas, Sandor Dorgo, Jacen S. Moore
Abstract:
Approximately 37 million people worldwide are infected with the Human Immunodeficiency Virus (HIV), with the majority located in sub-Saharan Africa. The relationship between HIV incidence and socioeconomic inequity confirms the critical need for programs promoting HIV education, prevention, and treatment access. This literature review analyzed 36 sources with a specific focus on the Democratic Republic of Congo, whose critically low socioeconomic status and education rate have resulted in drastically high HIV rates. Relationships between HIV testing and treatment and barriers to care were explored. Cultural and religious considerations were found to be vital when creating and implementing HIV education and testing programs. Partnerships encouraging active support from community-based spiritual leaders to implement HIV educational programs were also key mechanisms to reach communities and individuals. Gender roles were highlighted as a key component for implementation of effective community trust-building and successful HIV education programs. The efficacy of added support by hospitals and clinics in rural areas to facilitate access to HIV testing and care for people living with HIV/AIDS (PLWHA) was discussed. This review highlighted the need for healthcare providers to provide a network of continued education for PLWHA in clinical settings during disclosure and throughout the course of treatment to increase retention in care and promote medication adherence for viral load suppression. Implementation of culturally sensitive models that rely on community familiarity with HIV educators, such as 'train-the-trainer', was also proposed as an efficacious tool for educating rural communities about HIV.
Further research is needed to promote community partnerships for HIV education, understand the cultural context of gender roles as barriers to care, and empower local health care providers to be successful within the HIV Continuum of Care.
Keywords: cultural sensitivity, Democratic Republic of the Congo, education, HIV
Procedia PDF Downloads 274
184 Proposal for a Framework for Teaching Entrepreneurship and Innovation Using the Methods and Current Methodologies
Authors: Marcelo T. Okano, Jaqueline C. Bueno, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi
Abstract:
Developing countries are increasingly finding that entrepreneurship and innovation are the ways to speed up their development and initiate or encourage technological development. Educational institutions such as universities, colleges, and colleges of technology have two main roles in this process: to guide and train entrepreneurs, and to provide technological knowledge and encourage innovation. This completes the triple helix model of innovation, involving universities, government, and industry. But the teaching of entrepreneurship and innovation cannot follow only the traditional model, with blackboard, chalk, and classroom. New methods and methodologies such as Canvas, elevator pitching, design thinking, etc., require students to get involved and to experience simulations of business, expressing their ideas and discussing them. The objective of this research project is to identify the main methods and methodologies used for the teaching of entrepreneurship and innovation, to propose a framework, to test it, and to conduct a case study. To achieve this objective, we first surveyed the literature on entrepreneurship and innovation, business modeling, business planning, the Canvas business model, design thinking, and other related subjects. Secondly, we developed the framework for teaching entrepreneurship and innovation based on this bibliographic research. Thirdly, we tested the framework in a higher education IT management class for a semester. Finally, we detail the results in a case study of an IT management course. As important results, we improved the students' level of understanding of business administration, allowing them to manage their own affairs. Methods such as Canvas and the business plan helped students to plan and shape their ideas and businesses. Pitching to entrepreneurs and investors in the market brought reality to the students. The prototype allowed the company groups to develop their projects.
The proposed framework allows entrepreneurship and innovation education to leave the classroom, bringing the reality of business roundtables to the university with the support of investors and real entrepreneurs.
Keywords: entrepreneurship, innovation, Canvas, traditional model
Procedia PDF Downloads 576
183 Comparison of High Speed Railway Bridge Foundation Design
Authors: Hussein Yousif Aziz
Abstract:
This paper discusses the design and analysis of a bridge foundation subjected to train loads according to three codes, namely the AASHTO code, British Standard BS 8004 (1986), and the Chinese code (TB10002.5-2005). The study focused on the manual design and analysis of the bridge's foundation with the three codes, to determine which code is better for design and controls the problem of high settlement due to the applied loads. The results showed that the Chinese code is costly, in that the number of reinforcement bars in the pile cap and piles is greater than with the AASHTO code and BS code for the same dimensions. Settlement of the bridge was calculated using data collected from the project site. The vertical ultimate bearing capacity of a single pile under the three codes is also discussed. In further analyses using the two-dimensional Plaxis program and other programs such as SAP2000 14 and PROKON, many parameters were calculated. The maximum values of the vertical displacement are close to the calculated ones. The results indicate that the AASHTO code is economical and safer in terms of the bearing capacity of a single pile. The purpose of this project is to study the pier on the basis of the design of the pile foundation. There is a 32 m simply supported box-section beam on top of the structure. The bridge pier is of round type. The main component of the design is to calculate the pile foundation and the settlement. According to the related data, we chose bored piles 1.0 m in diameter and 48 m long. The piles are laid out in a rectangular pile cap. The dimensions of the cap are 12 m × 9 m. Because of the interaction factors of pile groups, the load-bearing capacity of a single pile must be checked; the punching resistance of the pile cap, the shearing strength of the pile cap, and the bending of the pile cap are all very important to the stability of the structure. The bearing capacity of the soft stratum under the pile foundation must also be checked.
This project provides a deeper analysis of and comparison between pile foundation design schemes. First, brief details of the construction situation of the bridge are given. Using the actual geological features of the site and the upper load on the bridge, this paper analyzes the bearing capacity and settlement of a single pile. The Equivalent Pier Method is used to calculate and analyze the settlements of the piles.
Keywords: pile foundation, settlement, bearing capacity, civil engineering
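As a minimal sketch of the single-pile check the abstract mentions (not the paper's actual calculation), the ultimate vertical capacity of a bored pile is commonly estimated as shaft friction plus end bearing; the soil layer values and unit end bearing below are invented for illustration.

```python
import math

def single_pile_capacity(diameter, layers, end_bearing):
    """
    Rough ultimate vertical bearing capacity of a single bored pile (kN):
    shaft resistance = perimeter * sum(unit shaft friction * layer thickness),
    plus end bearing = unit end resistance * pile tip area.
    """
    perimeter = math.pi * diameter
    tip_area = math.pi * diameter ** 2 / 4.0
    shaft = perimeter * sum(qs * t for qs, t in layers)
    return shaft + end_bearing * tip_area

# Assumed profile for the paper's 1.0 m x 48 m pile:
# (unit shaft friction in kPa, layer thickness in m), plus 800 kPa end bearing.
layers = [(40.0, 10.0), (55.0, 18.0), (70.0, 20.0)]
capacity_kN = single_pile_capacity(1.0, layers, 800.0)
```

The three design codes compared in the paper differ mainly in the partial factors and the tabulated unit resistances fed into a formula of this shape, which is why the reinforcement quantities they produce differ.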
Procedia PDF Downloads 421
182 Analysis of the Treatment of Hemorrhagic Stroke in Multidisciplinary City Hospital No. 1, Nur-Sultan
Authors: M. G. Talasbayen, N. N. Dyussenbayev, Y. D. Kali, R. A. Zholbarysov, Y. N. Duissenbayev, I. Z. Mammadinova, S. M. Nuradilov
Abstract:
Background: Hemorrhagic stroke is an acute cerebrovascular accident resulting from rupture of a cerebral vessel or increased permeability of the vessel wall and imbibition of blood into the brain parenchyma. Arterial hypertension is a common cause of hemorrhagic stroke. Male gender and age over 55 years are risk factors for intracerebral hemorrhage. Treatment of intracerebral hemorrhage is aimed at the primary pathophysiological link: the relief of coagulopathy and the control of arterial hypertension. Early surgical treatment can limit cerebral compression and prevent toxic effects of blood on the brain parenchyma. Despite progress in neuroimaging and the use of minimally invasive techniques and navigation systems, mortality from intracerebral hemorrhage remains high. Materials and methods: The study included 78 patients (62.82% male and 37.18% female) with a verified diagnosis of hemorrhagic stroke in the period from 2019 to 2021. The age of patients ranged from 25 to 80 years; the average age was 54.66±11.9 years. Demographic data, brain CT data (localization and volume of hematomas), methods of treatment, and disease outcome were analyzed. Results: The retrospective analysis demonstrates that 78.2% of all patients underwent surgical treatment: decompressive craniectomy in 37.7%, craniotomy with hematoma evacuation in 29.5%, and hematoma drainage in 24.59% of cases. The study of the proportion of deaths, depending on the volume of intracerebral hemorrhage, shows that the number of deaths was higher in the group with a hematoma volume of more than 60 ml. Evaluation of the relationship between time to surgery and mortality demonstrates that the most favorable outcome is observed with surgical treatment in the interval from 3 to 24 hours. Mortality by age did not differ significantly between age groups.
An analysis of the impact of surgery type on mortality reveals that decompressive craniectomy with or without hematoma evacuation led to an unfavorable outcome in 73.9% of cases, while craniotomy with hematoma evacuation and drainage led to mortality in only 28.82% of cases. Conclusion: Despite multimodal approaches, the development of surgical techniques and equipment, and the selection of optimal conservative therapy, the question of determining the tactics of managing and treating hemorrhagic strokes is still controversial. Nevertheless, our experience shows that surgical intervention within 24 hours of admission and craniotomy with hematoma evacuation improve the prognosis of treatment outcomes.
Keywords: hemorrhagic stroke, intracerebral hemorrhage, surgical treatment, stroke mortality
Procedia PDF Downloads 106
181 Determination of Influence Lines for Train Crossings on a Tied Arch Bridge to Optimize the Construction of the Hangers
Authors: Martin Mensinger, Marjolaine Pfaffinger, Matthias Haslbeck
Abstract:
The maintenance and expansion of the railway network represent a central task for transport planning in the future. In addition to the ultimate limit states, the aspects of resource conservation and sustainability are increasingly necessary to include in the basic engineering. Therefore, as part of the AiF research project 'Integrated assessment of steel and composite railway bridges in accordance with sustainability criteria', the entire lifecycle of engineering structures is involved in planning and evaluation, offering a way to optimize the design of steel bridges. In order to reduce the life cycle costs and increase the profitability of steel structures, it is particularly necessary to consider the demands on hanger connections resulting from fatigue. For accurate analysis, a number of simulations were conducted as part of the research project on a finite element model of a reference bridge, which gives an indication of the internal forces of the individual structural components of a tied arch bridge, depending on the stress incurred by various types of trains. The calculations were carried out on a detailed FE model, which allows extraordinarily accurate modeling of the stiffness of all parts of the construction, as it is made up of surface elements. The results point to a large impact of detailing on fatigue-related changes in stress, on the one hand; on the other, they depict construction-specific characteristics over the course of loading. Comparative calculations with varied axle-load distributions also provide information about the sensitivity of the stress-resultant development to the load application and axle distribution.
The calculated diagrams help to achieve an optimized hanger connection design with improved durability, which helps to reduce the maintenance costs of rail networks, and they give practical application notes for detailing.
Keywords: fatigue, influence line, life cycle, tied arch bridge
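For readers unfamiliar with influence lines, a closed-form textbook sketch (not the paper's FE model, which resolves the full tied-arch geometry) is the influence line for the bending moment at a section of a simply supported beam; the 32 m span below is only a convenient example length.

```python
def moment_influence_line(span, section, positions):
    """
    Influence line for the bending moment at `section` of a simply supported
    beam of length `span`, sampled for a unit load at each x in `positions`:
    M(x) = x * (span - section) / span  for x <= section,
    M(x) = section * (span - x) / span for x >  section.
    """
    a, L = section, span
    ordinates = []
    for x in positions:
        if x <= a:
            ordinates.append(x * (L - a) / L)
        else:
            ordinates.append(a * (L - x) / L)
    return ordinates

# Midspan moment influence line of an assumed 32 m span, sampled every 8 m.
line = moment_influence_line(32.0, 16.0, [0.0, 8.0, 16.0, 24.0, 32.0])
```

Convolving such an influence line with a train's axle loads and positions gives the stress history at the section as the train crosses, which is the quantity the fatigue assessment of the hangers is built on.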
Procedia PDF Downloads 328
180 A Conceptual Model of Preparing School Counseling Students as Related Service Providers in the Transition Process
Authors: LaRon A. Scott, Donna M. Gibson
Abstract:
Data indicate that counselor education programs in the United States do not prepare their students adequately to serve students with disabilities or to provide counseling as a related service. There is a need to train more school counselors to provide related services to students with disabilities for many reasons, but specifically because school counselors are participating in Individualized Education Program (IEP) and transition planning meetings for students with disabilities, where important academic, mental health, and post-secondary education decisions are made. While school counselors' input is perceived as very important to the process, they may not have the knowledge or training in this area to feel confident in offering the required input in these meetings. Using a conceptual research design, a model that can be used to prepare school counseling students as related service providers and effective supports to address transition for students with disabilities was developed as a component of this research. The authors developed the Collaborative Model of Preparing School Counseling Students as Related Service Providers to Students with Disabilities, based on a conceptual framework that integrates Social Cognitive Career Theory (SCCT) and evidence-based practices grounded in Self-Determination Theory (SDT) to provide related and transition services and planning with students with disabilities. The authors conclude that with five overarching competencies, (1) knowledge and understanding of disabilities, (2) knowledge and expertise in group counseling for students with disabilities, (3) knowledge and experience in specific related service components, (4) knowledge and experience in evidence-based counseling interventions, and (5) knowledge and experience in evidence-based transition and career planning services, school counselors can enter the field with the necessary expertise to adequately serve all students.
Other examples and strategies are suggested, and recommendations are included for preparation programs seeking to integrate a model to prepare school counselors to implement evidence-based transition strategies in supporting students with disabilities.
Keywords: transition education, social cognitive career theory, self-determination, counseling
Procedia PDF Downloads 243
179 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of CKD. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory data, laboratory data, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data.
The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
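The core design idea, training parallel models on different feature subsets so prediction still works when laboratory values are missing, can be sketched on synthetic data. This is purely illustrative: the cohort, features, labels, and the plain logistic regression below (standing in for the paper's Random Forest and XGBoost) are all assumptions.

```python
import math
import random

def train_logistic(rows, labels, lr=0.1, epochs=200):
    """Plain logistic regression via stochastic gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(w, b, rows, labels):
    correct = 0
    for x, y in zip(rows, labels):
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        correct += (z > 0) == (y == 1)
    return correct / len(rows)

# Synthetic cohort: features [age, BMI, creatinine] scaled to [0, 1];
# the "CKD" label is an invented linear function of all three.
random.seed(0)
rows, labels = [], []
for _ in range(400):
    age, bmi, cr = random.random(), random.random(), random.random()
    rows.append([age, bmi, cr])
    labels.append(1 if 0.8 * age + 0.4 * bmi + 1.2 * cr > 1.2 else 0)

full = train_logistic(rows, labels)                      # all features
nolab = train_logistic([r[:2] for r in rows], labels)    # drop creatinine
acc_full = accuracy(*full, rows, labels)
acc_nolab = accuracy(*nolab, [r[:2] for r in rows], labels)
```

The non-laboratory model stays usable when creatinine is unavailable, which mirrors the screening scenario the abstract targets; the paper's finding is that on its real cohort the gap between the two models was small.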
Procedia PDF Downloads 105
178 Analyzing the Place of Technology in Communication: Case Study of Kenya during COVID-19
Authors: Josephine K. Mule, Levi Obonyo
Abstract:
Technology has changed human life over time. The COVID-19 pandemic has altered the work set-up, the school system, the shopping experience, church attendance, and even the way athletes train in Kenya. Although the use of technology to communicate and maintain interactions has been on the rise in the last 30 years, the uptake during the COVID-19 pandemic has been unprecedented. Traditionally, 'paid' work has been considered to take place outside the home, but COVID-19 has resulted in what is now being referred to as "the world's largest work-from-home experiment", with up to 43 percent of employees working at least some of the time remotely. This study was conducted on 90 respondents from across remote work set-ups, school systems, merchants and customers of online shopping, church leaders and congregants, and athletes and their coaches. Data were collected by questionnaires and interviews that were conducted online. The data are based on the first three months since the first case of coronavirus was reported in Kenya. This study found that the use of technology is at the center of working remotely, with work interactions being propelled on various online platforms including Zoom, Microsoft Teams, and Google Meet, among others. The school system has also integrated the use of technology, including students defending their theses/dissertations online and university graduations being conducted virtually. Kenya is known for its long-distance runners; due to the directives to reduce interactions, coaches have taken to providing their athletes with training guidance on social media using applications such as WhatsApp. More local stores are now offering the online shopping option to their customers. Churches have also felt the brunt of the situation, especially because of the restrictions on crowds, resulting in online services becoming more popular in 2020 than ever before. Artists have innovatively started online musical concerts.
The findings indicate that one evident outcome of the COVID-19 period in Kenyan society is a population that uses technology more to communicate and get work done. Vices that have thrived in this season of increased technology use include the spreading of rumors on social media and cyberbullying. The place of technology seems to have been cemented by demand during this period.
Keywords: communication, coronavirus, COVID-19, Kenya, technology
Procedia PDF Downloads 139
177 Robotic Exoskeleton Response During Infant Physiological Knee Kinematics
Authors: Breanna Macumber, Victor A. Huayamave, Emir A. Vela, Wangdo Kim, Tamara T. Chamber, Esteban Centeno
Abstract:
Spina bifida is a type of neural tube defect that affects the nervous system and can lead to problems such as total leg paralysis. Treatment requires physical therapy and rehabilitation. Robotic exoskeletons have been used in rehabilitation to train muscle movement and assist in injury recovery; however, current models focus on adult populations and not on the infant population. The proposed framework aims to couple a musculoskeletal infant model with a robotic exoskeleton using vacuum-powered artificial muscles to provide rehabilitation to infants affected by spina bifida. The study that drove the input values for the robotic exoskeleton used motion capture technology to collect data from the spontaneous kicking movement of a 2.4-month-old infant lying supine. OpenSim was used to develop the musculoskeletal model, and inverse kinematics was used to estimate hip joint angles. A total of 4 kicks (A, B, C, D) were selected, and the selection was based on range, transient response, and stable response. Kicks had at least 5° of range of motion with a smooth transient response and a stable period. The robotic exoskeleton used a Vacuum-Powered Artificial Muscle (VPAM) whose structure comprised cells that were clipped in a collapsed state and unclipped when desired to accommodate the infant's age. The artificial muscle works with vacuum pressure: when air is removed, the muscle contracts, and when air is added, the muscle relaxes. Bench testing was performed using a 6-month-old infant mannequin. The previously developed exoskeleton performed well with controlled ranges of motion and frequencies, which are typical of rehabilitation protocols for infants suffering from spina bifida. However, the random kicking motion in this study contained high-frequency kicks, and the exoskeleton was not able to accurately replicate all the investigated kicks. Kick 'A' had a greater error when compared to the other kicks.
This study has the potential to advance the infant rehabilitation field.
Keywords: musculoskeletal modeling, soft robotics, rehabilitation, pediatrics
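To give a sense of the inverse-kinematics step, a drastically simplified planar stand-in for what OpenSim does over a full musculoskeletal model is to recover a joint angle from two marker positions; the coordinates and the single-plane assumption below are illustrative only.

```python
import math

def hip_flexion_angle(hip, knee):
    """
    Planar hip flexion angle (degrees) from 2D marker coordinates, measured
    as the inclination of the hip->knee (thigh) segment relative to the
    downward vertical: 0 deg means the thigh points straight down.
    """
    dx = knee[0] - hip[0]
    dy = knee[1] - hip[1]
    return math.degrees(math.atan2(dx, -dy))
```

Applied frame by frame to motion-capture data, a function of this shape yields the hip-angle trajectories from which kick range, transient response, and stable period can be read off.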
Procedia PDF Downloads 118
175 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks, in that the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy. This learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides us with a strong foundation to test our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results. When using the PPO algorithm, the reward is greater, and the acceleration, steering angle, and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have produced a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form: (all input values, acceleration, steering angle, brake, loss, reward).
This study can serve as a base for further complex road scenarios. Furthermore, it can be extended into the field of computer vision, using images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
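The simplest of the three algorithms compared, tabular Q-learning, can be demonstrated on a toy problem; this sketch is not the paper's TORCS setup (which uses continuous sensor inputs and deep networks), just the core update rule Q(s,a) += α(r + γ·max Q(s',·) − Q(s,a)) on an invented one-dimensional "track".

```python
import random

def q_learning_1d(track_len=6, episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """
    Minimal tabular Q-learning: states 0..track_len-1 along a 1-D track,
    actions 0 (move back) / 1 (move forward), reward 1 only on reaching
    the final cell. Uses epsilon-greedy exploration.
    """
    q = [[0.0, 0.0] for _ in range(track_len)]
    rng = random.Random(1)
    for _ in range(episodes):
        s = 0
        while s != track_len - 1:
            if rng.random() < eps:
                a = rng.randrange(2)                     # explore
            else:
                a = 1 if q[s][1] >= q[s][0] else 0       # exploit
            s2 = max(0, min(track_len - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == track_len - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_1d()
# Greedy policy for each non-terminal state: 1 means "drive forward".
policy = [1 if qa[1] >= qa[0] else 0 for qa in q[:-1]]
```

DDPG and PPO replace the table with neural networks and handle continuous actions (steering, throttle), which is why they, rather than tabular Q-learning, are usable on the full driving task.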
Procedia PDF Downloads 147
174 Risks and Values in Adult Safeguarding: An Examination of How Social Workers Screen Safeguarding Referrals from Residential Homes
Authors: Jeremy Dixon
Abstract:
Safeguarding adults forms a core part of social work practice. The Government in England and Wales has made efforts to standardise practices through the Care Act 2014. The Act states that local authorities have duties to make inquiries in cases where an adult with care or support needs is experiencing or at risk of abuse and is unable to protect themselves from abuse or neglect. Despite the importance given to safeguarding adults within the law, there remains little research about how social workers make such decisions on the ground. This presentation reports findings from a pilot research study conducted within two social work teams in a local authority in England. The objective of the project was to find out how social workers interpreted safeguarding duties as laid out by the Care Act 2014, with a particular focus on how workers assessed and managed risk. Ethnographic research methods were used throughout the project. This paper focuses specifically on decisions made by workers in the assessment team, reporting on qualitative observation and interviews with five workers within this team. Drawing on governmentality theory, this paper analyses the techniques used by workers to manage risk from a distance. A high proportion of safeguarding referrals came from care workers or managers in residential care homes. Social workers conducting safeguarding assessments were aware that they had a duty to work in partnership with these agencies; however, their duty to safeguard adults also meant that they needed to view them as potential abusers. In making judgments about when it was proportionate to refer for a safeguarding assessment, workers drew on a number of common beliefs about residential care workers, which were then tested in conversations with them. Social workers held the belief that residential homes acted defensively, leading them to report any accident or danger.
Social workers therefore encouraged residential workers to consider whether statutory criteria had been met and to use their own procedures to manage risk. In addition, social workers carried out an assessment of the workers' motives; specifically, whether they were using safeguarding procedures as a shortcut for avoiding other assessments or as a means of accessing extra resources. Where potential abuse was identified, social workers encouraged residential homes to use disciplinary policies as a means of isolating and managing risk. The study has implications for understanding risk within social work practice. It shows that whilst social workers use law to govern individuals, these laws are interpreted against cultural values; additionally, workers draw on assumptions about the culture of others.
Keywords: adult safeguarding, governmentality, risk, risk assessment
Procedia PDF Downloads 288
173 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range, and can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies used a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup involving only a small modification of a commercial bench-top micro-CT (computed tomography) scanner: introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. This approach, however, needs suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT.
A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, addressing the issue that neural networks require large amounts of training data to produce high-quality reconstructions.
Keywords: micro-CT, neural networks, reconstruction, speckle-based x-ray phase contrast
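The core of speckle tracking is finding how far a small patch of the speckle pattern shifts between the reference image (analyzer only) and the sample image, since that displacement encodes the phase gradient. A toy sketch of this idea follows; the window size, search range, and the random pattern standing in for a real speckle image are invented for the demo, and this is not the authors' algorithm.

```python
import numpy as np

# Reference speckle pattern (no sample) and a sample image that is
# simply a rigid shift of it; real data would show spatially varying
# displacements proportional to the local phase gradient.
rng = np.random.default_rng(42)
reference = rng.random((64, 64))
shift = (2, 3)                            # true (row, col) displacement
sample = np.roll(reference, shift, axis=(0, 1))

def track_window(ref, img, r0, c0, w=16, search=5):
    """Find the integer shift that best matches a w-by-w window."""
    win = ref[r0:r0 + w, c0:c0 + w]
    best, best_score = (0, 0), -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = img[r0 + dr:r0 + dr + w, c0 + dc:c0 + dc + w]
            # zero-normalised cross-correlation score
            a = win - win.mean()
            b = cand - cand.mean()
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b))
            if score > best_score:
                best, best_score = (dr, dc), score
    return best

print(track_window(reference, sample, 24, 24))
```

A real implementation would recover sub-pixel shifts (e.g. by fitting the correlation peak) and integrate the displacement field into a phase map before tomographic reconstruction.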
Procedia PDF Downloads 257
172 The Application of the Biopsychosocial-Spiritual Model to the Quality of Life of People Living with Sickle Cell Disease
Authors: Anita Paddy, Millicent Obodai, Lebbaeus Asamani
Abstract:
The management of sickle cell disease requires a multidisciplinary team for better outcomes. Thus, literature on the application of the biopsychosocial model to the management and explanation of chronic pain in sickle cell disease (SCD) and other chronic diseases abounds. However, there is limited research on the use of the biopsychosocial model together with a spiritual component (the biopsychosocial-spiritual model). The study investigated the extent to which healthcare providers utilized the biopsychosocial-spiritual model in the management of chronic pain to improve the quality of life (QoL) of patients with SCD. This study employed a descriptive survey design involving a consecutive sample of 261 patients with SCD, aged 18 to 79 years, who were accessing hematological services at the Clinical Genetics Department of the Korle Bu Teaching Hospital and who gave written consent to participate in the study. The theory of integrated quality of life, the gate control theory of pain, and the biopsychosocial(-spiritual) model were tested. An instrument for the biopsychosocial-spiritual model was developed on the basis of the literature reviewed, while the World Health Organisation Quality of Life BREF (WHOQoL-BREF) and the spirituality rating scale were adapted and used for data collection. Data were analyzed using descriptive statistics (means, standard deviations, frequencies, and percentages) and partial least squares structural equation modeling. The study revealed that healthcare providers leaned heavily toward the biological domain of the model compared to the other domains; hence, participants' QoL was not fully improved as suggested by the biopsychosocial(-spiritual) model. Again, the QoL and spirituality of patients with SCD were quite high. A significant negative impact of spirituality on QoL was also found.
Finally, the biosocial domain of the biopsychosocial-spiritual model was the most significant predictor of QoL. It was recommended that policymakers train healthcare providers to integrate the psychosocial-spiritual component into health services. Also, education on SCD and its resultant impact across the domains of the model should be intensified, while health practitioners consider utilizing these components fully in the management of the condition.
Keywords: biopsychosocial (spiritual), sickle cell disease, quality of life, healthcare, accra
Procedia PDF Downloads 73
171 Factors Affecting the Operations of Vocational and Technical Training Institutions in Zambia: A Case of Lusaka and Southern Provinces in Zambia
Authors: Jabulani Mtshiya, Yasmin Sultana-Muchindu
Abstract:
Technical and Vocational Education (TVE) is the platform on which developed nations have built their economic foundations, leading them to attain high standards of living; nations such as China, the United States of America, and several European nations are examples. Zambia has put up educational systems aimed at empowering its citizens and building the economy. Despite having programs in Technical and Vocational Education, however, the Zambian economy still lags, with industry contributing meagerly to Gross Domestic Product. This study addresses the significance of Technical and Vocational Education and how it can improve the livelihood of citizens. It addresses aspects of development and productivity and highlights the problems faced by learners in Lusaka and Southern provinces in Zambia. The study employed a qualitative research design for data collection, and a method of descriptive data analysis was used in order to describe the prevailing state of affairs in TVE from the perspective of learners. This meant that the respondents indicated their views of and attitudes toward TVE. The study collected information through research questionnaires. The findings showed that TVE is regarded as important by government and various stakeholders, and that it is also regarded as important by learners. The findings also showed that stakeholders and society need to pay particular attention to the development of TVE in order to improve the livelihood of citizens and the national economy. Just like other nations that used TVE to develop their industries, Zambia has the potential to train its youth and equip them with the skills they require to contribute positively to the growth of industries and the economy. Deliberate steps need to be taken by the government and stakeholders to apply and make firm the TVE policies that have been laid down.
At the end of the study, recommendations were made: that government should put the right measures in place in order to harness the potential at hand; that this research should be carried out at the national level and also conducted using quantitative research methods; and that government should be consistent in its obligations to fund and maintain TVE institutions so that they are able to operate effectively.
Keywords: education, technical, training, vocational
Procedia PDF Downloads 161
170 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative for feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature set in each of the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction.
These results were confirmed using the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best result was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
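A minimal Smith-Waterman local-alignment scorer, the dynamic programming algorithm the abstract applies to token sequences, can be sketched as below. The scoring scheme (match +2, mismatch -1, gap -1) is a common textbook choice, not necessarily the one used in the study, and the example documents are invented.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # local alignment: scores are clamped at zero, so an alignment
            # can start and end anywhere in either sequence
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# The same routine works on word tokens just as on characters, which is
# what makes it usable as a document-similarity feature:
doc1 = "patient has morbid obesity and diabetes".split()
doc2 = "history of morbid obesity with diabetes".split()
print(smith_waterman(doc1, doc2))
```

Because the score is clamped at zero, shared local regions such as "morbid obesity ... diabetes" dominate the score even when the surrounding text differs, which is the property exploited for feature extraction here.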
Procedia PDF Downloads 297
169 Implementing a Screening Tool to Assist with Palliative Care Consultation in Adult Non-ICU Patients
Authors: Cassey Younghans
Abstract:
Background: Current health care trends show an increasing number of patients being hospitalized with complex comorbidities. These complex needs require advanced therapies, and treatment goals often focus on doing everything possible to prolong life rather than on the individual patient's quality of life, which is the goal of palliative care efforts. Patients benefit from palliative care in the early stages of an illness rather than after the disease has progressed or the state of acuity has advanced. The clinical problem identified was that palliative care was not being implemented early enough in the disease process for patients who had complex medical conditions and who would benefit from the philosophy and skills of palliative care professionals. Purpose: The purpose of this quality improvement study was to increase the number of palliative care screenings and consults completed on adults after admission to one non-ICU, non-COVID hospital unit. Methods: A retrospective chart review assessing for possible missed opportunities to introduce palliation was performed for adults over the age of 19 admitted to one medical-surgical unit over a three-month period prior to the intervention, covering six primary diagnoses: heart failure, liver failure, end stage renal disease, chronic obstructive pulmonary disease, cerebrovascular accident, and cancer. The researcher conducted an educational session with the nurses on the benefits of palliative care, and a screening tool was implemented. The expected outcome was an increase in early palliative care consultation for patients with complex comorbid conditions and a decrease in missed opportunities for the implementation of palliative care. Another retrospective chart review was completed at the end of the three-month pilot of the tool.
Results: During the initial retrospective chart review, 46 patients were admitted to the medical-surgical floor with the primary diagnoses identified in the inclusion criteria; six of these patients had palliative care consults completed during that time. Twenty-two palliative care screening tools were completed during the intervention period. Of those, 15 patients scored a 7 or higher, suggesting that a palliative care consultation was warranted. The final retrospective chart review identified that 4 palliative care consults were completed among the 31 patients admitted over the three-month intervention period. Conclusion: Educating nurses and implementing a palliative care screening upon admission can be of great value in providing early identification of patients who might benefit from palliative care. Recommendations: this screening tool should be used to help identify the patients who would benefit from a palliative care consult, and nurses should be able to initiate a palliative care consultation themselves.
Keywords: palliative care, screening, early, palliative care consult
Procedia PDF Downloads 152
168 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars
Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic
Abstract:
Electric cars, with their fast innovation cycles and disruptive character, offer a high degree of freedom regarding innovative design for remanufacturing. Remanufacturing increases not only resource efficiency but also economic efficiency through a prolonged product lifetime. The reduced power train wear of electric cars, combined with high manufacturing costs for batteries, allows new business models and even second-life applications. Modular battery packs designed with interchangeable components enable the replacement of defective or outdated battery cells, allowing additional cost savings and a prolonged lifetime. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components, and how to address their potential with elaborate designs. Based on a brief overview of remanufacturing structures implemented in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, cost-benefit analyses indicate that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated on a broad database drawn from different research projects focusing on the recycling, second use, and assembly of lithium ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially in the consumption of resources and global warming potential.
Suitable design guidelines for future remanufacturable lithium ion batteries, which consider modularity, interfaces, and disassembly, are used to illustrate the findings. For one guideline, potential cost improvements were calculated and upcoming challenges are pointed out.
Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing
Procedia PDF Downloads 358
167 A Single-Use Endoscopy System for Identification of Abnormalities in the Distal Oesophagus of Individuals with Chronic Reflux
Authors: Nafiseh Mirabdolhosseini, Jerry Zhou, Vincent Ho
Abstract:
The dramatic global rise in acid reflux has also led to oesophageal adenocarcinoma (OAC) becoming the fastest-growing cancer in developed countries. While gastroscopy with biopsy is used to diagnose OAC patients, this labour-intensive and expensive process is not suitable for population screening. This study aims to design, develop, and implement a minimally invasive system to capture optical data of the distal oesophagus for rapid screening of potential abnormalities. To develop the system and understand user requirements, a user-centric approach was employed utilising co-design strategies. Target user segments were identified, and 38 patients and 14 health providers were interviewed. Next, the technical requirements were developed based on consultations with industry. A minimally invasive optical system was designed and developed with patient comfort in mind. The system consists of a sensing catheter, a controller unit, and an analysis program. The procedure takes only 10 minutes to perform and requires no cleaning afterwards, since the catheter is single-use. A prototype system was evaluated for safety and efficacy in both laboratory and clinical settings. The prototype performed successfully when submerged in simulated gastric fluid, showing no evidence of erosion after 24 hours. The system effectively recorded a video of the mid-distal oesophagus of a healthy volunteer (34-year-old male). The recorded images were used to develop an automated program to identify abnormalities in the distal oesophagus; further data from a larger clinical study will be used to train this program. The system allows for quick visual assessment of the lower oesophagus in primary care settings and can serve as a screening tool for oesophageal adenocarcinoma. In addition, it can be coupled with 24-hour ambulatory pH monitoring to better correlate oesophageal physiological changes with reflux symptoms.
It can also provide additional information on lower oesophageal sphincter functions, such as opening times and bolus retention.
Keywords: endoscopy, MedTech, oesophageal adenocarcinoma, optical system, screening tool
Procedia PDF Downloads 87
166 Quality Assessment of the Given First Aid on the Spot Events in the Opinion of Members of the Teams of the Medical Rescue in Warsaw in Poland
Authors: Aneta Binkowska, Artur Kamecki
Abstract:
The ability to provide first aid should be one of the basic skills of each of us. First aid, under the Law on National Medical Emergency of 8 September 2006 (as amended), is a set of actions undertaken to save a person at the scene of an accident. In Poland, under Article 162 of the Criminal Code, we are obliged to provide first aid to the victim. In addition, according to a large part of society, unselfishness towards others in need of help is our moral obligation. The aim of the study was to learn the opinion of the members of Medical Rescue Teams (MRT) of the 'Meditrans' Provincial Ambulance and Sanitary Transport Service (PA and STS 'Meditrans') in Warsaw on how people react in real situations threatening the life or health of an injured person. The study was conducted in the third quarter of 2015 on 335 members of medical rescue teams, including 77 women and 258 men, who provided medical services in the 'Meditrans' MRTs in Warsaw. The research tool was an anonymous questionnaire of the authors' own design, which consisted of 12 questions: closed, half-open, and one open question. Respondents were divided into 3 age groups and by individual medical professions (doctor, paramedic, nurse). The clear majority of respondents had encountered first aid being given at the scene of an event; however, they assessed the frequency of such situations as low. First aid had most often been given in the street and in homes. The final element examined was the reason bystanders do not provide first aid, in the opinion of members of the medical teams. Respondents to this open question were asked to name the reason for not taking any action while waiting for an ambulance. Over 50% of respondents could not answer. The most common answers were: fear, lack of knowledge and skills, reluctance, indifference, lack of training, lack of experience, and fear of causing harm.
Conclusion: The majority of respondents had encountered instances of first aid provision, but they assessed the frequency of such situations as low. Placing the victim in the recovery position is the simplest and most common form of first aid. Training should therefore be introduced not only on CPR but also on helping persons in a sudden health emergency who do not have a sudden cardiac arrest. The main conclusion of the analysis is that only continuous education, and in particular practical training, will help people overcome the barrier of their limitations in order to help others. The largest groups of witnesses providing first aid are the elderly and youth, who are exposed to various forms of education related to first aid provision.
Keywords: BLS, first aid, medical rescue, resuscitation
Procedia PDF Downloads 152
165 A Review on the Vulnerability of Rural-Small Scale Farmers to Insect Pest Attacks in the Eastern Cape Province, South Africa
Authors: Nolitha L. Skenjana, Bongani P. Kubheka, Maxwell A. Poswal
Abstract:
The Eastern Cape Province of South Africa is characterized by subsistence farming, mostly distributed in the rural areas of the province. It is estimated that cereal crops such as maize and sorghum, and vegetables such as cabbage, are grown in more than 400,000 rural households, with maize being the most dominant crop. However, compared to commercial agriculture, small-scale farmers receive minimal support from research and development, limited technology transfer on the latest production practices and systems, and have poor production infrastructure and equipment. Similarly, farmers have a limited appreciation of best practices in insect pest management and control. The paper presents findings from the primary literature and personal observations on insect pest management practices of small-scale farmers in the province. Inferences from the literature and personal experiences in the production areas have led to a number of deductions regarding the level of exposure and extent of vulnerability. Farmers' pest management practices, which sometimes included taking no control action at all despite a pest problem, left their crop stands more vulnerable to pest attacks. This became more evident with the recent brown locust, African armyworm, and Fall armyworm outbreaks, and with incidences of opportunistic phytophagous insects, previously collected only on wild hosts, causing serious damage to crops. In most of these occurrences, damage to crops resulted in low or no yield. Improvements in farmers' reaction and response to pest problems were only observed in areas where focused awareness campaigns and training on specific pests and their management techniques had been conducted.
This calls for a concerted effort from all role players in the sphere of small-scale crop production to train and equip farmers with relevant skills, and to provide them with information on affordable and climate-smart strategies and technologies in order to create a state of preparedness. This is necessary for the prevention of substantial crop losses that may exacerbate food insecurity in the province.
Keywords: Eastern Cape Province, small-scale farmers, insect pest management, vulnerability
Procedia PDF Downloads 140
164 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over recent decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentration with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the seas around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), to build the database, capture the optical characteristics of algae, and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. The deep learning model was trained with a backpropagation learning strategy. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration despite the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
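The two-stage pipeline described above (convolutional features from multi-band radiance patches, then a regression stage mapping features to concentration) can be sketched in miniature. Everything here is invented for illustration: a single fixed smoothing filter stands in for learned CNN filters, a least-squares fit stands in for the trained ANN, and the synthetic "concentration" rule is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_feature(band, kernel):
    """Mean response of a valid 2-D convolution over one band."""
    kh, kw = kernel.shape
    h, w = band.shape
    out = [(band[i:i + kh, j:j + kw] * kernel).sum()
           for i in range(h - kh + 1) for j in range(w - kw + 1)]
    return float(np.mean(out))

kernel = np.ones((3, 3)) / 9.0            # smoothing filter as a stand-in

# Synthetic dataset: 3-band 8x8 radiance patches whose hidden
# "concentration" is a linear function of the band features plus noise.
X, y = [], []
for _ in range(200):
    patch = rng.random((3, 8, 8))
    feats = [conv_feature(patch[b], kernel) for b in range(3)]
    X.append(feats)
    y.append(4.0 * feats[0] - 2.0 * feats[1] + feats[2] + rng.normal(0, 0.01))
X, y = np.asarray(X), np.asarray(y)

# Regression stage: least squares standing in for the ANN.
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mae = float(np.abs(A @ coef - y).mean())
print(round(mae, 3))
```

In the real system both stages are learned jointly by backpropagation and evaluated against GDPS retrievals; this sketch only shows how extracted features feed the concentration regressor.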
Procedia PDF Downloads 183
163 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images
Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang
Abstract:
Continuous monitoring of forest canopy height over large areas is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure such as canopy height. However, LiDAR's coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they do not carry height information. Hence, integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model to predict and validate the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained RFR and CNN models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment.
The validation results show that for 2018 the mean absolute error (MAE) of the RFR model is 2.93 m and that of the CNN model is 1.71 m, while for 2021 the MAE of the RFR model is 3.35 m and that of the CNN model is 3.78 m. These results demonstrate the feasibility of using the RFR and CNN models developed in this research to predict large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network
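The RFR stage of the workflow above can be sketched as a per-pixel regression: Sentinel-2 band values as predictors, LiDAR-derived canopy height as the target, MAE as the validation metric. The data below are entirely synthetic (the band values and the NDVI-based height rule are invented); the study trains on real Sentinel-2 pixels against airborne-LiDAR canopy height models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
bands = rng.random((n, 4))                 # stand-in for e.g. B2, B3, B4, B8
ndvi = (bands[:, 3] - bands[:, 0]) / (bands[:, 3] + bands[:, 0] + 1e-9)
# Synthetic canopy-height target: a smooth function of NDVI plus noise,
# playing the role of the LiDAR-derived CHM ground truth.
height = 20.0 * ndvi + 5.0 + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(bands, height, random_state=0)
rfr = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Validation against held-out "LiDAR" heights, as in the accuracy
# assessment described above.
mae = float(np.abs(rfr.predict(X_te) - y_te).mean())
print(round(mae, 2))
```

On real imagery the predictors would be the co-registered 10 m band stack (possibly with derived indices), and the held-out comparison would be against the airborne-LiDAR CHM pixels.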
Procedia PDF Downloads 92
162 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to uncertainties in the initial reservoir models. Therefore, it is important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models whose well oil production rates (WOPR) are the most similar or dissimilar to the true values (10% for each). The remaining 80% of the models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models that share the geological trend of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results; it fails to identify the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by capturing the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties. 
We propose a novel classification scheme that integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models that have a channel trend similar to the reference in the lowered-dimensional space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
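The PCA-MDS-SVM pipeline above can be sketched in a few lines. Everything here is a stand-in: random vectors play the role of the 100 flattened permeability fields, and a random vector plays the role of each model's WOPR mismatch with the true response; only the pipeline structure follows the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# stand-ins for the ensemble: 100 "reservoir models" as flattened
# permeability fields, plus a synthetic WOPR error of each model vs. the truth
models = rng.normal(size=(100, 400))
wopr_err = rng.uniform(0.0, 1.0, size=100)

# Step 1: PCA keeps the principal components with eigenvalues of large magnitude
pcs = PCA(n_components=10).fit_transform(models)

# Step 2: project to a two-dimensional plane with MDS on Euclidean distances
xy = MDS(n_components=2, random_state=0).fit_transform(pcs)

# Step 3: label the 10 most similar and 10 most dissimilar models by WOPR
# error (20% of the ensemble) and train an SVM on the 2-D coordinates
order = np.argsort(wopr_err)
idx = np.r_[order[:10], order[-10:]]
labels = np.r_[np.ones(10), np.zeros(10)]   # 1 = low WOPR error
clf = SVC(kernel="rbf").fit(xy[idx], labels)

# classify the remaining 80 models and keep those on the low-error side
rest = np.setdiff1d(np.arange(100), idx)
selected = rest[clf.predict(xy[rest]) == 1]
```

The averaged permeability field of `selected` would then serve as the probability map for regenerating new channel models.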
Procedia PDF Downloads 160
161 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence
Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács
Abstract:
The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility
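A sample path of the fOU process mentioned above can be generated as follows. This sketch uses a Cholesky factorization of the fractional Gaussian noise covariance, which is exact but O(n³) and therefore only suitable for illustration-scale paths; the abstract notes that training-scale generation required a purpose-built fast algorithm. All parameter values below are ours, not the paper's.

```python
import numpy as np

def fgn_cholesky(n, H, rng):
    """Sample n steps of fractional Gaussian noise (unit-step fBm increments)
    via Cholesky factorization of its autocovariance matrix."""
    k = np.arange(n)
    gamma = 0.5 * ((k + 1.0)**(2 * H) - 2.0 * k**(2 * H) + np.abs(k - 1.0)**(2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # tiny jitter for stability
    return L @ rng.standard_normal(n)

def simulate_fou(n, dt, H, theta, mu, sigma, x0, rng):
    """Euler scheme for the fractional Ornstein-Uhlenbeck SDE
    dX_t = theta * (mu - X_t) dt + sigma dB^H_t."""
    incr = sigma * dt**H * fgn_cholesky(n, H, rng)  # fBm increments over dt
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + incr[i]
    return x

# a short rough path (H < 1/2), as used for rough volatility/correlation
path = simulate_fou(n=500, dt=0.01, H=0.1, theta=1.0, mu=0.0, sigma=0.5,
                    x0=0.0, rng=np.random.default_rng(1))
```

A neural estimator in the spirit of the abstract would be trained on many such paths, with (H, theta, sigma) as the regression targets.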
Procedia PDF Downloads 118
160 End-Users Tools to Empower and Raise Awareness of Behavioural Change towards Energy Efficiency
Authors: G. Calleja-Rodriguez, N. Jimenez-Redondo, J. J. Peralta Escalante
Abstract:
This research work aims at developing a solution to exploit the potential energy savings related to occupant behaviour, estimated at between 5-30% according to existing studies. For that purpose, the following methodology has been followed: 1) literature review and gap analysis, 2) definition of the concept and functional requirements, and 3) evaluation and feedback by experts. As a result, the concept has been defined for a tool-box that implements continuous behaviour change interventions, named engagement methods, based on increasing energy literacy, increasing energy visibility, using bonus systems, etc. These engagement methods are deployed through a set of ICT tools: Building Automation and Control System (BACS) add-on services installed in buildings and user apps installed on smartphones, smart TVs, or dashboards. The tool-box, called eTEACHER, identifies energy conservation measures (ECMs) based on energy behavioural change through a what-if analysis that collects information about the building and its users (comfort feedback, behaviour, etc.) and carries out cost-effectiveness calculations to provide outputs such as efficient control settings for building systems. This information is processed and shown in an attractive way as tailored advice to the energy end-users. eTEACHER's goal, therefore, is to change the behaviour of building energy users towards energy efficiency, comfort, and better health conditions by deploying customized ICT-based interventions that take into account building typology (schools, residential, offices, health care centres, etc.) and user profile (occupants, owners, facility managers, employers, etc.), as well as cultural and demographic factors. One of the main findings of this work is that technological interventions on behavioural change commonly fail to consult, train, and support users regarding the technological changes, leading to poor performance in practice. 
In conclusion, a strong need has been identified to carry out social studies to identify relevant behavioural issues and effective pro-environmental behavioural change strategies.
Keywords: energy saving, behavioral change, building users, engagement methods, energy conservation measures
Procedia PDF Downloads 170
159 A Study on the Establishment of Performance Evaluation Criteria for MR-Based Simulation Device to Train K-9 Self-Propelled Artillery Operators
Authors: Yonggyu Lee, Byungkyu Jung, Bom Yoon, Jongil Yoon
Abstract:
MR-based simulation devices have recently been used in various fields such as entertainment, medicine, manufacturing, and education. Different simulation devices are also being developed for military equipment training, to address concerns regarding safety accidents as well as the cost of training with expensive equipment. An important aspect of developing simulation devices that replicate military training is that trainees should experience the same effect as training with real devices. In this study, criteria for performance evaluation are established to compare the training effect of an MR-based simulation device with that of an actual device. K-9 self-propelled artillery (SPA) operators are selected as the training subjects. First, MR-based software is developed to simulate the training ground and training scenarios currently used for training SPA operators in South Korea. Hardware that replicates the interior of the SPA is designed, and a simulation device linked to the software is developed. Second, criteria are established to evaluate the simulation device based on real-life training scenarios. A total of nine performance evaluation criteria were selected from the actual SPA operation training scenarios, chosen to assess whether the simulation device was designed such that trainees would experience the same effect as training in the field with a real SPA. To evaluate how closely the simulation device replicates the actual training environments (driving and passing through trenches, pools, protrusions, vertical obstacles, and slopes) and driving conditions (rapid steering, rapid acceleration, and rapid braking) specified in the training scenarios, tests were performed under the actual training conditions and in the simulation device, followed by a comparison of the results. In addition, the level of noise experienced by operators during training was also selected as an evaluation criterion. 
Due to the nature of the simulation device, there may be data latency between the hardware (HW) and software (SW). If the latency in data transmission is significant, the VR image information delivered to trainees as they maneuver the HW might not be consistent. This latency in data transmission was therefore also selected as an evaluation criterion to improve the effectiveness of the training. Through this study, the key evaluation metrics were selected to achieve the same training effect as training with real equipment in a training ground during the development of the simulation device for military equipment training.
Keywords: K-9 self-propelled artillery, mixed reality, simulation device, synchronization
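The HW-to-SW latency criterion above can be quantified by pairing timestamped control inputs on the hardware mock-up with the VR frame updates they trigger. The sketch below is a hypothetical illustration of that idea, assuming both sides log timestamps against a shared clock; the function name and the sample timestamps are ours.

```python
import bisect

def mean_latency_ms(hw_event_times, sw_frame_times):
    """Pair each timestamped HW control input with the first SW frame update
    at or after it, and return the mean HW-to-SW latency in milliseconds."""
    lags = []
    for t in hw_event_times:
        i = bisect.bisect_left(sw_frame_times, t)  # first frame not before t
        if i < len(sw_frame_times):
            lags.append(sw_frame_times[i] - t)
    return 1000.0 * sum(lags) / len(lags)

# hypothetical timestamps (seconds) logged on a shared clock
hw_log = [0.00, 0.10, 0.20]   # steering/accelerator inputs on the mock-up
sw_log = [0.03, 0.12, 0.25]   # corresponding VR frame updates
latency = mean_latency_ms(hw_log, sw_log)
```

A threshold on this mean (and on its variance) would then serve as the pass/fail condition for the latency evaluation criterion.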
Procedia PDF Downloads 66
158 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea
Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng
Abstract:
During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, loss or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered (‘overlooked’) by the AMR’s sensors even though the AMR is in its close neighborhood, and (ii) a 'false-positive' detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. 
A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before the next location is selected. We provide a fast approximation algorithm for finding the AMR route that adopts a greedy search strategy: at each step, the on-board computer computes a current search effectiveness value for each location in the zone and searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and corresponding algorithm.
Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea
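The history dependence described above can be sketched in a minimal single-target version of the greedy strategy: each unsuccessful look triggers a Bayesian update of the location probabilities, so every later effectiveness value depends on the whole search history. This is our illustration, not the paper's algorithm: it omits false alarms, assumes one stationary target, and the effectiveness score and all parameter values are invented for the example.

```python
import numpy as np

def greedy_search_route(prior, detect_prob, cost, n_steps):
    """Greedy AMR route: at each step search the location with the highest
    effectiveness = posterior probability * detection probability / cost,
    then apply a Bayesian update for the unsuccessful look."""
    p = np.asarray(prior, dtype=float).copy()
    detect_prob = np.asarray(detect_prob, dtype=float)
    cost = np.asarray(cost, dtype=float)
    route = []
    for _ in range(n_steps):
        j = int(np.argmax(p * detect_prob / cost))
        route.append(j)
        # posterior given the target was NOT found at j: the overlook
        # probability (1 - alpha_j) shrinks p_j, and all mass renormalizes
        pj, aj = p[j], detect_prob[j]
        miss = 1.0 - pj * aj            # total probability of a miss at j
        p /= miss
        p[j] = pj * (1.0 - aj) / miss
    return route, p

# illustrative numbers (ours, not the paper's): 3 locations, equal search cost
route, posterior = greedy_search_route(prior=[0.5, 0.3, 0.2],
                                       detect_prob=[0.8, 0.8, 0.8],
                                       cost=[1.0, 1.0, 1.0], n_steps=4)
```

Note how the route revisits location 0 once the repeated misses elsewhere have shifted the posterior mass back toward it, which is exactly the history dependence the model captures.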
Procedia PDF Downloads 170