Search results for: geographic referenced information
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11097


297 We Have Never Seen a Dermatologist. Prisons Telederma Project Reaching the Unreachable Through Teledermatology

Authors: Innocent Atuhe, Babra Nalwadda, Grace Mulyowa, Annabella Habinka Ejiri

Abstract:

Background: Atopic Dermatitis (AD) is one of the most prevalent and growing chronic inflammatory skin diseases in African prisons. AD care is limited in Africa due to a lack of information about the disease amongst primary care workers, limited access to dermatologists, lack of proper training of healthcare workers, and a shortage of appropriate treatments. We designed and implemented the Prisons Telederma project based on the recommendations of the International Society of Atopic Dermatitis. We aimed to: i) increase awareness and understanding of teledermatology among prison health workers, and ii) improve treatment outcomes of prisoners with atopic dermatitis through increased access to and utilization of consultant dermatologists via teledermatology in Uganda prisons. Approach: We used store-and-forward teledermatology (SAF-TD) to increase access to dermatologist-led care for prisoners and prison staff with AD. We conducted five days of training for prison health workers using an adapted WHO training guide on recognizing neglected tropical diseases through changes on the skin, together with an adapted American Academy of Dermatology (AAD) Childhood AD Basic Dermatology Curriculum designed to help trainees develop a clinical approach to the evaluation and initial management of patients with AD. This training was followed by blended e-learning; webinars facilitated by consultant dermatologists with local knowledge of medication and local practices; apps adjusted for pigmented skin; WhatsApp group discussions; and sharing of pigmented-skin AD pictures and treatment via Zoom meetings. We hired a team of Ugandan senior consultant dermatologists to draft an iconographic atlas of the main dermatoses in pigmented African skin and shared this atlas with prison health staff for use as a job aid.
We had planned to use the MySkinSelfie mobile phone application to take and share skin pictures of prisoners with AD with consultant dermatologists, who would review the pictures and prescribe appropriate treatment. Unfortunately, the National Health Service withdrew the app from the market due to technical issues. We monitored and evaluated treatment outcomes using the Patient-Oriented Eczema Measure (POEM) tool. We held four advocacy meetings to persuade relevant stakeholders to increase supplies and availability of first-line AD treatments, such as emollients, in prison health facilities. Results: We produced the very first iconographic atlas of the main dermatoses in pigmented African skin. In one year, we increased: i) the proportion of prison health staff with adequate knowledge of AD and teledermatology from 20% to 80%; ii) the proportion of prisoners with AD reporting improvement in disease severity (POEM scores) from 25% to 35%; iii) the proportion of prisoners with AD seen by a consultant dermatologist through teledermatology from 0% to 20%; and iv) the availability of recommended AD treatments in prison health facilities from 5% to 10%. Our study contributes to the use, evaluation, and verification of teledermatology as a means of extending specialist dermatology services to hard-to-reach areas and vulnerable populations such as prisoners.
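As a rough illustration of how outcome monitoring with the POEM tool works, the sketch below sums the seven POEM items (each scored 0-4 by days affected in the past week) into a 0-28 total and maps it onto the published severity bands; the example item scores are hypothetical, not project data.

```python
# POEM severity bands: (upper bound of band, label), per the published tool.
POEM_BANDS = [(2, "clear or almost clear"), (7, "mild"),
              (16, "moderate"), (24, "severe"), (28, "very severe")]

def poem_score(item_scores):
    """Sum the seven 0-4 item scores into a 0-28 POEM total."""
    assert len(item_scores) == 7 and all(0 <= s <= 4 for s in item_scores)
    return sum(item_scores)

def poem_band(total):
    """Map a POEM total to its severity band label."""
    for upper, label in POEM_BANDS:
        if total <= upper:
            return label

# Hypothetical patient: items scored from a 7-day recall.
total = poem_score([3, 2, 2, 1, 0, 1, 2])
band = poem_band(total)
```

A fall in the total between visits (e.g. from "severe" into "moderate") is the kind of improvement the project reports as a change in POEM scores.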

Keywords: teledermatology, prisoners, reaching, un-reachable

Procedia PDF Downloads 101
296 The Effect of Photochemical Smog on Respiratory Health Patients in Abuja Nigeria

Authors: Christabel Ihedike, John Mooney, Monica Price

Abstract:

Summary: This study aims to critically evaluate the effect of photochemical smog on respiratory health in Nigeria. A cohort of chronic obstructive pulmonary disease (COPD) patients was recruited from two large hospitals in Abuja, Nigeria. Respiratory health questionnaires, daily diaries, a dyspnoea scale, and lung function measurements were used to obtain health data and investigate the relationship with air quality data (principally ozone, NOx, and particulate pollution). Concentrations of air pollutants were higher than WHO and Nigerian air quality standards. The results suggest a correlation between measured air quality and exacerbation of respiratory illness. Introduction: Photochemical smog is a significant health challenge in most cities, and its effect on respiratory health is well acknowledged. This type of pollution is most harmful to the elderly, children, and those with underlying respiratory disease. This study aims to investigate the impact of increasing temperature and photochemically generated secondary air pollutants on respiratory health in Abuja, Nigeria. Method and Results: Health data were collected using spirometry to measure lung function on routine attendance at the clinic, daily diaries kept by patients, and information obtained using a respiratory questionnaire. Questionnaire responses (obtained using an adapted and internally validated version of St George's Hospital Respiratory Questionnaire) showed that 'time of wheeze' was associated with participants' activities: 30% had worse wheeze in the morning; 10% could not shop; 15% took a long time to get washed; 25% walked more slowly; 15% had to stop if hurrying; and 5% could not take a bath. There was also a decrease in forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC), and the daily afternoon-morning change may be associated with pollutant concentration levels. Dyspnoea records showed that 60% of patients were at grade 3, 25% at grade 2, and 15% at grade 1.
The daily proportion of patients in the cohort who coughed or brought up sputum was 78%. Air pollution in the city is higher than Nigerian and WHO standards, with measured NOx and PM10 concentrations of 693.59 µg/m³ and 748 µg/m³ respectively. The results show that air pollution may increase the occurrence and exacerbation of respiratory disease. Conclusion: High temperatures and local climatic conditions in urban Nigeria encourage the formation of ozone, the major constituent of photochemical smog, resulting also in the formation of secondary air pollutants associated with health challenges. In this study, we confirm the likely potency of this pattern of secondary air pollution in exacerbating COPD symptoms in a vulnerable patient group in urban Nigeria. There is a need for better regulation and measures to reduce ozone, particularly when local climatic conditions favour the development of photochemical smog in such settings. Climate change and likely increasing temperatures add impetus and urgency for better air quality standards and measures (traffic restrictions and emissions standards) in developing-world settings such as Nigeria.

Keywords: Abuja-Nigeria, effect, photochemical smog, respiratory health

Procedia PDF Downloads 224
295 The Senior Traveler Market as a Competitive Advantage for the Luxury Hotel Sector in the UK Post-Pandemic

Authors: Feyi Olorunshola

Abstract:

Over the last few years, the senior travel market has been noted for its potential in the wider tourism industry. The tourism sector includes hotels and hospitality, travel, transportation, and several other subdivisions that make it economically viable. In particular, hotels attract a substantial part of tourism expenditure: when people plan to travel, suitable accommodation for relaxation, dining, entertainment, and so on is paramount to their decision-making. The global retail value of the hotel sector as of 2018 was significant for tourism. Yet despite the importance of hotels to the tourism industry at large, very few empirical studies are available to establish how this sector can leverage the senior demographic to achieve competitive advantage. Predominantly, studies on the mature market have focused on destination tourism, with limited investigation of the hotel sector despite its significant contribution to tourism. Also, while several scholarly studies have demonstrated the importance of the senior travel market to hotels, very little empirical research in the field has explored the driving factors that will become the accepted new normal for this niche segment post-pandemic. Given that hotels already operate in a highly saturated business environment, and that on top of this pre-existing challenge the global health outbreak has put the sector in a vulnerable position, the hotel sector, especially the full-service luxury category, must evolve rapidly if it is to survive in the current business environment. Hotels can no longer rely on corporate travellers to generate higher revenue: since the unprecedented wake of the pandemic in 2020, many organizations have moved to conducting their business online, so hotels need to anticipate a significant drop in business travellers.
However, the rooms and the rest of the facilities must be occupied to keep the business operating. The way forward for hotels lies in the leisure sector, and the question now is which demographic of travellers to focus on: in this case, seniors, who have been repeatedly recognized as a lucrative market because of increased discretionary income, availability of time, and global population trends. To achieve the study objectives, a mixed-methods approach will be utilized, drawing on qualitative (netnography) and quantitative (survey) methods, cognitive and decision-making theories (means-end chain), and competitive theories to identify the salient drivers explaining senior hotel choice and its influence on their decision-making. The target population is repeat senior travellers aged 65 years and over who are UK residents or from the top tourist markets to the UK (USA, Germany, and France). Structural equation modelling will be employed to analyse the datasets. The theoretical contribution is the development of new concepts using a robust research design, as well as the advancement of existing frameworks in hotel research. Practically, the study will provide hotel management with up-to-date information for designing competitive marketing strategies and activities to target the mature market post-pandemic and over the long term.

Keywords: competitive advantage, covid-19, full-service hotel, five-star, luxury hotels

Procedia PDF Downloads 122
294 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach

Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman

Abstract:

Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using the differenced Normalized Burn Ratio (dNBR) method and spectral unmixing methods. The study area has rugged terrain with sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major cause of fires in this region is anthropogenic: human-induced fires set to obtain fresh leaves, to scare wild animals away from agricultural crops, for grazing within reserved forests, and for cooking and other purposes. The fires caused by the above activities affect a large area on the ground, necessitating precise estimation for further management and policy making. In the present study, two approaches have been used for the burnt area analysis. The first uses the dNBR index, computed from burn ratio values generated using the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of the Sentinel-2 image. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It was found that the dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain where the landscape is largely shaped by topographical variation, vegetation type, and tree density, the results may be strongly influenced by the effects of topography, complexity of tree composition, fuel load composition, and soil moisture.
Hence, under such variation in the factors influencing burnt area assessment, the dNBR approach commonly used over large areas may not be effective. The second approach attempted in the present study therefore utilizes a spectral unmixing method, in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach utilizing Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, and are further used for generating outputs using machine learning tools. The analysis of the results indicates that fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve noise in the data and can classify each individual pixel into the precise burnt/unburnt class.
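The dNBR computation described above can be sketched in a few lines. The snippet below, using hypothetical reflectance values rather than real Sentinel-2 data, computes the pre- and post-fire Normalized Burn Ratio from the NIR and SWIR bands and thresholds the difference; the 0.27 cut-off is a commonly cited moderate-severity threshold, not one taken from this study.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = nir.astype(float)
    swir = swir.astype(float)
    return (nir - swir) / (nir + swir + 1e-10)  # epsilon avoids divide-by-zero

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Toy 2x2 "images": after a burn, NIR reflectance drops and SWIR rises,
# so the left column burns and the right column does not.
nir_pre   = np.array([[0.5, 0.5], [0.5, 0.5]])
swir_pre  = np.array([[0.2, 0.2], [0.2, 0.2]])
nir_post  = np.array([[0.2, 0.5], [0.2, 0.5]])
swir_post = np.array([[0.4, 0.2], [0.4, 0.2]])

d = dnbr(nir_pre, swir_pre, nir_post, swir_post)
burnt = d > 0.27  # boolean burnt/unburnt mask
```

In practice the same two-band arithmetic is applied to the Sentinel-2 B8 (NIR) and B12 (SWIR) bands over the whole scene, which is where the terrain effects discussed above enter.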

Keywords: categorical data, log linear modeling, neural network, shifting cultivation

Procedia PDF Downloads 54
293 Prevalence of Occupational Asthma Diagnosed by Specific Challenge Test in 5 Different Working Environments in Thailand

Authors: Sawang Saenghirunvattana, Chao Saenghirunvattana, Maria Christina Gonzales, Wilai Srimuk, Chitchamai Siangpro, Kritsana Sutthisri

Abstract:

Introduction: Thailand is one of the fastest growing countries in Asia. It has moved from an agricultural to an industrialized economy. Workplaces have shifted from farms to factories, offices, and streets, where employees are exposed to certain chemicals and pollutants causing occupational diseases, particularly asthma. Work-related diseases are a major concern, and many studies have been published demonstrating certain professions and exposures that elevate the risk of asthma. Workers who exhibit coughing, wheezing, and difficulty breathing are brought to a healthcare setting, where a Pulmonary Function Test (PFT) is performed, and based on the results they are then diagnosed with asthma. These patients, known to have occupational asthma, eventually get well when removed from exposure to the environment. Our study focused on performing the PFT, or specific challenge test, to diagnose workers with occupational asthma, with the test executed within their workplace so as to maintain the environment and their daily exposure to certain levels of chemicals and pollutants. This has provided us with an understanding and reliable diagnosis of occupational asthma. Objective: To identify the prevalence of Thai workers who develop asthma caused by exposure to pollutants and chemicals in their working environment, by conducting interviews and performing the PFT or specific challenge test in their workplaces. Materials and Methods: This study was performed from January to March 2015 in Bangkok, Thailand. The percentage of abnormal symptoms among 940 workers in 5 different areas (plastic, fertilizer, and animal food factories; offices; and streets) was collected through a questionnaire. Demographic information, occupational history, and state of health were determined using a questionnaire and checklists. The PFT was executed in the participants' workplaces, and results were measured and evaluated. Results: The pulmonary function test was performed by 940 participants.
The specific challenge test was done in plastic, fertilizer, and animal food factories, in office environments, and on the streets of Thailand. Of the 100 participants working in the plastic industry, 65% complained of respiratory symptoms; none of them had an abnormal PFT. Of the 200 participants who worked with fertilizers and were exposed to sulfur dioxide, 20% complained of symptoms and 8% had an abnormal PFT. Of the 300 subjects working with animal food, 45% complained of respiratory symptoms and 15% had abnormal PFT results. In the office environment, where there is indoor pollution, 7% of 140 subjects had symptoms and 4% had an abnormal PFT. Of the 200 workers exposed to traffic pollution, 24% reported respiratory symptoms and 12% had an abnormal PFT. Conclusion: We were able to identify and diagnose participants with occupational asthma through abnormal lung function tests performed at their workplaces. The chemical agents and exposures were determined, and workers with occupational asthma were accordingly advised to avoid further exposure for a better chance of recovery. Further studies identifying the risk factors and causative agents of asthma in workplaces should be developed to encourage interventional strategies and programs that prevent occupation-related diseases, particularly asthma.

Keywords: occupational asthma, pulmonary function test, specific challenge test, Thailand

Procedia PDF Downloads 304
292 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples in many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. To circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs this can amount to an order of magnitude or more of speed-up. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO.
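As a minimal illustration of the Δ-ML strategy described above (high fidelity = low fidelity + learned correction), the sketch below fits only the correction on a small set of "expensive" labels and then predicts the high-fidelity output everywhere. The data are synthetic, and a plain linear least-squares model stands in for the GCN used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "materials": feature vectors x, a cheap low-fidelity estimate,
# and an expensive high-fidelity truth that differs by a smooth correction.
n = 200
x = rng.uniform(-1, 1, size=(n, 3))
y_low = x @ np.array([1.0, -0.5, 0.3])                  # cheap estimate
y_high = y_low + 0.2 * x[:, 0] - 0.1 * x[:, 1] + 0.05   # truth = low + correction

# Delta-ML: fit ONLY the correction (y_high - y_low) on a small
# high-fidelity subset -- far fewer expensive labels than candidates.
n_hf = 20
A = np.hstack([x[:n_hf], np.ones((n_hf, 1))])           # linear model with bias
coef, *_ = np.linalg.lstsq(A, (y_high - y_low)[:n_hf], rcond=None)

# Predict high fidelity everywhere as low fidelity + learned correction.
A_all = np.hstack([x, np.ones((n, 1))])
y_pred = y_low + A_all @ coef
rmse = np.sqrt(np.mean((y_pred - y_high) ** 2))
```

Because the synthetic correction is itself linear, 20 high-fidelity points recover it essentially exactly; in the paper's setting, the correction is a nonlinear function of the molecular graph, hence the GCN.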

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 40
291 Tracing Graduates of Vocational Schools with Transnational Mobility Experience: Conclusions and Recommendations from Poland

Authors: Michal Pachocki

Abstract:

This study investigates the effects of mobility in the context of a different environment and work culture by analysing learners' perception of their international work experience. Since this kind of professional training abroad is becoming more popular in Europe, mainly due to EU funding opportunities, it is of paramount importance to assess its long-term impact on the educational and career paths of former students. Moreover, the tracer study aimed at defining what professional, social, and intercultural competencies were gained or developed by the interns, and to what extent those competencies proved useful in meeting labour market requirements. As a populous EU member state that is actively modernizing its vocational education system (also with European funds), Poland can serve as an illustrative case study for the research problems described above. However, the examined processes are most certainly universal wherever mobility is included in the learning process. The target group of this research was former mobility participants, and the study was conducted using quantitative and qualitative methods: an online survey, with over 2,600 questionnaires completed by former mobility participants; individual in-depth interviews (IDIs) with 20 Polish graduates already present in the labour market; and 5 focus group interviews (FGIs) with 60 current students of Polish vocational schools who had recently returned from training abroad. As the adopted methodology included data triangulation, the collected findings were also supplemented with data obtained through desk research (mainly contextual information and a statistical summary of mobility implementation). The results of this research, to be presented in full scope in the conference presentation, include the participants' perception of their work mobility.
The vast majority of graduates agree that such an experience has had a significant impact on their professional careers, and say they would recommend training abroad to persons about to enter the labour market. Moreover, in their view, such a form of practical training, going beyond formal education, provided them with an opportunity to try their hand in the world of work. This allowed them, as they recounted, to get acquainted with a work system and context different from those experienced in Poland. Although work mobility is becoming an important element of the learning process in a growing number of Polish schools, this study reveals that many sending institutions lack a coherent strategy for planning domestic and foreign training programmes. Nevertheless, a significant number of graduates claim that such a synergy improves the quality of the training provided. The research also showed that transnational mobilities exert an impact on graduates' future careers and personal development. However, such impact is, in their opinion, dependent on other factors, such as the length of the training period, the nature and extent of the work, the recruitment criteria, and the quality of the organizational arrangements and mentoring provided to learners. This may indicate the salience of the sending and receiving institutions' organizational capacity to deal with mobility.

Keywords: learning mobility, transnational training, vocational education and training graduates, tracer study

Procedia PDF Downloads 96
290 A Shift in Approach from Cereal Based Diet to Dietary Diversity in India: A Case Study of Aligarh District

Authors: Abha Gupta, Deepak K. Mishra

Abstract:

The food security debate in India has centred on the availability and accessibility of cereals, which are regarded as the only food group needed to check hunger and improve nutrition. The significance of fruits, vegetables, meat, and other food products has largely been neglected, despite the fact that they provide essential nutrients to the body. There is a need to shift the emphasis from a cereal-based approach to a more diverse diet, so that the aim of achieving food security may change from merely reducing hunger to promoting overall health. This paper attempts to analyse how far dietary diversity has been achieved across different socio-economic groups in India. For this purpose, the paper sets out to determine (a) the percentage share of different food groups in total food expenditure and consumption by background characteristics, (b) the source of and preference for all food items, and (c) the diversity of diet across socio-economic groups. A cross-sectional survey covering 304 households, selected through proportional stratified random sampling, was conducted in six villages of Aligarh district, Uttar Pradesh, India. Information on the amount of food consumed, the source of consumption, and expenditure on food (74 food items grouped into 10 major food groups) was collected with a recall period of seven days. Per capita per day food consumption/expenditure was calculated by dividing household consumption/expenditure by household size and by seven. The food variety score was estimated by assigning a value of 0 to food groups/items that had not been eaten and 1 to those that had been consumed by the household in the last seven days; the sum over all food groups/items gave the food variety score. Dietary diversity was computed using the Herfindahl-Hirschman index. The findings of the paper show that the cereal, milk, and roots and tubers food groups contribute a major share of total consumption/expenditure.
Consumption of these food groups varies across socio-economic groups, whereas fruit, vegetable, meat, and other food consumption remains low and uniform. The estimation of dietary diversity shows a high concentration of diet, owing to the dominant consumption of cereals, milk, and root and tuber products, and dietary diversity varies slightly across background groups. Muslim, Scheduled Caste, small-farmer, lower-income, food-insecure, below-poverty-line, and labour families show a higher concentration of diet compared with their counterpart groups. These groups also show a lower mean number of food items consumed in a week, owing to economic constraints and the resulting lower access to more expensive food items. The results advocate a shift from a cereal-based diet to dietary diversity, which includes not only cereal and milk products but also nutrition-rich food items such as fruits, vegetables, meat, and other products. Integrating a dietary diversity approach into the country's food security programmes would help achieve nutrition security, as hidden hunger is widespread among the Indian population.
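The two diversity measures used above can be sketched directly. The snippet below computes the food variety score (a 0/1 tally over the 7-day recall) and the Herfindahl-Hirschman index over food-group expenditure shares; the household budgets are hypothetical, chosen to show that a cereal-heavy budget yields a higher (more concentrated) index than an evenly spread one.

```python
def herfindahl(expenditures):
    """HHI over expenditure shares: sum of squared shares.

    Ranges from 1/n (perfectly even across n groups) to 1 (all in one group);
    a higher value means a more concentrated, less diverse diet.
    """
    total = sum(expenditures)
    shares = [e / total for e in expenditures]
    return sum(s * s for s in shares)

def food_variety_score(consumed_flags):
    """Count of items eaten in the 7-day recall (each flag is 0 or 1)."""
    return sum(consumed_flags)

# Hypothetical weekly budgets over six groups:
# cereals, milk, fruit, vegetables, meat, other.
cereal_heavy = [60, 25, 5, 5, 3, 2]
diverse      = [20, 20, 15, 15, 15, 15]

hhi_concentrated = herfindahl(cereal_heavy)
hhi_diverse = herfindahl(diverse)
fvs = food_variety_score([1, 1, 1, 0, 1, 0])
```

Households whose spending is dominated by cereals and milk score a markedly higher HHI, which is exactly the pattern the survey reports for the poorer socio-economic groups.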

Keywords: dietary diversity, food security, India, socio-economic groups

Procedia PDF Downloads 340
289 Developing a GIS-Based Tool for the Management of Fats, Oils, and Grease (FOG): A Case Study of Thames Water Wastewater Catchment

Authors: Thomas D. Collin, Rachel Cunningham, Bruce Jefferson, Raffaella Villa

Abstract:

Fats, oils, and grease (FOG) are by-products of food preparation and cooking processes. FOG enters wastewater systems through a variety of sources, such as households, food service establishments, and industrial food facilities. Over time, if no source control is in place, FOG builds up on pipe walls, leading to blockages and potentially to sewer overflows, which are a major risk to the environment and human health. UK water utilities spend millions of pounds annually trying to control FOG. Although UK legislation specifies that discharge of such material is against the law, it is often complicated for water companies to identify and prosecute offenders. Hence, there is uncertainty about the approach to take to FOG management, and research is needed to realize the full potential of current practices. The aim of this research was to undertake a comprehensive study to document the extent of FOG problems in sewer lines and reinforce existing knowledge. Data were collected to develop a model estimating the quantities of FOG available for recovery within Thames Water wastewater catchments. Geographical Information System (GIS) software was used to integrate the data with a geographical component. FOG was responsible for at least a third of sewer blockages in the Thames Water waste area. A waste-based approach was developed through an extensive review to estimate the potential for FOG collection and recovery. Three main sources were identified: residential, commercial, and industrial. Commercial properties were identified as one of the major FOG producers. The total potential FOG generated was estimated for the 354 wastewater catchments. Additionally, raw and settled sewage were sampled and analysed for FOG (as hexane extractable material) monthly at 20 sewage treatment works (STWs) for three years. A good correlation was found between the sampled FOG and population equivalent (PE).
On average, a difference of 43.03% was found between the estimated FOG (waste-based approach) and the sampled FOG (raw sewage sampling). It was suggested that the approach undertaken could overestimate the FOG available, that the sampling could capture only a fraction of the FOG arriving at STWs, and/or that the difference could be accounted for by FOG accumulating in sewer lines. Furthermore, it was estimated that, on average, FOG could contribute up to 12.99% of the primary sludge removed. The model was further used to investigate the relationship between estimated FOG and the number of blockages: the higher the FOG potential, the higher the number of FOG-related blockages. The GIS-based tool was used to identify critical areas, i.e., areas with high FOG potential and a high number of FOG blockages. As reported in the literature, FOG was one of the main causes of sewer blockages. By identifying these critical areas, the model further explored the potential for source control in terms of 'sewer relief' and waste recovery, helping to target where the benefits of implementing management strategies could be highest. However, FOG is still likely to persist throughout the networks, and further research is needed to assess downstream impacts (i.e., at STWs).
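The waste-based approach described above amounts to a per-source tally of FOG yields aggregated over each catchment. The sketch below illustrates the idea with entirely hypothetical per-source yield coefficients (they are not Thames Water figures); in the study, such an estimate would then be joined to catchment geometries in GIS and compared against blockage counts.

```python
# Hypothetical waste-based FOG estimate for one catchment, summing
# per-source annual yields (coefficients are illustrative placeholders).
def fog_potential(households, food_outlets, industrial_sites,
                  kg_per_household=5.0, kg_per_outlet=500.0, kg_per_site=2000.0):
    """Annual FOG available for recovery (kg) as a simple linear tally
    over the three source categories identified in the study:
    residential, commercial, and industrial."""
    return (households * kg_per_household
            + food_outlets * kg_per_outlet
            + industrial_sites * kg_per_site)

# One hypothetical catchment: 10,000 households, 40 food outlets, 2 sites.
catchment_kg = fog_potential(households=10_000, food_outlets=40,
                             industrial_sites=2)
```

Repeating this per catchment and ranking by both estimated potential and recorded FOG blockages is what flags the "critical areas" the tool targets.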

Keywords: fat, FOG, GIS, grease, oil, sewer blockages, sewer networks

Procedia PDF Downloads 209
288 The Influence of Perinatal Anxiety and Depression on Breastfeeding Behaviours: A Qualitative Systematic Review

Authors: Khulud Alhussain, Anna Gavine, Stephen Macgillivray, Sushila Chowdhry

Abstract:

Background: Estimates show that by the year 2030, mental illness will account for more than half of the global economic burden of disease, second only to non-communicable diseases. The perinatal period is often characterised by psychological ambivalence and a mixed anxiety-depressive condition. Maternal mental disorder is associated with perinatal anxiety and depression and affects breastfeeding behaviours. Studies also indicate that maternal mental health can considerably influence a baby's health in numerous respects and can impact newborn health through a lack of adequate breastfeeding. However, studies reporting factors associated with breastfeeding behaviours are predominantly quantitative. It is therefore not clear what literature is available for understanding the factors affecting breastfeeding and perinatal women's perspectives and experiences. Aim: This review aimed to explore the perceptions and experiences of women with perinatal anxiety and depression, as well as how these experiences influence their breastfeeding behaviours. Methods: A systematic literature review of qualitative studies was conducted in line with the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) guidelines. Four electronic databases (CINAHL, PsycINFO, Embase, and Google Scholar) were searched for relevant studies using a search strategy. The search was restricted to studies published in English between 2000 and 2022. Findings from the literature were screened using pre-defined screening criteria, and the quality of eligible studies was appraised using the Walsh and Downe (2006) checklist. Findings were extracted and synthesised following Braun and Clarke. The review protocol was registered on PROSPERO (Ref: CRD42022319609). Results: A total of 4,947 studies were identified from the four databases. Following duplicate removal and screening, 16 studies met the inclusion criteria. The studies included 87 pregnant and 302 post-partum women from 12 countries.
The participants came from a variety of economic, regional, and religious backgrounds and were mainly aged 18 to 45 years. Three main themes were identified: barriers to breastfeeding; breastfeeding facilitators; and emotional disturbance and breastfeeding. Seven subthemes emerged from the data: expectation versus reality; uncertainty about maternal competencies; body image and breastfeeding; lack of sufficient breastfeeding support; family and caregivers' support influencing positive breastfeeding practices; breastfeeding education; and causes of mental strain among breastfeeding women. Breastfeeding duration is affected in women with mental health disorders, irrespective of their desire to breastfeed. Conclusion: There is significant empirical evidence that breastfeeding behaviour and perinatal mental disturbance are linked. However, the findings cannot readily be applied to Saudi women owing to a lack of empirical qualitative information. To improve the psychological well-being of mothers, it is crucial to explore and address any concerns with their mental, physical, and emotional well-being. Robust research is therefore needed so that breastfeeding intervention researchers and policymakers can focus on precisely what needs to be done to help mentally distressed perinatal women and their newborns.

Keywords: pregnancy, perinatal period, anxiety, depression, emotional disturbance, breastfeeding

Procedia PDF Downloads 98
287 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump

Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani

Abstract:

The side channel pump has distinctive hydraulic performance characteristics compared with other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization is soaring in petrochemical, food-processing, automotive, and aerospace fuel-pumping applications where high heads are required at low flow rates. The side channel pump is characterized by unstable flow: after fluid enters the impeller passage, it moves into the side channel, returns to the impeller, and then moves on to the next circulation, so the flow leaves the pump following a helical path. The pressure fluctuation exhibited in this flow contributes greatly to the unwanted noise and vibration associated with it. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain its pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unsteady flow was carefully investigated under three working conditions: 0.8QBEP, 1.0QBEP, and 1.2QBEP. The results showed that, for all three operating conditions, the pressure fluctuation distribution around the pressure side of the blade is greater than on the suction side at the impeller and side channel interface (z=0). The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because the high circulation velocity causes an intense exchanged flow between the impeller and side channel. Time- and frequency-domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations.
The time-domain analysis showed that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low-pressure inflow from the high-pressure outflow. The pressure fluctuation amplitudes in the frequency-domain spectrum at the different monitoring points depicted a gently decreasing trend common to all operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz, continuing at integer multiples of the rotating shaft frequency. The mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows: conditions 0.8QBEP and 1.0QBEP depicted fewer, similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle, and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared with 1.0QBEP. It can be concluded that for industrial applications where high heads are required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
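The frequency-domain analysis summarized above can be illustrated with a short sketch: given a pressure signal containing the reported excitation components, a discrete Fourier transform recovers the dominant frequencies. The sampling rate, record length, and amplitudes below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative sketch: recover dominant excitation frequencies from a
# synthetic pressure-fluctuation signal. The component frequencies
# (600, 1200, 1800 Hz) follow the reported spectrum; the sampling rate
# and amplitudes are invented for illustration.
fs = 20_000                      # sampling frequency [Hz] (assumed)
t = np.arange(0, 0.5, 1 / fs)    # 0.5 s record
signal = (1.0 * np.sin(2 * np.pi * 600 * t)
          + 0.6 * np.sin(2 * np.pi * 1200 * t)
          + 0.3 * np.sin(2 * np.pi * 1800 * t))

spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Pick the three largest spectral peaks
peak_idx = np.argsort(spectrum)[-3:]
peaks = sorted(freqs[peak_idx])
print(peaks)  # -> [600.0, 1200.0, 1800.0]
```

Because the record length holds an integer number of cycles of each component, the peaks land exactly on FFT bins; in practice windowing would be applied to limit spectral leakage.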

Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump

Procedia PDF Downloads 136
286 Music Piracy Revisited: Agent-Based Modelling and Simulation of Illegal Consumption Behavior

Authors: U. S. Putro, L. Mayangsari, M. Siallagan, N. P. Tjahyani

Abstract:

The National Collective Management Institute (LKMN) in Indonesia reported that legal music products amounted to about 77.552.008 units while illegal music products amounted to about 22.0688.225 units in 1996, and this number keeps getting worse every year. Consequently, Indonesia was named one of the countries with the highest piracy levels in 2005. This study models people's decisions toward unlawful behavior, music content piracy in particular, using agent-based modeling and simulation (ABMS). The actors in the model are classified as legal consumers, illegal consumers, and neutral consumers. The decision toward piracy among the actors is a manifestation of the social norm, whose attributes are social pressure, peer pressure, social approval, and the perceived prevalence of piracy. These attributes fluctuate depending on the majority behavior in the surrounding social network. Two main interventions are undertaken in the model, campaign and peer influence, leading to four scenarios in the simulation: a positively-framed descriptive norm message, a negatively-framed descriptive norm message, a positively-framed injunctive norm with benefits message, and a negatively-framed injunctive norm with costs message. Using NetLogo, the model is simulated in 30 runs with 10,000 iterations each. The initial number of agents was set to 100 with a 95:5 proportion of illegal to legal consumption, an assumption based on data indicating that 95% of music industry sales are pirated. The finding of this study is that the negatively-framed descriptive norm message has a reversed, worsening effect on music piracy. The study shows that selecting a context-based campaign is the key to reducing the intention toward music piracy as unlawful behavior by increasing compliance awareness.
The Indonesian context reveals that the majority of people have actively engaged in music piracy as unlawful behavior, so people regard this illegal act as common. Therefore, publicizing how widespread and big the problem is could instead encourage illegal consumption. The positively-framed descriptive norm message scenario works best at reducing music piracy, as it focuses on supporting positive behavior and conveys the right perception of the phenomenon. Music piracy is not merely an economic but rather a social phenomenon, owing to the actors' underlying motivation, which has shifted toward community sharing. The indication of a misconception of value co-creation in the context of music piracy in Indonesia is also discussed. This study contributes theoretically by showing that understanding how social norms configure decision-making is essential to break down the phenomenon of unlawful behavior in the music industry. In practice, it proposes reward-based and context-based strategies as the most relevant for stakeholders in the music industry. Furthermore, the findings may generalize well beyond the music piracy context. As an emerging body of work that systematically constructs the backstage of how law and social factors shape decision-making, it will be interesting to see how the model performs in other decision-behavior situations.
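The norm-driven mechanism can be conveyed in a minimal Python sketch (not the authors' NetLogo implementation; the agent count and 95:5 split follow the paper, but the imitation rule and campaign strength are illustrative assumptions): agents copy the majority behavior of a peer sample, and a campaign shifts the perceived prevalence of piracy.

```python
import random

# Minimal sketch of the social-norm mechanism: each agent is an illegal (1)
# or legal (0) music consumer and, at each iteration, adopts the majority
# behavior among a random sample of peers (its perceived prevalence of
# piracy). A campaign is modeled as a bias subtracted from that prevalence.
random.seed(42)

def step(agents, peer_count=10, campaign_bias=0.0):
    updated = []
    for _ in agents:
        peers = random.sample(agents, peer_count)
        perceived = sum(peers) / peer_count - campaign_bias
        updated.append(1 if perceived > 0.5 else 0)
    return updated

def simulate(campaign_bias, steps=50):
    # 100 agents with a 95:5 illegal-to-legal split, as in the paper
    agents = [1] * 95 + [0] * 5
    for _ in range(steps):
        agents = step(agents, campaign_bias=campaign_bias)
    return sum(agents)

baseline = simulate(campaign_bias=0.0)   # piracy is self-reinforcing
campaign = simulate(campaign_bias=0.5)   # strong positive-framing campaign
print(baseline, campaign)
```

The sketch reproduces the qualitative point only: with no intervention the high perceived prevalence locks piracy in, while a campaign that corrects the perception collapses it.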

Keywords: music piracy, social norm, behavioral decision-making, agent-based model, value co-creation

Procedia PDF Downloads 187
285 Adapting Inclusive Residential Models to Match Universal Accessibility and Fire Protection

Authors: Patricia Huedo, Maria José Ruá, Raquel Agost-Felip

Abstract:

Ensuring sustainable development of urban environments means guaranteeing adequate environmental conditions, being resilient, and meeting conditions of safety and inclusion for all people, regardless of their condition. All existing buildings should meet basic safety conditions and be equipped with safe and accessible routes, along with visual, acoustic, and tactile signals to protect their users and potential visitors, whether or not the buildings undergo rehabilitation or change-of-use processes. Moreover, from a social perspective, we consider it necessary to prioritize buildings occupied by the most vulnerable groups of people, who currently lack specific regulations tailored to their needs. Some residential models in operation not only fall outside the scope of application of the regulations in force; they also lack a project or technical data that would allow the fire behavior of their construction materials to be known. However, the difficulty and cost involved in adapting the entire building stock to current regulations can never justify a lack of safety for people. Hence, this work develops a simplified model to assess compliance with the basic safety conditions in case of fire and its compatibility with the specific accessibility needs of each user. The purpose is to support the designer in decision-making and to contribute to the development of a basic fire safety certification tool to be applied in inclusive residential models. The methodology developed supports designers in adapting Social Services Centers, usually intended for vulnerable people. It incorporates a checklist of nine items and information from sources or standards that designers can use to justify compliance or propose solutions. For each item, the verification system is justified and possible sources of consultation are provided, taking into account that technical documentation of construction systems or building materials may be lacking.
The procedure is based on diagnosing the degree of compliance with fire conditions of residential models used by vulnerable groups, considering the special accessibility conditions required by each user group. Through visual inspection and site surveying, the verification model can serve as a support tool, reducing the number of tests to be requested by over 75% and thereby significantly streamlining and simplifying the diagnostic phase. To illustrate the methodology, two different buildings in the Valencian Region (Spain) were selected: a residential mental health facility located in a rural area on the outskirts of a small town, and a day care facility for individuals with intellectual disabilities located in a medium-sized city. The comparison between the case studies allows the model to be validated under distinct conditions. Verifying compliance with a basic safety level could underpin a quality seal and a public register of buildings adapted to fire regulations, similar to what is being done for other attributes such as energy performance.

Keywords: fire safety, inclusive housing, universal accessibility, vulnerable people

Procedia PDF Downloads 22
284 The Development of Assessment Criteria Framework for Sustainable Healthcare Buildings in China

Authors: Chenyao Shen, Jie Shen

Abstract:

The rating system provides an effective framework for assessing building environmental performance and integrating sustainable development into building and construction processes; it can be used as a design tool for developing appropriate sustainable design strategies and determining performance measures to guide sustainable design and decision-making. Healthcare buildings are resource (water, energy, etc.) intensive. To maintain high-cost operations and complex medical facilities, they require a great deal of hazardous and non-hazardous materials and stringent control of environmental parameters, and they are responsible for producing polluting emissions. Compared with other building types, the impact of healthcare buildings on the environment across their full life cycle is particularly large. With broad recognition among designers and operators that energy use can be reduced substantially, many countries have set up their own green rating systems for healthcare buildings. Four green healthcare building evaluation systems are widely acknowledged: the Green Guide for Health Care (GGHC), jointly organized by HCWH and CMPBS in the United States in 2003; BREEAM Healthcare, issued by the Building Research Establishment (BRE) in 2008; the Green Star - Healthcare v1 tool, released by the Green Building Council of Australia (GBCA) in 2009; and LEED Healthcare 2009, released by the United States Green Building Council (USGBC) in 2011. In addition, the German Sustainable Building Council (DGNB) has been developing the German Sustainable Building Evaluation Criteria for healthcare (DGNB HC). In China, more and more scholars and policymakers have recognized the importance of sustainability assessment and have adapted some of these tools and frameworks.
China’s first comprehensive assessment standard for green building (the GBTs) was issued in 2006 (last updated in 2014), promoting sustainability in the built environment and raising awareness of environmental issues among architects, engineers, contractors, and the public. However, healthcare buildings were not covered by the GBTs evaluation system because of their complex medical procedures, strict indoor and outdoor environmental requirements, and the energy consumption of their various functional rooms. Learning from the experience of GGHC, BREEAM, and LEED HC above, China’s first assessment criteria for green hospital/healthcare buildings were finally released in December 2015. Combining quantitative and qualitative assessment criteria, the standard highlights the differences between healthcare and other public buildings in meeting the functional needs of medical facilities and special groups. This paper focuses on the assessment criteria framework for sustainable healthcare buildings, for which a comparison of different rating systems is essential. Descriptive analysis is conducted together with cross-matrix analysis to reveal rich information on green assessment criteria in a coherent manner. The research aims to establish whether the green elements for healthcare buildings in China differ from those adopted in other countries, and how China’s assessment criteria framework could be improved.

Keywords: assessment criteria framework, green building design, healthcare building, building performance rating tool

Procedia PDF Downloads 146
283 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

Producing an accurate sketch of a suspect from a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to an eyewitness's description for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNNs) have shown great promise in Natural Language Processing (NLP) tasks, and Generative Adversarial Networks (GANs) have proven very effective in image generation. In this study, a trained GAN conditioned on textual features, automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. This makes it possible to generate many reasonably accurate alternatives from which the witness can attempt to identify a suspect. It reduces subjectivity in decision-making by both the eyewitness and the artist while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than grid search, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation: witness reports detailing criminals from Interpol or other law enforcement agencies are run through the network, and, using the descriptions provided, samples are generated and compared with ground-truth images of the criminal to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics would demonstrate the accuracy of the approach, supporting the case that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information.
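Of the two evaluation criteria named above, PSNR has a simple closed form that can be sketched directly; the image size and noise level below are illustrative assumptions, and SSIM would typically come from a library such as scikit-image rather than being hand-rolled.

```python
import numpy as np

# Sketch of the PSNR evaluation criterion for 8-bit images: higher PSNR
# means the generated image is closer to the ground truth.
def psnr(reference, generated, max_value=255.0):
    mse = np.mean((reference.astype(float) - generated.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Illustrative comparison: a ground-truth image vs. a noisy rendition
rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(ground_truth + rng.normal(0, 10, (64, 64)), 0, 255)

score = psnr(ground_truth, noisy)
print(score)  # around 28 dB for sigma = 10 noise
```

In the evaluation pipeline described above, `generated` would be the GAN output and `reference` the criminal's ground-truth photograph.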

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 161
282 Discover Your Power: A Case for Contraceptive Self-Empowerment

Authors: Oluwaseun Adeleke, Samuel Ikan, Anthony Nwala, Mopelola Raji, Fidelis Edet

Abstract:

Background: The risks associated with each pregnancy are carried almost entirely by the woman; however, the decision about whether and when to get pregnant is one that several others contend with her to make. The self-care concept offers women of reproductive age the opportunity to take control of their health and its determinants with or without the influence of a healthcare provider, family, and friends. DMPA-SC self-injection (SI) is becoming the cornerstone of contraceptive self-care and has the potential to expand access and create opportunities for women to take control of their reproductive health. Methodology: To gain insight into the influences that interfere with a woman's capacity to make contraceptive choices independently, the Delivering Innovations in Selfcare (DISC) project conducted two intensive rounds of qualitative data collection and triangulation that included provider, client, and community mobilizer interviews, facility observations, and routine program data collection. Respondents were sampled using a convenience approach, and the data collected were analyzed using a codebook and ATLAS.ti. The research team came together for a participatory analysis workshop to explore and interpret emergent themes. Findings: Insights indicate that women are increasingly finding their voice and independently seeking services to prevent a deterioration of their economic situation and to achieve personal ambitions. Even women who hold independent decision-making power still prefer to share it with their male partners. Male partners' influence on women's use of family planning and self-injection was the most dominant. There were examples of men supporting women's use of contraception to prevent unintended pregnancy, as well as men withholding support. Other men outright deny their partners contraceptive services, and their partners cede this sexual and reproductive health right without objection.
A woman’s decision to initiate family planning is affected by myths and misconceptions, many of which have cultural and religious origins. Some tribes are known for their reluctance to use contraception and often attach stigma to the pursuit of family planning (FP) services. Information given by the provider is accepted and, in many cases, clients cede power to providers to shape their SI user journey; a provider’s influence on a client’s decision to self-inject is reinforced by the provider’s own biases and concerns. Some clients are inhibited by the presence of peers during group education at the health facility, while others are motivated to seek FP services by the interest their peers express. There is also a growing influence of social media on FP uptake, particularly Facebook fora. Conclusion: The convenience of self-administration at home is a benefit for those who contend with these various forms of social influence, as well as for covert users. Beyond increasing choice and reducing barriers to accessing Sexual and Reproductive Health (SRH) services, it can initiate a process of self-discovery and agency in the contraceptive user journey.

Keywords: selfcare, self-empowerment, agency, DMPA-SC, contraception, family planning, influences

Procedia PDF Downloads 71
281 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, from low-level to high-level tasks, has been widely recast in the deep learning framework, although deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations; the architecture is shift-invariant owing to its shared weights. However, it is often computationally intractable to optimize a network with a large number of convolution layers, because of the large number of unknowns that must be optimized with respect to a training set that generally has to be large enough for the model to generalize effectively. It is also necessary to limit the size of the convolution kernels for reasons of computational expense, despite recent developments in effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout deep CNN architectures. Yet it is often desirable to consider different scales when analyzing visual features at different layers of the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviations of the random filters. This permits a large number of random filters at the cost of only one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filters, although additional cost is incurred when computing the convolutions in the feed-forward procedure. The use of random kernels of varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments that quantitatively compare well-known CNN architectures against our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
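The forward pass of such a layer can be sketched as follows (an illustration of the idea, not the authors' implementation): the convolution kernels are drawn at random with varying sizes and kept fixed, and the only per-filter trainable parameter is a scalar weight, here initialized from the filter's standard deviation as described above.

```python
import numpy as np

# Sketch of a random-kernel layer: fixed random filters of several sizes,
# one trainable scalar weight per filter.
rng = np.random.default_rng(0)

def conv2d_same(image, kernel):
    """Naive 'same'-padded 2D correlation (stride 1, odd kernel size)."""
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(image, pad)
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

image = rng.standard_normal((16, 16))
kernel_sizes = [3, 5, 7]                      # multiple scales in one layer
kernels = [rng.standard_normal((k, k)) for k in kernel_sizes]
weights = np.array([kern.std() for kern in kernels])  # one scalar per filter

# Layer output: weighted sum of the fixed random-filter responses; only
# `weights` would be updated during training.
responses = np.stack([conv2d_same(image, kern) for kern in kernels])
output = np.tensordot(weights, responses, axes=1)
print(output.shape)  # -> (16, 16)
```

Because the kernels never change, back-propagation only has to differentiate with respect to the scalar weights, which is why the training cost stays flat as the filters grow.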

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 289
280 The Greek Revolution Through the Foreign Press. The Case of the Newspaper "The London Times" In the Period 1821-1828

Authors: Euripides Antoniades

Abstract:

In 1821 the Greek Revolution, under the political influence that arose from the French Revolution and the corresponding movements in Italy, Germany, and America, sought the liberation of the nation and the establishment of an independent national state. Topics published in the British press regarding the Greek Revolution focused on: a) the right of the Greeks to claim their freedom from Turkish domination in order to establish an independent state based on the principle of national autonomy, b) criticism of Turkish rule as illegal and of the power of the Ottoman Sultan as arbitrary, c) the recognition of the Greek identity and its distinction from the Turkish one, and d) the endorsement of the Greeks as descendants of the ancient Greeks. The advantage of the newspaper as a medium is that it shares information and ideas and deals with issues in greater depth and detail than other media such as radio or television, presenting, in chronological or thematic order, the news, opinions, or announcements about the most important events that have occurred in a place during a specified period of time. This paper draws on the rich archive of The London Times, quoting extracts from publications of the period to convey the British public's perspective on the Greek Revolution from its beginning until the London Protocol of 1828. Furthermore, it analyses the newspaper's publications in terms of the number of references to the Greek Revolution, front-page and editorial references, and the size of publications on the revolution during the period 1821-1828. A combination of qualitative and quantitative content analysis was applied: an attempt was made to record references to the Greek Revolution along with the usage of specific words and expressions that contribute to the representation of the historical events and their exposure to the reading public.
Key findings of this research reveal that: a) The London Times carried frequent, passionate daily articles on the events in Greece, notable for their length and context; b) British public opinion was influenced by this particular newspaper; and c) the newspaper published varied news about the revolution, adopting the role of animator of the Greek struggle. For instance, war events and the battles of Wallachia and Moldavia, Hydra, Crete, Psara, Messolonghi, and the Peloponnese were presented not only to inform readers but to promote the essential need for freedom and the establishment of an independent Greek state. In fact, this type of news was the main substance of The London Times' coverage, establishing a positive image of the Greek Revolution and contributing to European diplomatic developments, such as the standpoint of France, which did not wish to be detached from the conclusions regarding the English loans, and the death of Alexander I of Russia and his succession by the ambitious Nicholas. These factors shifted the attitudes of the British and the Russians, respectively, toward a more positive approach to Greece. The Great Powers maintained a neutral position in the Greek-Ottoman conflict while at the same time strengthening the Greek side by offering aid.

Keywords: Greece, revolution, newspaper, The London Times, London, Great Britain, mass media

Procedia PDF Downloads 90
279 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics

Authors: Jingsi Li, Neil S. Ferguson

Abstract:

Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This insight stems mostly from cognitive and psychological research; transport-related fields, however, have seen scant discussion of it. It is conceivable that in many activity-travel contexts time pressure is a potentially important factor, since an excessive amount of decision time incurs the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, and social requirements. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior and thus have an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be incurred essentially because travelers plan their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, less computationally demanding non-compensatory heuristic models are considered as alternatives for simulating travelers' responses. The paper contributes to travel behavior modeling research by investigating the following questions: how can time pressure be measured properly in an activity-travel day-plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of an activity affect travelers' rescheduling behavior? What behavioral model best describes the process of making activity-travel rescheduling decisions? How do the identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach.
Data on travelers’ activity-travel rescheduling behavior are collected via a web-based interactive survey presenting a fictitious scenario comprising multiple uncertain events affecting activities or travel. The experiments are conducted to gain a realistic picture of activity-travel rescheduling under time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting perceived time pressure may yield inaccurate forecasts of choice probability and overestimate responsiveness to policy changes.
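The contrast between a compensatory (weighted-utility) choice and the satisficing heuristic found in the study can be sketched as follows; the rescheduling alternatives, attribute scores, and aspiration threshold are invented for illustration, not drawn from the survey data.

```python
# Illustrative contrast: compensatory utility maximization vs. satisficing.
alternatives = {
    # (activity importance retained, schedule feasibility under disruption)
    "keep full plan":      (0.9, 0.2),
    "drop minor activity": (0.6, 0.8),
    "reroute, keep all":   (0.95, 0.6),
}

def compensatory(options, weights=(0.5, 0.5)):
    """Evaluate every option; pick the highest weighted utility."""
    utility = lambda scores: sum(w * s for w, s in zip(weights, scores))
    return max(options, key=lambda name: utility(options[name]))

def satisficing(options, aspiration=0.6):
    """Under time pressure: accept the first option meeting all aspirations."""
    for name, scores in options.items():
        if all(s >= aspiration for s in scores):
            return name
    return next(iter(options))  # fall back to the status quo

print(compensatory(alternatives))  # -> 'reroute, keep all'
print(satisficing(alternatives))   # -> 'drop minor activity'
```

The satisficing traveler stops at the first acceptable plan (dropping a minor activity) rather than exhaustively weighing every alternative, mirroring the behavior the paper reports under time pressure.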

Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management

Procedia PDF Downloads 112
278 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles

Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska

Abstract:

In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, the disinfection of water, air and surfaces, and the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence its photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, i.e., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal and sonochemical methods, chemical vapour deposition and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may play the role of a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during the synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with a spherical structure was successfully achieved by a solvothermal method, using tetra-tert-butyl orthotitanate (TBOT) as the precursor. The reaction process was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4] and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of each IL to TBOT (IL:TBOT) were chosen. For comparison, reference TiO2 was prepared using the same method without IL addition. 
Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area analysis, NCHS analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of phenol photodegradation in the aqueous phase as a model pollutant, as well as by the formation of hydroxyl radicals based on detection of the fluorescent product of coumarin hydroxylation. The analysis results showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. The TEM micrographs clearly showed that the particles were composed of inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs resulted in a rise in the photocatalytic activity as well as the BET surface area of TiO2 as compared to pure TiO2. The results of the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with the more efficient degradation of phenol. NCHS analysis showed that ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds.

Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2

Procedia PDF Downloads 267
277 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is a worldwide imaging modality used to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have sought to quantify mammographic breast density accurately. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve radiologists’ workload by providing an initial opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, which makes it hard to quantify mammographic breast density automatically. Therefore, pre-processing is needed to segment the pectoral muscle, which may otherwise be erroneously quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique incidence digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method; the seed of the active contour is placed at the pectoral muscle boundary found by the Hough transform. 
An experienced radiologist also performed the pectoral muscle segmentation manually. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, supporting the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. Segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
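As a rough illustration of the Hough step described above, the following self-contained sketch implements straight-line Hough voting in plain NumPy and recovers the dominant line of a synthetic binary edge map (a stand-in for the thresholded pectoral-muscle boundary). It is not the authors' Matlab® code; the image and accumulator resolution are illustrative only.

```python
import numpy as np

def hough_peak(edges, n_theta=180):
    """Return (theta, rho) of the strongest straight line in a binary edge map,
    using the normal form x*cos(theta) + y*sin(theta) = rho."""
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edges.shape)))     # max possible |rho|
    acc = np.zeros((2 * diag, n_theta), dtype=int)  # Hough accumulator
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1   # one vote per angle
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t], r - diag

# Synthetic stand-in for a thresholded edge map: a straight "pectoral muscle"
# boundary along the anti-diagonal y = 49 - x of a 50x50 image.
edges = np.zeros((50, 50), dtype=bool)
for x in range(50):
    edges[49 - x, x] = True

theta, rho = hough_peak(edges)  # the active contour would be seeded along this line
```

In the paper's pipeline, the detected line would then seed the active contour that refines the muscle boundary.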

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 350
276 A Short Dermatoscopy Training Increases Diagnostic Performance in Medical Students

Authors: Magdalena Chrabąszcz, Teresa Wolniewicz, Cezary Maciejewski, Joanna Czuwara

Abstract:

BACKGROUND: Dermoscopy is a clinical tool known to improve the early detection of melanoma and other malignancies of the skin. Over the past few years, melanoma has grown into a disease of socio-economic importance due to its increasing incidence and persistently high mortality rates. Early diagnosis remains the best method to reduce melanoma and non-melanoma skin cancer related mortality and morbidity. Dermoscopy is a noninvasive technique that consists of viewing pigmented skin lesions through a hand-held lens. This simple procedure increases melanoma diagnostic accuracy by up to 35%. Dermoscopy is currently the standard for the clinical differential diagnosis of cutaneous melanoma and for qualifying lesions for excision biopsy. Like any clinical tool, training is required for its effective use. The introduction of small and handy dermatoscopes contributed significantly to establishing dermatoscopy as a useful first-line tool. Non-dermatologist physicians are well positioned for opportunistic melanoma detection; however, education in the skin cancer examination is limited during medical school and traditionally lecture-based. AIM: The aim of this randomized study was to determine whether adding dermoscopy to the standard fourth-year medical curriculum improves the ability of medical students to distinguish between benign and malignant lesions, and to assess the acceptability of and satisfaction with the intervention. METHODS: We performed a prospective study in two cohorts of fourth-year medical students at the Medical University of Warsaw. Groups taking the dermatology course were randomly assigned to either cohort A, with access to dermatoscopy through their teacher only (one dermatoscope per 15 students), or cohort B, with full access to dermatoscopy during their clinical classes (one dermatoscope per 4 students, constantly available, plus a 15-minute dermoscopy tutorial). 
Students in both study arms took an image-based test of 10 lesions to assess their ability to differentiate benign from malignant lesions, and completed a post-intervention survey collecting minimal background information, attitudes about the skin cancer examination, and course satisfaction. RESULTS: Cohort B scored higher than cohort A in the recognition of both nonmelanocytic (P < 0.05) and melanocytic (P < 0.05) lesions. Medical students who could use a dermatoscope themselves also reported higher satisfaction after the dermatology course than the group with limited access to this diagnostic tool. Moreover, according to our results, they were more motivated to learn dermatoscopy and to use it in their future everyday clinical practice. LIMITATIONS: The number of participants was limited. Further study of the application in clinical practice is still needed. CONCLUSION: Although the use of the dermatoscope within dermatology as a specialty is widely accepted, sufficiently validated clinical tools for the examination of potentially malignant skin lesions are lacking in general practice. Introducing medical students to dermoscopy in the fourth-year curriculum of medical school may improve their ability to differentiate benign from malignant lesions. It can also encourage students to use dermatoscopy in their future practice, which can significantly improve the early recognition of malignant lesions and thus decrease melanoma mortality.

Keywords: dermatoscopy, early detection of melanoma, medical education, skin cancer

Procedia PDF Downloads 114
275 A Flipped Learning Experience in an Introductory Course of Information and Communication Technology in Two Bachelor's Degrees: Combining the Best of Online and Face-to-Face Teaching

Authors: Begona del Pino, Beatriz Prieto, Alberto Prieto

Abstract:

Two opposite approaches to teaching can be considered: in-class learning (teacher-oriented) versus virtual learning (student-oriented). The best-known example of the latter is the Massive Open Online Course (MOOC). Both methodologies have pros and cons, and nowadays there is an increasing trend towards combining them. Blended learning is considered a valuable tool for improving learning, since it combines student-centred interactive e-learning with face-to-face instruction. The aim of this contribution is to share the experience and research results of a blended-learning project that took place at the University of Granada (Spain). The research objective was to show how combining the didactic resources of a MOOC with in-class teaching, interacting directly with students, can substantially improve academic results as well as student acceptance. The proposed methodology is based on flipped learning techniques applied to the subject ‘Fundamentals of Computer Science’ in the first year of two degrees: Telecommunications Engineering, and Industrial Electronics. In this proposal, students acquire the theoretical knowledge at home through a MOOC platform, where they watch video lectures, take self-evaluation tests, and use other academic multimedia online resources. Afterwards, they attend in-class sessions where they carry out other activities in order to interact with teachers and the rest of the students (discussion of the videos, resolution of doubts, practical exercises, etc.), thereby trying to overcome the disadvantages of self-regulated learning. The results are obtained from the students’ grades and their assessment of the blended experience, based on an opinion survey conducted at the end of the course. 
The major findings of the study are the following: The percentage of students passing the subject has grown from 53% (average from 2011 to 2014 using traditional learning methodology) to 76% (average from 2015 to 2018 using blended methodology). The average grade has improved from 5.20±1.99 to 6.38±1.66. The results of the opinion survey indicate that most students preferred blended methodology to traditional approaches, and positively valued both courses. In fact, 69% of students felt ‘quite’ or ‘very’ satisfied with the classroom activities; 65% of students preferred the flipped classroom methodology to traditional in-class lectures, and finally, 79% said they were ‘quite’ or ‘very’ satisfied with the course in general. The main conclusions of the experience are the improvement in academic results, as well as the highly satisfactory assessments obtained in the opinion surveys. The results confirm the huge potential of combining MOOCs in formal undergraduate studies with on-campus learning activities. Nevertheless, the results in terms of students’ participation and follow-up have a wide margin for improvement. The method is highly demanding for both students and teachers. As a recommendation, students must perform the assigned tasks with perseverance, every week, in order to take advantage of the face-to-face classes. This perseverance is precisely what needs to be promoted among students because it clearly brings about an improvement in learning.

Keywords: blended learning, educational paradigm, flipped classroom, flipped learning technologies, lessons learned, massive open online course, MOOC, teacher roles through technology

Procedia PDF Downloads 180
274 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers; for several reasons, very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of the transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One of the promising computational techniques, the homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here with special emphasis on its application to transport problems of complex liquids. 
This work is, to our knowledge, the first to modify the heat conduction problem so that it leads to a polynomial velocity- and temperature-profile algorithm for the investigation of transport properties and their nonlinear behavior in NICDPLs. The aim of the proposed work is to implement a NEMD algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps will be developed between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and the position of the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from the earlier plasma λ0 values by 2%-20%, depending on Γ and κ. It is shown that the results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
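The Yukawa (screened Coulomb) interaction that defines these liquids can be sketched in a few lines. The following is a minimal illustration in reduced units, where the pair potential is φ(r) = exp(-κr)/r and κ is the screening parameter; it is not the authors' HNEMD code, and the particle configuration and κ values are arbitrary.

```python
import numpy as np

def yukawa_potential(r, kappa):
    """Reduced Yukawa pair potential between two dust grains."""
    return np.exp(-kappa * r) / r

def total_pair_energy(positions, kappa):
    """Sum the pair potential over all unique pairs of a small configuration."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += yukawa_potential(r, kappa)
    return energy

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, size=(8, 3))  # 8 grains in a 5x5x5 reduced-unit box

# Stronger screening (larger kappa) weakens the interaction:
e_weak_screening = total_pair_energy(pos, kappa=1.0)
e_strong_screening = total_pair_energy(pos, kappa=3.0)
```

In an actual HNEMD run, this interaction would be evaluated for thousands of particles per time step, with the thermostat and perturbing field applied on top of it.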

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 274
273 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is very difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and competitiveness maintained by developing and applying effective production plans. Among the major processes for tire manufacturing (mixing, component preparation, building, and curing), the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this stage. Each compound is a rubber blend whose particular characteristics play their own role in the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP), because various kinds of compounds have their own orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required may differ between operations due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this feature, called sequence-dependent setup time (SDST), is a very important issue in traditional scheduling problems such as the FJSSP. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, together with a method for generating a tire mixing schedule from a given particle. 
At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with the one obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As a performance measure, we define an error rate that evaluates the difference between the two solutions. The results show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes, such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
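The particle update step described above follows the standard PSO scheme. The sketch below shows that core loop on a toy continuous objective; in the paper, a particle would instead be decoded (via the proposed encoding scheme) into a compound sequence and machine allocation, and the objective would return the schedule's makespan. The coefficients w, c1, and c2 are conventional textbook values, not the authors'.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0):
    """Generic PSO core: inertia + cognitive + social velocity update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()      # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val               # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()  # update global best
    return gbest, pbest_val.min()

# Toy stand-in objective (in the paper: makespan of the decoded schedule).
best_x, best_val = pso_minimize(lambda p: np.sum(p ** 2), dim=3)
```

For the scheduling application, only the decoding of `x` and the objective `f` change; the velocity/position update loop stays the same.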

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 265
272 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology

Authors: Amarendar Reddy Addula

Abstract:

Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Typical applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a foundational medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI’s development, spurred by a confluence of factors, including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI’s versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic tabular data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. 
Frequently, the two are confused and conflated despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behavior, but they are different in scope and nature. The juridical analysis is grounded in a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI.

Keywords: artificial intelligence, ethics & human rights issues, laws, international laws

Procedia PDF Downloads 94
271 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents accounted for a major segment of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical surface drilling data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical methods (stacking and flattening) to filter out noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a frequency similar to that of the pre-determined signature. The matrix is then correlated, in real time, with the pre-determined stuck-pipe signature for the field. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time. 
Once this probability passes a fixed threshold defined by the user, the second component, cause analysis, alerts the user to the expected incident based on the set of pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This detection accuracy could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of stuck pipe problems requires a method that captures geological, geophysical and drilling data and recognizes the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach and its ability to produce such signatures and predict this NPT event.
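To make the correlation step concrete, here is a hedged sketch of the feature-relevance idea underlying CFS: rank candidate surface-drilling channels by the absolute Pearson correlation of each channel with the stuck-pipe class label. The channel names and data are synthetic illustrations, not field data or the authors' implementation, and full CFS additionally penalizes redundancy between the selected features.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
stuck = rng.integers(0, 2, n).astype(float)  # 0 = normal, 1 = stuck-pipe class

# Hypothetical surface channels: two carry signal, one is pure noise.
features = {
    "hook_load": stuck * 2.0 + rng.normal(0, 0.5, n),  # strongly informative
    "torque":    stuck * 1.0 + rng.normal(0, 1.0, n),  # weakly informative
    "pump_rate": rng.normal(0, 1.0, n),                # irrelevant
}

def pearson(a, b):
    """Pearson correlation coefficient of two 1-D samples."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Relevance score: |correlation with the class|, higher = more relevant.
relevance = {name: abs(pearson(x, stuck)) for name, x in features.items()}
ranked = sorted(relevance, key=relevance.get, reverse=True)
```

The ranking recovers the informative channels first, which is the behavior the real-time correlation relies on when matching live data against a pre-determined signature.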

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 227
270 To Examine Perceptions and Associations of Shock Food Labelling and to Assess the Impact on Consumer Behaviour: A Quasi-Experimental Approach

Authors: Amy Heaps, Amy Burns, Una McMahon-Beattie

Abstract:

Shock and fear tactics have been used to encourage consumer behaviour change within the UK regarding lifestyle choices such as smoking and alcohol abuse, yet such measures have not been applied to food labels to encourage healthier purchasing decisions. Obesity levels are continuing to rise within the UK, despite efforts made by government and charitable bodies to encourage consumer behavioural changes that would positively influence fat, salt, and sugar intake. We know that taking extreme measures to shock consumers into behavioural change has worked previously; for example, the anti-smoking television adverts and new standardised cigarette and tobacco packaging have reduced the number of UK adults who smoke and encouraged those currently trying to quit. The USA has also introduced new front-of-pack labelling, which is clear, easy to read, and includes concise health warnings on products high in fat, salt, or sugar. This model has been successful, with consumers reducing purchases of products carrying these warning labels. Therefore, investigating whether shock labels would have an impact on UK consumer behaviour and purchasing decisions would help to fill the gap within this research field. This study aims to develop an understanding of consumers’ initial responses to shock advertising, with an interest in the perceived long-term effect of shock advertising on consumer food purchasing decisions, behaviour, and attitudes, and achieves this through a mixed methodological approach with a sample of 25 participants ranging in age from 22 to 60. Within this research, mock shock labels were developed, including a graphic image, a health warning, and get-help information. These labels were made for products (available within the UK) with large market shares which were high in either fat, salt, or sugar. 
Online focus groups and mouse-tracking experiments were used to capture consumers’ initial and perceived longer-term responses to these labels. Preliminary results have shown that consumers believe the use of graphic images combined with a health warning would encourage consumer behaviour change and influence their purchasing decisions regarding products high in fat, salt and sugar. The preliminary main findings show that graphic mock shock labels may have an impact on consumer behaviour and purchasing decisions, which will, in turn, encourage healthier lifestyles. Focus group results show that 72% of participants indicated that these shock labels would have an impact on their purchasing decisions. During the mouse-tracking trials, this increased to 80% of participants, suggesting that greater exposure to shock labels may have a bigger impact on potential consumer behaviour and purchasing decision change. In conclusion, preliminary results indicate that graphic shock labels will impact consumer purchasing decisions, and the findings allow for a deeper understanding of initial emotional responses to these graphic labels. However, more research is needed to test the longevity of these labels’ effect on consumer purchasing decisions; this research exercise is demonstrably the foundation for future detailed work.

Keywords: consumer behavior, decision making, labelling legislation, purchasing decisions, shock advertising, shock labelling

Procedia PDF Downloads 67
269 Smart Services for Easy and Retrofittable Machine Data Collection

Authors: Till Gramberg, Erwin Gross, Christoph Birenbaum

Abstract:

This paper presents the approach of the Easy2IoT research project. Easy2IoT aims to enable companies in the sheet metal prefabrication and processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. It focuses on the development of physical hardware and software to easily capture machine activities from a sawing machine, benefiting various stakeholders in the SME value chain, including machine operators, tool manufacturers and service providers. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements and potential solutions for smart services are derived. The focus is on providing actionable recommendations, competencies and easy integration through no-/low-code applications to facilitate implementation and connectivity within production networks. At the core of the project is a novel, non-invasive measurement and analysis system that can be easily deployed and made IIoT-ready. This system collects machine data without interfering with the machines themselves, by non-invasively measuring the tension on a sawing machine. The collected data is then connected and analyzed using artificial intelligence (AI) to provide smart services through a platform-based application. Three smart services are being developed within Easy2IoT to provide immediate benefits to users. The first is wear-part and material condition monitoring with predictive maintenance for sawing processes: the non-invasive measurement system enables the monitoring of tool wear, such as saw blades, and of the quality of consumables and materials, so that service providers and machine operators can optimize maintenance and reduce downtime and material waste. The second optimizes Overall Equipment Effectiveness (OEE) by monitoring machine activity. 
The non-invasive system tracks machining times, setup times and downtime to identify opportunities for OEE improvement and reduce unplanned machine downtime. Estimate CO2 emissions for connected machines. CO2 emissions are calculated for the entire life of the machine and for individual production steps based on captured power consumption data. This information supports energy management and product development decisions. The key to Easy2IoT is its modular and easy-to-use design. The non-invasive measurement system is universally applicable and does not require specialized knowledge to install. The platform application allows easy integration of various smart services and provides a self-service portal for activation and management. Innovative business models will also be developed to promote the sustainable use of the collected machine activity data. The project addresses the digitalization gap between large enterprises and SME. Easy2IoT provides SME with a concrete toolkit for IIoT adoption, facilitating the digital transformation of smaller companies, e.g. through retrofitting of existing machines.
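The OEE and CO2 services described above reduce to simple arithmetic over the captured activity and power data. A minimal sketch, with hypothetical function and parameter names and an illustrative grid emission factor (the abstract does not specify one):

```python
def availability(machining_h: float, setup_h: float, downtime_h: float) -> float:
    """Availability component of OEE: share of planned production time
    the saw was actually cutting."""
    planned_h = machining_h + setup_h + downtime_h
    return machining_h / planned_h

def co2_kg(energy_kwh: float, grid_factor_kg_per_kwh: float = 0.4) -> float:
    """CO2 estimate from captured power consumption.
    The 0.4 kg/kWh emission factor is an illustrative assumption,
    not a value from the project."""
    return energy_kwh * grid_factor_kg_per_kwh
```

For example, a shift with 6 h machining, 1 h setup, and 1 h downtime yields an availability of 0.75; a full OEE figure would additionally multiply in performance and quality rates, which require cycle-time and scrap data beyond what activity tracking alone provides.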

Keywords: smart services, IIoT, IIoT-platform, Industrie 4.0, big data

Procedia PDF Downloads 73
268 The Recorded Interaction Task: A Validation Study of a New Observational Tool to Assess Mother-Infant Bonding

Authors: Hannah Edwards, Femke T. A. Buisman-Pijlman, Adrian Esterman, Craig Phillips, Sandra Orgeig, Andrea Gordon

Abstract:

Mother-infant bonding is a term which refers to the early emotional connectedness between a mother and her infant. Strong mother-infant bonding promotes higher-quality mother and infant interactions, including prolonged breastfeeding, secure attachment, and increased parental sensitivity and maternal responsiveness. Strengthening of all such interactions leads to improved social behavior and emotional and cognitive development throughout childhood, adolescence, and adulthood. The positive outcomes observed following strong mother-infant bonding emphasize the need to screen new mothers for disrupted mother-infant bonding, and in turn the need for a robust, valid tool to assess mother-infant bonding. A recent scoping review conducted by the research team identified four tools to assess mother-infant bonding, all of which employed self-rating scales. Thus, whilst these tools demonstrated both adequate validity and reliability, they rely on self-reported information from the mother. As such, they may reflect a mother's perception of bonding with her infant, rather than her actual behavior. Therefore, a new tool to assess mother-infant bonding has been developed. The Recorded Interaction Task (RIT) addresses shortcomings of previous tools by employing observational methods to assess bonding. The RIT focuses on a common interaction between mother and infant, changing a nappy, at the target age of 2-6 months, which is visually recorded and later assessed. Thirteen maternal and seven infant behaviors are scored on the RIT Observation Scoring Sheet, and a final combined score of mother-infant bonding is determined. The aim of the current study was to assess the content validity and inter-rater reliability of the RIT. A panel of six experts with specialized expertise in bonding and infant behavior were consulted. Experts were provided with the RIT Observation Scoring Sheet, a visual recording of a nappy change interaction, and a feedback form.
Experts scored the mother and infant interaction on the RIT Observation Scoring Sheet and completed the feedback form, which collected their opinions on the validity of each item on the RIT Observation Scoring Sheet and of the RIT as a whole. Twelve of the 20 items on the RIT Observation Scoring Sheet were scored ‘Valid’ by all (n=6) or most (n=5) experts. Two items received a ‘Not valid’ score from one expert. The remainder of the items received a mixture of ‘Valid’ and ‘Potentially valid’ scores. A few changes were made to the RIT Observation Scoring Sheet following expert feedback, including rewording of items for clarity and the exclusion of an item focusing on behavior deemed not relevant for the target infant age. The overall ICC for single-rater absolute agreement was 0.48 (95% CI 0.28 – 0.71). Experts’ (n=6) ratings were less consistent for infant behavior (ICC 0.27 (-0.01 – 0.82)) than for mother behavior (ICC 0.55 (0.28 – 0.80)). Whilst previous tools employ self-report methods to assess mother-infant bonding, the RIT utilizes observational methods. The current study highlights adequate content validity and moderate inter-rater reliability of the RIT, supporting its use in future research. A convergent validity study comparing the RIT against an existing tool is currently being undertaken to confirm these results.
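The reported reliability figure corresponds to ICC(2,1), the two-way random-effects, absolute-agreement, single-rater intraclass correlation. A minimal sketch of that computation from an items-by-raters score matrix (variable names are illustrative; this is not the study's actual analysis code):

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_items, k_raters) matrix of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between items
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # mean square, rows (items)
    msc = ss_cols / (k - 1)             # mean square, columns (raters)
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement across raters yields 1.0, while values near 0.5, like the overall 0.48 reported here, are conventionally read as moderate reliability.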

Keywords: content validity, inter-rater reliability, mother-infant bonding, observational tool, recorded interaction task

Procedia PDF Downloads 180