Search results for: new method (AGM)
424 Language Skills in the Emergent Literacy of Spanish-Speaking Children with Autism Spectrum Disorders
Authors: Adriana Salgado, Sandra Castaneda, Ivan Perez
Abstract:
Learning to read and write is a complex process involving several cognitive skills as well as contextual and cultural environments. This development rests on linguistic skills, such as the ability to name and understand vocabulary, retell a story, phonological awareness, and letter knowledge, among others. In children with autism spectrum disorder (ASD), one of the main concerns is related to language disorders. Nevertheless, most children with ASD are able to decode written information but have difficulties in reading comprehension. Research on these processes in the Spanish-speaking population is limited. However, the increasing prevalence of this diagnosis (1 in 115 children) in Mexico has implications at different levels. Educational research on children with ASD, including emergent literacy, is therefore an important area of interest. Reading and writing expand the possibilities of access to academic, cultural, and social information. Taking this into account, the objective of this research was to identify the relationship between language skills, alphabet knowledge, phonological awareness, and early reading and writing in Spanish-speaking children with ASD. The method used for this research was based on tasks that were selected, adapted, and in some cases designed to measure initial reading and writing, as well as language skills (naming, receptive vocabulary, and narrative skills), phonological awareness (similar phonological word pairs, beginning sound awareness, and spelling), and letter knowledge, in a sample of 45 children (38 boys and 7 girls) with a prior diagnosis of ASD. Descriptive analyses, bivariate correlations, cluster analysis, and canonical correspondence analysis were applied to the data. Results showed that variability was large; however, it was possible to characterize the sample into low, medium, and high score groups according to the children's performance. 
The low score group (46.7% of the sample) had null or deficient performance in language skills and phonological awareness; some could identify up to five letters of the alphabet, and they showed no early reading skills, although they could scribble. The middle score group was characterized by highly variable performance across tasks, with better language skills in receptive and naming vocabulary and some narrative, letter knowledge, and phonological awareness (beginning sound awareness) skills. The high score group (24.4% of the sample) had the best performance in language skills relative to the sample, as well as in the rest of the measured skills. Finally, scores were canonically correlated between naming, receptive vocabulary, narrative, phonological awareness, letter knowledge, and initial learning of reading and writing skills for the high score group, and between letter knowledge, naming, and receptive vocabulary for the low score group, which is consistent with previous research in typically developing children and children with ASD. In conclusion, the data obtained are consistent with previous studies. Despite large variability, it was possible to identify performance profiles and relations based on linguistic, phonological awareness, and letter knowledge skills. These skills were predictor variables of the initial development of reading and writing. These findings have implications for the future development of programs and strategies that may benefit the acquisition of reading and writing in children with ASD.
Keywords: autism, autism spectrum disorders, early literacy, emergent literacy
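The kind of grouping analysis described above can be sketched in a few lines of Python. This is a minimal illustration on synthetic scores, not the study's data or instruments: k-means clustering of per-child skill scores into low, medium, and high profiles, with group sizes chosen only to mirror the reported proportions.

```python
import numpy as np

rng = np.random.default_rng(0)
# 45 synthetic children x 4 skill scores (e.g., naming, receptive vocabulary,
# phonological awareness, letter knowledge); three well-separated profiles.
centers = np.array([[5.0, 5, 5, 5], [50, 45, 40, 50], [90, 85, 80, 95]])
sizes = [21, 13, 11]  # ~46.7% / 28.9% / 24.4% of n = 45, as in the abstract
X = np.vstack([c + rng.normal(0, 5, size=(n, 4)) for c, n in zip(centers, sizes)])

def kmeans(X, init, iters=25):
    """Plain k-means; init holds the index of one starting point per cluster."""
    C = X[init].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[labels == j].mean(0) for j in range(len(C))])
    return labels

# Seed one centroid inside each hypothesized profile for determinism
labels = kmeans(X, init=[0, 21, 34])
sizes_found = sorted(np.bincount(labels).tolist())
print(sizes_found)  # sorted cluster sizes
```

With well-separated score profiles, the clustering recovers the three groups; in real data the boundaries would be fuzzier, which is consistent with the "large variability" the authors report.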
Procedia PDF Downloads 144
423 The Use of Telecare in the Re-design of Overnight Supports for People with Learning Disabilities: Implementing a Cluster-based Approach in North Ayrshire
Authors: Carly Nesvat, Dominic Jarrett, Colin Thomson, Wilma Coltart, Thelma Bowers, Jan Thomson
Abstract:
Introduction: Within Scotland, the Same As You strategy committed to moving people with learning disabilities out of long-stay hospital accommodation into homes in the community. Much of the focus of this movement was on placing people within individual homes. To achieve this, potentially excessive supports were put in place, which created dependence and carried significant ongoing cost, primarily for local authorities. The greater focus on empowerment and community participation evident in more recent learning disability strategy, along with the financial pressures being experienced across the public sector, created an imperative to re-examine that provision, particularly in relation to the use of expensive sleepover supports for individuals and the potential for these to be appropriately scaled back through the use of telecare. Method: As part of a broader programme of redesigning overnight supports within North Ayrshire, a cluster of individuals living in close proximity was identified who were in receipt of overnight supports but were judged to have the capacity to benefit from their removal. In their place, a responder service was established (an individual staying overnight in a nearby service user’s home), and a variety of telecare solutions were placed within individuals’ homes. Active and passive technology was connected to an Alarm Receiving Centre, which would alert the local responder service when necessary. Individuals and their families were prepared for the change and continued to be informed about progress with the pilot. Results: 4 individuals, 2 of whom shared a tenancy, had their sleepover supports removed as part of the pilot. Extensive data collection in relation to alarm activation was combined with feedback from the 4 individuals, their families, and the staff involved in their support. Varying perspectives emerged within the feedback. 
3 of the individuals were clearly described as benefitting from the change and the greater sense of independence it brought, while more concerns were evident in relation to the fourth. Some family members expressed a need for greater preparation in relation to the change and for ongoing information provision. Some support staff also expressed a need for more information to help them understand the new support arrangements for an individual, as well as noting concerns about the outcomes for one participant. Conclusion: Developing a telecare response for a cluster of individuals was facilitated by them all being supported by the same care provider. The number of similar clusters of individuals identified within North Ayrshire is limited. Developing other redesign solutions, such as a response service, will potentially require greater collaboration between different providers of home support, as well as continued exploration of the full range of telecare, including digital options. The pilot has highlighted the need for effective preparatory and ongoing engagement with staff and families, as well as the challenges which can accompany making changes to long-standing packages of support.
Keywords: challenges, change, engagement, telecare
Procedia PDF Downloads 177
422 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to high PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, convolutional neural networks (CNNs) were adopted, these currently being among the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was based mostly on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10 levels, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. 
Due to the specificity of CNN-type networks, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the PM10 concentration prediction for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real ‘big’ data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
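The tensor flow through such a network can be illustrated with a minimal numpy forward pass. The layer sizes, feature set, and random weights below are assumptions for illustration, not the authors' architecture; the point is the shape transformation from 24 hours of multi-sensor input to a 24-element hourly PM10 forecast vector.

```python
import numpy as np

rng = np.random.default_rng(1)
T, F = 24, 6  # 24 past hours; 6 illustrative features (PM2.5, PM10, temp, wind, 2 forecasts)
x = rng.normal(size=(T, F))

def conv1d(x, W, b):
    """Valid 1-D convolution along time; W has shape (kernel, F_in, F_out)."""
    k = W.shape[0]
    out = np.stack([np.tensordot(x[t:t + k], W, axes=([0, 1], [0, 1]))
                    for t in range(len(x) - k + 1)])
    return np.maximum(out + b, 0)  # ReLU activation

W1, b1 = rng.normal(size=(3, F, 16)) * 0.1, np.zeros(16)
h = conv1d(x, W1, b1)             # (22, 16): 22 time positions, 16 channels
h = h.reshape(11, 2, 16).max(1)   # max-pool with window 2 -> (11, 16)
Wd = rng.normal(size=(11 * 16, 24)) * 0.1
forecast = h.reshape(-1) @ Wd     # dense layer -> 24 hourly PM10 predictions
print(forecast.shape)
```

A trained model would learn W1 and Wd from the sensor history; here they are random, so only the shape flow (input tensor, convolution, pooling, 24-element output) matches the description above.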
Procedia PDF Downloads 149
421 Development of an Improved Paradigm for the Tourism Sector in the Department of Huila, Colombia: A Theoretical and Empirical Approach
Authors: Laura N. Bolivar T.
Abstract:
The importance of tourism for regional development is mainly highlighted by the collaborative, cooperative, and competitive relationships of the agents involved. The fostering of associativity processes, and in particular the cluster approach, emphasizes the beneficial outcomes of the concentration of enterprises, where innovation and entrepreneurship flourish and shape the dynamics for tourism empowerment. The department of Huila is located in the south-west of Colombia and holds the biggest coffee production in the country, although it barely contributes to the national GDP. Hence, its economic development strategy is looking for more dynamism, and Huila could be consolidated as a leading destination for cultural, ecological, and heritage tourism if, at the least, the public policy-making processes for the tourism management of La Tatacoa Desert, San Agustin Park, and Bambuco’s National Festival were implemented in a more efficient manner. Accordingly, this study attempts to address the potential restrictions and beneficial factors for the consolidation of the tourism sector of Huila, Colombia as a cluster, and how this could impact its regional development. Therefore, a set of theoretical frameworks, such as the Tourism Routes Approach, the Tourism Breeding Environment, and the Community-based Tourism Method, among others, together with a collection of international experiences describing tourism clustering processes and their most salient problems, is analyzed to draw out learning points, structures of proceedings, and success-driven factors to be contrasted with the local characteristics of Huila, the region under study. 
This characterization involves primary and secondary information collection methods and covers the South American and Colombian contexts, together with the identification of the actors involved and their roles, the main interactions among them, the major tourism products and their infrastructure, the visitors’ perspective on the situation, and a recap of the related needs and benefits regarding the host community. Considering the umbrella concepts, the theoretical and empirical approaches, and their comparison with the local specificities of the tourism sector in Huila, an array of shortcomings is analytically constructed, and a series of guidelines is proposed as a way to overcome them and, simultaneously, raise economic development and positively impact Huila’s well-being. This non-exhaustive bundle of guidelines is focused on fostering cooperative linkages in the actors’ network, dealing with innovations in Information and Communication Technologies, reinforcing the supporting infrastructure, promoting the destinations while also considering the less known places, designing an information system enabling the tourism network to assess the situation based on reliable data, increasing competitiveness, developing participative public policy-making processes, and empowering the host community with regard to its touristic richness. Accordingly, cluster dynamics would drive the tourism sector towards articulation and joint effort, so that the agents involved and local particularities would be adequately assisted to cope with the current changing environment of globalization and competition.
Keywords: innovative strategy, local development, network of tourism actors, tourism cluster
Procedia PDF Downloads 141
420 A Genetic Identification of Candida Species Causing Intravenous Catheter-Associated Candidemia in Heart Failure Patients
Authors: Seyed Reza Aghili, Tahereh Shokohi, Shirin Sadat Hashemi Fesharaki, Mohammad Ali Boroumand, Bahar Salmanian
Abstract:
Introduction: Intravenous catheter-associated fungal infections remain a serious nosocomial problem among hospitalized patients, decreasing quality of life and adding healthcare costs. The role of catheters in the spread of candidemia in heart failure patients is well recognized. The aim of this study was to evaluate the prevalence and genetic identification of Candida species in patients with heart disorders. Material and Methods: This study was conducted in the Tehran Hospital of Cardiology Center (Tehran, Iran, 2014) over 1.5 years on patients hospitalized for at least 7 days who had a central or peripheral venous catheter. Cultures of catheters, blood, and skin at the catheter insertion site were used to detect Candida colonies in 223 patients. Identification of Candida species was made on the basis of a combination of various phenotypic methods and confirmed by sequencing the ITS1-5.8S-ITS2 region amplified from genomic DNA using PCR and the NCBI BLAST. Results: Of the 223 patients tested, we identified a total of 15 Candida isolates, obtained from 9 (4.04%) catheter cultures, 3 (1.35%) blood cultures, and 2 (0.90%) skin cultures of the catheter insertion areas. On the basis of ITS region sequencing, of the nine Candida isolates from catheters, 5 (55.6%) were C. albicans, 2 (22.2%) C. glabrata, 1 (11.1%) C. membranifaciens, and 1 (11.1%) C. tropicalis. Among the three Candida isolates from blood cultures, C. tropicalis, C. carpophila, and C. membranifaciens were identified. A non-Candida yeast isolated from one blood culture was Cryptococcus albidus. One case of C. glabrata and one case of C. albicans were isolated from skin cultures of the catheter insertion areas in patients with positive catheter cultures. In these patients, sequencing of the rDNA ITS region showed similarity between the Candida isolated from the skin and from the catheter. However, the blood samples of these patients were negative for fungal growth. 
We report two cases of catheter-related candidemia caused by C. membranifaciens and C. tropicalis, on the basis of the genetic similarity of the species isolated from blood and catheter, which were treated successfully with intravenous fluconazole and catheter removal. Using phenotypic identification methods, we could identify only C. albicans and C. tropicalis; the other yeast isolates were reported as Candida sp. Discussion: Although more than 200 species of Candida have been identified, only a few cause disease in humans. There is some evidence that non-albicans infections are increasing. Many risk factors, including prior antibiotic therapy, use of a central venous catheter, surgery, and parenteral nutrition, are considered to be associated with candidemia in hospitalized heart failure patients. Identifying the route of infection in candidemia is difficult. Non-albicans Candida species as causes of candidemia are increasing dramatically. Using conventional methods, many non-albicans isolates remain unidentified. Therefore, more sensitive and specific molecular genetic sequencing is essential to clarify the epidemiology of infections by less familiar Candida species. The positive blood and catheter cultures for Candida isolates and the high sequence similarity of their rDNA ITS regions in these two patients confirmed the diagnosis of intravenous catheter-associated candidemia.
Keywords: catheter-associated infections, heart failure patient, molecular genetic sequencing, ITS region of rDNA, candidemia
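The similarity comparison described above can be illustrated with a toy percent-identity calculation between two invented, pre-aligned ITS fragments standing in for a blood isolate and a catheter isolate. The real identification used PCR amplification of ITS1-5.8S-ITS2 and NCBI BLAST; the sequences below are synthetic and only show the idea of sequence identity over an aligned region.

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity of two pre-aligned, equal-length sequences."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    matches = sum(1 for x, y in zip(a, b) if x == y and x != "-")
    return 100.0 * matches / len(a)

# Invented 52-base fragments differing at a single position
blood_its    = "TTGAACGCACATTGCGCCCTCTGGTATTCCGGAGGGCATGCCTGTTTGAGCG"
catheter_its = "TTGAACGCACATTGCGCCCTCTGGTATTCCGGAGGGCATGCCTGTTCGAGCG"

identity = percent_identity(blood_its, catheter_its)
print(round(identity, 1))
```

In practice, near-identical ITS sequences (typically above a high identity threshold against reference databases) support the conclusion that the skin/catheter and blood isolates represent the same species.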
Procedia PDF Downloads 332
419 The Effect of Photochemical Smog on Respiratory Health Patients in Abuja Nigeria
Authors: Christabel Ihedike, John Mooney, Monica Price
Abstract:
Summary: This study aims to critically evaluate the effect of photochemical smog on respiratory health in Nigeria. A cohort of chronic obstructive pulmonary disease (COPD) patients was recruited from two large hospitals in Abuja, Nigeria. Respiratory health questionnaires, daily diaries, a dyspnoea scale, and lung function measurement were used to obtain health data and investigate the relationship with air quality data (principally ozone, NOx, and particulate pollution). Concentrations of air pollutants were higher than WHO and Nigerian air quality standards. The results suggest a correlation between measured air quality and exacerbation of respiratory illness. Introduction: Photochemical smog is a significant health challenge in most cities, and its effect on respiratory health is well acknowledged. This type of pollution is most harmful to the elderly, children, and those with underlying respiratory disease. This study aims to investigate the impact of increasing temperature and photochemically generated secondary air pollutants on respiratory health in Abuja, Nigeria. Method and Result: Health data were collected using spirometry to measure lung function on routine attendance at the clinic, daily diaries kept by patients, and information obtained using a respiratory questionnaire. Questionnaire responses (obtained using an adapted and internally validated version of St George’s Hospital Respiratory Questionnaire) showed that time of wheeze was associated with participants’ activities: 30% had worse wheeze in the morning; 10% could not shop, 15% took a long time to get washed, 25% walked more slowly, 15% had to stop when hurrying, and 5% could not bathe. There was also a decrease in forced expiratory volume in the first second and forced vital capacity, and the daily change between afternoon and morning measurements may be associated with pollutant concentration levels. On the dyspnoea scale, 60% of patients were at grade 3, 25% at grade 2, and 15% at grade 1. 
The daily proportion of patients in the cohort who coughed or brought up sputum was 78%. Air pollution in the city is higher than Nigerian and WHO standards, with NOx and PM10 concentrations of 693.59 µg/m³ and 748 µg/m³ measured, respectively. The results show that air pollution may increase the occurrence and exacerbation of respiratory disease. Conclusion: High temperature and local climatic conditions in urban Nigeria encourage the formation of ozone, the major constituent of photochemical smog, resulting also in the formation of secondary air pollutants associated with health challenges. In this study, we confirm the likely potency of this pattern of secondary air pollution in exacerbating COPD symptoms in a vulnerable patient group in urban Nigeria. There is a need for better regulation and measures to reduce ozone, particularly when local climatic conditions favour the development of photochemical smog in such settings. Climate change and likely increasing temperatures add impetus and urgency for better air quality standards and measures (traffic restrictions and emissions standards) in developing-world settings such as Nigeria.
Keywords: Abuja-Nigeria, effect, photochemical smog, respiratory health
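The association examined above, between daily pollutant levels and symptom frequency, can be sketched as a simple correlation. All values below are synthetic and invented; the study's actual analysis related questionnaire, diary, and spirometry data to measured pollution.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 60
# Synthetic daily PM10 levels (ug/m3) in a range consistent with the abstract
pm10 = rng.uniform(200, 800, size=days)
# Synthetic daily count of cohort patients reporting cough/sputum:
# rises with PM10, plus noise
symptoms = 20 + 0.04 * pm10 + rng.normal(0, 3, size=days)

# Pearson correlation between daily PM10 and daily symptom counts
r = np.corrcoef(pm10, symptoms)[0, 1]
print(r > 0)  # positive association in this synthetic example
```

A time-series analysis of exacerbations would normally also adjust for temperature, day-of-week, and lagged exposure, which this sketch omits.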
Procedia PDF Downloads 224
418 The Senior Traveler Market as a Competitive Advantage for the Luxury Hotel Sector in the UK Post-Pandemic
Authors: Feyi Olorunshola
Abstract:
Over the last few years, the senior travel market has been noted for its potential in the wider tourism industry. The tourism sector includes hotel and hospitality, travel, transportation, and several other subdivisions that make it economically viable. In particular, the hotel sector attracts a substantial part of tourism expenditure: when people plan to travel, suitable accommodation for relaxation, dining, entertainment, and so on is paramount to their decision-making. The global retail value of the hotel sector as of 2018 was significant for tourism. Despite the hotel sector's importance to the tourism industry at large, however, very few empirical studies are available to establish how this sector can leverage the senior demographic to achieve competitive advantage. Predominantly, studies on the mature market have focused on destination tourism, with limited investigation of the hotel sector, which makes a significant contribution to tourism. Also, although several scholarly studies have demonstrated the importance of the senior travel market to hotels, there is very little empirical research exploring the driving factors that will become the accepted new normal for this niche segment post-pandemic. Given that the hotel sector already operates in a highly saturated business environment, and that on top of this pre-existing challenge the ongoing global health outbreak has further put the sector in a vulnerable position, the hotel sector, especially the full-service luxury category, must evolve rapidly if it is to survive in the current business environment. Hotels can no longer rely on corporate travelers to generate higher revenue: since the unprecedented onset of the pandemic in 2020, many organizations have adopted a different approach of conducting their business online, so hotels need to anticipate a significant drop in business travellers. 
However, the rooms and the rest of the facilities must be occupied to keep the business operating. The way forward for hotels lies in the leisure sector, and the question now is which demographics of travelers to focus on; in this case, seniors, who have repeatedly been recognized as a lucrative market because of increased discretionary income, availability of time, and global population trends. To achieve the study objectives, a mixed-method approach will be utilized, drawing on both qualitative (netnography) and quantitative (survey) methods, cognitive and decision-making theories (means-end chain), and competitive theories to identify the salient drivers explaining senior hotel choice and its influence on their decision-making. The target population is repeat senior travelers aged 65 years and over who are UK residents or from the top tourist markets to the UK (USA, Germany, and France). Structural equation modelling will be employed to analyze the datasets. The theoretical implication is the development of new concepts using a robust research design, as well as the advancement of existing frameworks in hotel studies. Practically, the study will provide hotel management with up-to-date information to design competitive marketing strategies and activities targeting the mature market post-pandemic and over the long term.
Keywords: competitive advantage, covid-19, full-service hotel, five-star, luxury hotels
Procedia PDF Downloads 122
417 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions. Many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. To circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). 
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, far fewer high-fidelity results are typically needed to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and the bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO levels.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
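The Δ-learning idea the abstract builds on, predicting high fidelity as low fidelity plus a learned correction, can be sketched with a linear model on synthetic data. This is not the authors' GCN with semi-supervised learning; it is a deliberately simple stand-in showing why the correction map is easier to learn than the high-fidelity quantity itself. The synthetic "low" and "high" values stand in for, e.g., DFT versus coupled-cluster outputs.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 5
X = rng.normal(size=(n, d))                 # synthetic molecular descriptors
y_low = X @ rng.normal(size=d)              # cheap, systematically biased estimate
# High fidelity = low fidelity + smooth correction + small noise
y_high = y_low + 0.3 * np.sin(X[:, 0]) + 0.05 * rng.normal(size=n)

# Fit the correction (high minus low) from descriptors plus the low-fidelity
# value; the sin feature makes the toy correction exactly learnable here.
Phi = np.column_stack([X, np.sin(X[:, :1]), y_low, np.ones(n)])
w, *_ = np.linalg.lstsq(Phi[:150], (y_high - y_low)[:150], rcond=None)

# Predict on held-out points: low fidelity + learned correction
pred = y_low[150:] + Phi[150:] @ w
rmse_delta = np.sqrt(np.mean((pred - y_high[150:]) ** 2))
rmse_low = np.sqrt(np.mean((y_low[150:] - y_high[150:]) ** 2))
print(rmse_delta < rmse_low)  # correction model beats raw low fidelity
```

The same structure carries over to the paper's approach: replace the linear map with a GCN over the molecular graph, and allow unequal numbers of low- and high-fidelity points via the semi-supervised component.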
Procedia PDF Downloads 41
416 Endometrial Biopsy Curettage vs Endometrial Aspiration: Better Modality in Female Genital Tuberculosis
Authors: Rupali Bhatia, Deepthi Nair, Geetika Khanna, Seema Singhal
Abstract:
Introduction: Genital tract tuberculosis is a chronic disease (caused by reactivation of organisms from systemic distribution of Mycobacterium tuberculosis) that often presents with low-grade symptoms and non-specific complaints. Patients with genital tuberculosis are usually young women seeking workup and treatment for infertility. Infertility is the commonest presentation, due to involvement of the fallopian tubes and endometrium and to ovarian damage with poor ovarian volume and reserve. The diagnosis of genital tuberculosis is difficult because it is a silent invader of the genital tract. Since tissue cannot be obtained from the fallopian tubes, the diagnosis is made by isolation of bacilli from endometrial tissue obtained by endometrial biopsy curettage and/or aspiration. Problems are associated with both the sampling technique and the diagnostic modality, owing to the lack of adequate sample volumes and the segregation of the sample across various diagnostic tests, resulting in a non-uniform distribution of microorganisms. Moreover, the lack of an efficient sampling technique universally applicable to all specific diagnostic tests contributes to the diagnostic challenges. Endometrial sampling plays a key role in the accurate diagnosis of female genital tuberculosis. It may be done by 2 methods, viz. endometrial curettage and endometrial aspiration. Both have their own limitations: curettage picks up a strip of the endometrium from one of the walls of the uterine cavity, including the tubal ostial areas, whereas aspiration obtains total tissue with the exfoliated cells present in the secretory fluid of the endometrial cavity. Further, the sparse and uneven distribution of the bacilli remains a major factor contributing to the limitations of both techniques. The sample obtained by either technique is subjected to histopathological examination, AFB staining, culture, and PCR. Aim: Comparison of the sampling techniques, viz. 
endometrial biopsy curettage and endometrial aspiration, using the different laboratory methods of histopathology, cytology, microbiology, and molecular biology. Method: In a hospital-based observational study, 75 Indian females suspected of genital tuberculosis were selected on the basis of inclusion criteria. The women underwent endometrial tissue sampling using a Novak's biopsy curette and a Karman's cannula. One part of each specimen was sent in formalin solution for histopathological testing, and another part was sent in normal saline for acid-fast bacilli smear, culture, and polymerase chain reaction. The results obtained were correlated using the coefficient of correlation and the chi-square test. Result: Concordance of results showed moderate agreement between the two sampling techniques. Among HPE, AFB, and PCR, the maximum sensitivity was observed for PCR, though its specificity was not as high as that of the other techniques. Conclusion: Statistically, no significant difference was observed between the results obtained by the two sampling techniques. Therefore, one may use either EA or EB to obtain endometrial samples and avoid multiple sampling, as both techniques are equally efficient in diagnosing genital tuberculosis by HPE, AFB staining, culture, or PCR.
Keywords: acid fast bacilli (AFB), histopathology examination (HPE), polymerase chain reaction (PCR), endometrial biopsy curettage
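The agreement analysis named above can be illustrated with a small numpy sketch: a chi-square statistic on a 2x2 table of positive/negative results from the two sampling techniques, plus Cohen's kappa, a common agreement measure whose "moderate" band matches the wording of the abstract. The counts below are invented for illustration, and the abstract itself only names the coefficient of correlation and the chi-square test, so kappa here is an assumption.

```python
import numpy as np

# Invented 2x2 concordance table:
# rows: endometrial biopsy curettage (pos/neg); cols: aspiration (pos/neg)
table = np.array([[10.0, 7.0],
                  [6.0, 52.0]])
n = table.sum()

# Chi-square statistic from observed vs expected cell counts
expected = np.outer(table.sum(1), table.sum(0)) / n
chi2 = ((table - expected) ** 2 / expected).sum()

# Cohen's kappa: agreement beyond chance
p_obs = np.trace(table) / n        # observed agreement
p_exp = np.trace(expected) / n     # chance agreement
kappa = (p_obs - p_exp) / (1 - p_exp)
print(round(kappa, 2))  # 0.41-0.60 is "moderate agreement" by the Landis-Koch benchmarks
```

With the study's actual counts, the same computation would quantify the "moderate agreement" reported between the two techniques.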
Procedia PDF Downloads 326

415 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy
Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather
Abstract:
Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast agents. A major challenge in applying optical microscopy to in vivo tissue imaging is light attenuation, which limits the penetration depth and the achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light illuminates the sample from the side to excite fluorophores within the sample of interest. Images are formed by detecting fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition, the low photon dose delivered to samples, and the optical sectioning it provides makes SPIM an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM have relied on fluorescence reporters, whether endogenous or exogenous. This approach has the disadvantage that, in the case of exogenous probes, the specimens are altered from their native state, rendering them unsuitable for in vivo studies, and, in general, fluorescence emission is weak and transient. Here we present, to our knowledge for the first time, a label-free implementation of SPIM that has downstream applications in the clinical setting. The experimental setup used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and light scattered from intrinsic sources of optical contrast in the sample being studied.
This work first involved calibration of the imaging system and validation of the label-free method with well-characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel at varying cell concentrations were then imaged. A time-course study to track cell proliferation in the 3D construct was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection, and 3D maps were reconstructed. The results validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and in native tissue. This technique has potential use in a near-patient environment, as it can provide results quickly, is easy to use, and offers more information with improved spatial resolution and depth penetration than current approaches.
Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging
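The volume-building step described above, in which an image stack acquired while scanning the sample along the detection axis becomes a 3D map, can be sketched with toy data. The tiny synthetic slices below are purely illustrative, not SPIM output:

```python
# Minimal sketch (hypothetical data): stacking 2D images acquired at
# successive scan positions along the detection axis into a 3D volume.
def reconstruct_volume(slices):
    """slices: list of 2D images (lists of rows), one per scan position.
    Returns the 3D volume indexed as volume[z][y][x]."""
    height, width = len(slices[0]), len(slices[0][0])
    for s in slices:                      # sanity check: uniform slice shape
        assert len(s) == height and all(len(row) == width for row in s)
    return slices                         # the ordered stack IS the volume

def depth_profile(volume, y, x):
    """Scattered-light intensity along the detection (z) axis at pixel (y, x)."""
    return [plane[y][x] for plane in volume]

# Three synthetic 4x4 slices with a bright feature in the middle plane
stack = [[[0] * 4 for _ in range(4)] for _ in range(3)]
stack[1][2][2] = 255
vol = reconstruct_volume(stack)
print(depth_profile(vol, 2, 2))   # [0, 255, 0]
```

In practice the slices would be camera frames and the volume would be held in an array library, but the indexing logic is the same.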
Procedia PDF Downloads 249

414 Cross-Cultural Conflict Management in Transnational Business Relationships: A Qualitative Study with Top Executives in Chinese, German and Middle Eastern Cases
Authors: Sandra Hartl, Meena Chavan
Abstract:
This paper presents the outcome of four-year Ph.D. research on cross-cultural conflict management in transnational business relationships. It investigates the important and complex problem of managing conflicts that arise across cultures in business relationships and identifies conflict resolution strategies. The paper focuses particularly on transnational relationships within a Chinese, German and Middle Eastern framework. Unlike many papers on this issue, which are built on experiments with international MBA students, this research provides real-life cases of cross-cultural conflicts, which are not easy to capture. Its uniqueness lies in real case data gathered by interviewing top executives in management positions at large multinational corporations, using a qualitative case-study approach. The paper makes a valuable contribution to the theory of cross-cultural conflicts and, despite the sensitivity of the topic, presents real-life business data about breaches of contract between counterparties engaged in transnational operating organizations. The overarching aim of this research is to identify the degree of significance of the cultural and communication factors embedded in cross-cultural business conflicts. It asks, from a cultural perspective, what factors lead to the conflicts in each of the cases, what their causes are, and what the role of culture is in identifying effective strategies for resolving international disputes in an increasingly globalized business world. The results of 20 face-to-face interviews are outlined; these were conducted, recorded, transcribed and then analyzed using the NVivo qualitative data analysis system. The outcomes make evident that the factors leading to conflicts fall broadly under seven themes: communication, cultural difference, environmental issues, work structures, knowledge and skills, cultural anxiety and personal characteristics.
Evaluation of the causes of the conflicts shows that they are multidimensional. Irrespective of the conflict type (relationship-based, task-based, or due to individual personal differences), relationships are almost always an element of the conflict. Cultural differences, a critical factor in conflicts, result from different cultures placing different levels of importance on relationships. Communication issues, another cause of conflict, also reflect the different relationship styles favored by different cultures. In identifying effective strategies for solving cross-cultural business conflicts, this research finds that solutions need to consider the national cultures (country-specific characteristics), organizational cultures and individual cultures of the persons engaged in the conflict, and how these are interlinked. The outcomes identify practical dispute resolution strategies for cross-cultural business conflicts: communication, empathy, and training to improve cultural understanding and cultural competence, together with the use of mediation. To conclude, the findings of this research not only add value to academic knowledge of cross-cultural conflict management in transnational businesses but also add value to numerous cross-border business relationships worldwide. Above all, the research identifies the influence of culture, communication and cross-cultural competence in reducing cross-cultural business conflicts in transnational business.
Keywords: business conflict, conflict management, cross-cultural communication, dispute resolution
Procedia PDF Downloads 163

413 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs
Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino
Abstract:
Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments. It is frequently isolated from raw seafood, particularly shellfish. Consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. are highly resistant to low temperatures, so they can persist in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 bacterial species have been shown to be capable of entering this state. VBNC cells cannot grow on conventional culture media but are viable and maintain metabolic activity, and they may constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also enter the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods for detecting V. parahaemolyticus VBNC cells and to investigate their presence in regularly marketed frozen bivalve molluscs. Materials and Methods: Propidium monoazide (PMA) treatment was combined with real-time polymerase chain reaction (qPCR) targeting the tl gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR proved highly specific to V. parahaemolyticus, with a limit of detection (LOD) of 10-1 log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were then subjected to qualitative (in alkaline phosphate buffer solution) and quantitative detection of V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (Difco) with 2.5% NaCl, incubated at 30°C for 24-48 hours.
Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, exposed for 45 min under halogen lights (650 W). Total DNA was extracted from cell suspensions of the homogenate samples using a boiling protocol. The real-time PCR was conducted with species-specific primers for V. parahaemolyticus, in a final volume of 20 µL containing 10 µL of SYBR Green mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H2O. The qPCR was carried out on a CFX96 Touch instrument (Bio-Rad, USA). Results: All samples were negative in both the quantitative and the qualitative detection of V. parahaemolyticus by the classical culturing technique. PMA-qPCR, however, identified VBNC V. parahaemolyticus in 20.78% of the samples, at levels between 10¹ and 10³ CFU/g. Only clam samples were positive by PMA-qPCR. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method proved suitable for the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR to identify VBNC forms, which are undetectable by classical microbiological methods. Precise knowledge of V. parahaemolyticus in the VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
Keywords: food safety, frozen bivalve molluscs, PMA dye, real-time PCR, VBNC state, Vibrio parahaemolyticus
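The standard-curve quantification described above can be sketched as a least-squares fit of Ct against log10 CFU/mL, followed by interpolation of an unknown sample. The slope, intercept, and Ct values below are generic qPCR assumptions for illustration, not the study's calibration data:

```python
# Hypothetical sketch of a qPCR standard curve (noise-free synthetic values).
def linear_fit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Standards over the linear range 1.0-8.0 log CFU/mL; a slope of -3.32
# corresponds to ~100% amplification efficiency (assumed, not measured here)
log_cfu = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ct      = [38.0 - 3.32 * x for x in log_cfu]

slope, intercept = linear_fit(log_cfu, ct)

def quantify(ct_value):
    """Interpolate an unknown sample's load (log CFU/mL) from its Ct."""
    return (ct_value - intercept) / slope

print(round(quantify(24.72), 2))   # 4.0 log CFU/mL
```

With real (noisy) standards the same fit would also yield the correlation coefficient used to judge curve quality, as reported in the abstract.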
Procedia PDF Downloads 139

412 South-Mediterranean Oaks Forests Management in Changing Climate Case of the National Park of Tlemcen-Algeria
Authors: K. Bencherif, M. Bellifa
Abstract:
The expected climatic changes in North Africa are an increase in both the intensity and the frequency of summer droughts and a reduction in water availability during the growing season. The existing coppices and forest formations in the national park of Tlemcen are dominated by holm oak, zen oak and cork oak. These open, fragmented structures do not appear strong enough to offer durable protection against climate change. Given the observed climatic trend, the objective is to analyze the climatic context and its evolution, taking into account the probable behaviour of the oak species over the next 20-30 years on the one hand, and the landscape context, in relation to the most suitable silvicultural models and especially to human activities, on the other. The study methodology is based on climatic synthesis and on floristic and spatial analysis. Meteorological data for the decade 1989-2009 are used to characterize the current climate. Another approach, based on dendrochronological analysis of a 120-year Aleppo pine stem sampled in the park, is used to analyze the evolution of the climate over a century. Results on climate evolution over the next 50 years, obtained through predictive climatic models, are used to forecast the climatic trend in the park. Spatially, stratified sampling is carried out in each forest unit of the park to reduce the degree of heterogeneity and to easily delineate the different stands using GPS. Results from a previous study are used to analyze the anthropogenic factor. According to the forecasts for the period 2025-2100, the number of warm days with a temperature over 25°C would increase from 30 to 70. The monthly mean maximum (M) and minimum (m) temperatures would rise from 30.5°C to 33°C and from 2.3°C to 4.8°C, respectively. With an average drop of 25%, precipitation would be reduced to 411.37 mm.
These new data highlight the importance of the fire risk and of the water stress that would affect the vegetation and the regeneration process. Spatial analysis highlights the forest and agricultural dimensions of the park compared with urban habitat and bare soils. Maps show both the state of fragmentation and the regression of the forest surface (50% of the total surface). At the level of the park, fires have already affected all types of cover, creating low structures of various densities. On the silvicultural level, zen oak forms pure stands in some places, and this expansion must be considered a natural tendency in which zen oak becomes the structuring species. Climate-related changes are small compared with the real impact that South-Mediterranean forests are undergoing because of the human pressures they support. Nevertheless, the hardwood oak stands in the national park of Tlemcen will have to face unexpected climatic changes, such as a changing rainfall regime associated with a lengthening of the period of water stress, heavy rainfall, and/or sudden cold snaps. Faced with these new conditions, management based on the mixed uneven-aged high forest method, promoting the more dynamic species, could be an appropriate measure.
Keywords: global warming, mediterranean forest, oak shrub-lands, Tlemcen
Procedia PDF Downloads 389

411 Untangling the Greek Seafood Market: Authentication of Crustacean Products Using DNA-Barcoding Methodologies
Authors: Z. Giagkazoglou, D. Loukovitis, C. Gubili, A. Imsiridou
Abstract:
Along with the increase in human population, the demand for seafood has increased. Despite the strict labeling regulations that exist for most marketed species in the European Union, seafood substitution remains a persistent global issue. Food fraud occurs when food products are traded in a false or misleading way. Mislabeling occurs when one species is substituted and traded under the name of another; it can be intentional or unintentional. Crustaceans are among the most regularly consumed seafood in Greece. Shrimps, prawns, lobsters, crayfish and crabs are considered a delicacy and can be encountered in a variety of market presentations (fresh, frozen, pre-cooked, peeled, etc.). With most of the external traits removed, such products are susceptible to species substitution. DNA barcoding has proven to be the most accurate method for the detection of fraudulent seafood products. To the best of our knowledge, this is the first use of the DNA barcoding methodology in Greece to investigate the labeling practices for crustacean products available in the market. A total of 100 tissue samples were collected from various retailers and markets across four Greek cities. To cover the widest possible range of products, different market presentations were targeted (fresh, frozen and cooked). Genomic DNA was extracted using the DNeasy Blood & Tissue Kit, according to the manufacturer's instructions. The mitochondrial gene selected as the target region of the analysis was cytochrome c oxidase subunit I (COI). PCR products were purified and sequenced using an ABI 3500 Genetic Analyzer. Sequences were manually checked and edited using BioEdit software and compared against those available in the GenBank and BOLD databases. Statistical analyses were conducted in R and in the PAST software. For most samples, COI amplification was successful, and species-level identification was possible.
The preliminary results indicate moderate mislabeling rates (25%) among the identified samples. Mislabeling was most commonly detected in fresh products, with 50% of the samples in this category labeled incorrectly. Overall, the mislabeling rates detected by our study probably relate to some degree of unintentional misidentification and to a lack of knowledge of the legal designations among both retailers and consumers. For some crustacean species (e.g., Squilla mantis), mislabeling also appears to be affected by local labeling practices. Across Greece, S. mantis is sold under two common names, but only one is recognized by the country's legislation, and therefore any mislabeling is probably not profit-motivated. However, the substitution of the speckled shrimp (Metapenaeus monoceros) for the distinctive giant river prawn (Macrobrachium rosenbergii) is a clear example of deliberate fraudulent substitution aiming for profit. To the best of our knowledge, no scientific study investigating substitution and mislabeling rates in crustaceans has previously been conducted in Greece. For a better understanding of Greece's seafood market, similar DNA barcoding studies should be conducted in other regions of increased touristic importance (e.g., the Greek islands). Regardless, expanding the list of species-specific designations for crustaceans in the country is advised.
Keywords: COI gene, food fraud, labeling control, molecular identification
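The species-assignment step described above, in which a query COI sequence is compared against reference databases, can be sketched as a nearest-reference percent-identity search. The sequences below are short toy strings, not real COI barcodes, and the simple ungapped comparison stands in for the database alignment the study actually used:

```python
# Toy illustration of barcode-based species assignment (hypothetical data).
def percent_identity(a, b):
    """Ungapped identity between two aligned, equal-length sequences."""
    assert len(a) == len(b)
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

def assign_species(query, references):
    """references: dict mapping species name -> aligned reference sequence.
    Returns the best-matching species and its identity score."""
    scores = {sp: percent_identity(query, seq) for sp, seq in references.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

refs = {
    "Squilla mantis":            "ATGGCACTTTATCAGGTT",
    "Macrobrachium rosenbergii": "ATGGTTCTGTACCAAGCT",
}
species, score = assign_species("ATGGCACTTTATCAGGCT", refs)
print(species, round(score, 1))   # Squilla mantis 94.4
```

Real identifications additionally require coverage and similarity thresholds (and curated references), which database tools apply for the analyst.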
Procedia PDF Downloads 67

410 The Contemporary Format of E-Learning in Teaching Foreign Languages
Authors: Nataliya G. Olkhovik
Abstract:
In the Russian system of higher medical education, initiatives have recently been undertaken that focus on the resources of e-learning in teaching foreign languages. Obviously, face-to-face communication in foreign languages has far more advantages in terms of effectiveness than the potential of e-learning alone. Thus, we faced the necessity of strengthening the capacity of e-learning by integrating active methods, such as student project activity, into the process of teaching foreign languages. Successful project activity should involve the following components: monitoring, control, methods of organizing the students' activity in foreign languages, stimulation of their interest in the chosen project, approaches to self-assessment, and methods of raising their self-esteem. Contemporary methodology treats the project as a specific method that activates a student's cognitive potential, emotional reaction, ability to work in a team, commitment, skills of cooperation and, consequently, readiness to verbalize ideas, thoughts and attitudes. Verbal activity in a foreign language is a complex notion that consolidates both cognitive (speech-related) capacity and individual traits and attitudes such as initiative, empathy, devotion, responsibility, etc. In organizing project activity by means of e-learning within the 'Foreign language' discipline, we have to take all the characteristics mentioned above into consideration and work out an effective way to implement it in teaching practice so as to boost its educational potential. We have integrated into the Moodle e-platform a project activity module consisting of the following blocks of tasks, which lead students to research, cooperate, strive for leadership, pursue the goal and finally verbalize their intentions.
Firstly, we introduce the project by activating students' independent work with the tasks of the 'Preparation of the project' phase: choose the topic and justify it; identify the problematic situation and its components; set the goals; create your team, choose the leader, distribute the roles within your team; write a report justifying the validity of your choices. Secondly, in the 'Planning the project' phase, we ask students to present an analysis of the problem in terms of its causes and the ways and methods of solving it, and to define the structure of their project (students may choose an oral or a written presentation by submitting a request in the e-platform, while the teacher decides which form of presentation to prefer). Thirdly, the students design the visual aids and speech samples (functional phrases, introductory words, keywords, synonyms, opposites, attributive constructions) and then, after checking, discussing and correcting them with a teacher via Moodle, present the project in front of the audience. Finally, we introduce a self-reflection phase that aims to awaken the students' inner desire to improve their verbal activity in a foreign language. As a result, by implementing project activity in the e-platform, we try to widen the framework of a traditional lesson in foreign languages by tapping the potential of students' personal traits and attitudes.
Keywords: active methods, e-learning, improving verbal activity in foreign languages, personal traits and attitudes
Procedia PDF Downloads 105

409 Multi-Plane Wrist Movement: Pathomechanics and Design of a 3D-Printed Splint
Authors: Sigal Portnoy, Yael Kaufman-Cohen, Yafa Levanon
Abstract:
Introduction: Rehabilitation following wrist fractures often includes exercising flexion-extension movements with a dynamic splint. However, during daily activities, we combine most of our wrist movements with radial and ulnar deviations. Moreover, the multi-plane wrist motion named the 'dart throw motion' (DTM) was found to be a more stable motion in healthy individuals, in terms of the motion of the proximal carpal bones, than sagittal wrist motion. The aim of this study was therefore to explore the pathomechanics of the wrist in a common multi-plane movement pattern (DTM) and to design a novel splint for rehabilitation following distal radius fractures. Methods: First, a multi-axis electro-goniometer was used to quantify the plane angle of motion of the dominant and non-dominant wrists during various activities, e.g. drinking from a glass of water and answering a phone, in 43 healthy individuals. The following protocols were then implemented in a population following distal radius fracture. Two dynamic scans, one of sagittal wrist motion and one of DTM, were performed bilaterally in a 3T magnetic resonance imaging (MRI) device. The scaphoid and lunate carpal bones, as well as the surface of the distal radius, were manually segmented in SolidWorks, and the angles of motion of the scaphoid and lunate bones were calculated. Subsequently, a patient-specific splint was designed using 3D scans of the hand. The splint comprises a proximal attachment to the arm and a distal envelope of the palm. An axle with two wheels is attached to the proximal part. Two wires connect the proximal part with the medial-palmar and lateral-ventral aspects of the distal part: when the wrist extends, the first wire is released and the second wire is strained towards the radius; the opposite occurs when the wrist flexes. The splint was attached to the wrist using Velcro and constrained the wrist movement to the desired, calculated multi-plane of motion.
Results: No significant differences were found between the multi-plane angles of the dominant and non-dominant wrists. The most common daily activities occurred at a plane angle of approximately 20° to 45° from the sagittal plane, and the MRI studies showed individual angles of the plane of motion. The printed splint fitted the wrists of the subjects and constrained movement to the desired multi-plane of motion. Hooks were inserted on each part to allow the addition of springs or rubber bands for resistance training towards muscle strengthening in the rehabilitation setting. Conclusions: It has been hypothesized that activating the wrist in a multi-plane movement pattern following distal radius fractures will accelerate the patient's recovery. Our results show that this motion can be determined from either the dominant or the non-dominant wrist. The design of the patient-specific dynamic splint is the first step towards assessing whether splinting to induce combined movement benefits the rehabilitation process compared with conventional treatment. The evaluation of the clinical benefits of this method, compared with conventional rehabilitation methods following wrist fracture, is part of PhD work currently being conducted by an occupational therapist.
Keywords: distal radius fracture, rehabilitation, dynamic magnetic resonance imaging, dart throw motion
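One way the plane angle of motion reported above can be sketched is as the angle, relative to the sagittal (flexion-extension) plane, formed by the coupled flexion-extension and radial-ulnar deviation excursions. This is an interpretive sketch; the excursion values below are hypothetical, not the study's goniometer measurements:

```python
# Hypothetical sketch: plane angle of a combined wrist motion such as the DTM.
import math

def plane_angle_deg(fe_range, rud_range):
    """fe_range: flexion-extension excursion (degrees);
    rud_range: radial-ulnar deviation excursion (degrees).
    Returns the motion-plane angle measured from the sagittal plane."""
    return math.degrees(math.atan2(rud_range, fe_range))

# Illustrative excursions for a drinking-from-a-glass task
angle = plane_angle_deg(50.0, 25.0)
print(round(angle, 1))   # 26.6 -> within the reported 20-45 degree window
```

A pure flexion-extension movement would give 0°, and a pure radial-ulnar deviation 90°, so the value directly expresses how oblique the motion plane is.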
Procedia PDF Downloads 299

408 Positioning Mama Mkubwa Indigenous Model into Social Work Practice through Alternative Child Care in Tanzania: Ubuntu Perspective
Authors: Johnas Buhori, Meinrad Haule Lembuka
Abstract:
Introduction: Social work expands its boundaries to accommodate indigenous knowledge and practice for better competence and services. In Tanzania, Mama Mkubwa Mkubwa (MMM) (mother's elder sister) is an indigenous practice of alternative child care that represents other traditional practices across African societies known as Ubuntu practice. Ubuntu is African humanism, with values and approaches that are connected to social work. MMM relies on the elder sister of a deceased mother or father, or on a trusted elder woman from the extended family or indigenous community, to provide alternative care to an orphan or vulnerable child. In the Ubuntu perspective, it takes a whole village or community to raise a child, meaning that every person in the community is responsible for child care. Methodology: A desk review method guided by Ubuntu theory was applied to enrich the study. Findings: MMM resembles the Ubuntu ideal of traditional protection of children in need and has been part of alternative child care throughout Tanzanian history. Social work practice, along with other forms of formal alternative child care, was introduced in Tanzania during the colonial era in the 1940s. The socio-economic problems of the 1980s affected the country's formal social welfare system, and the HIV/AIDS pandemic then increased the vulnerability of children and hampered the capacity of the formal sector to provide social welfare services, including alternative child care. For decades, AIDS has contributed to an influx of orphans and vulnerable children, which facilitated the re-emergence of traditional alternative child care at the community level, including MMM. MMM has been strongly practiced in regions where the AIDS pandemic affected the community, such as Njombe, the Coastal region, and Kagera. Despite existing challenges, MMM has remained a remarkable form of alternative child care, practiced in both rural and urban communities and integrated with social welfare services.
Tanzania envisions a traditional family or community environment for alternative child care, on the understanding that institutional care sometimes fails to offer children all they need to become productive members of society and later makes it difficult for them to reconnect with society. Implications for Social Work: MMM is compatible with social work through its use of strengths perspectives; it reflects the Ubuntu perspective grounded in humane social work, using humane methods to achieve human goals. MMM further demonstrates the connectedness of those who care and those cared for, and the inextricable link between them, as an Ubuntu-inspired model of social work that views children from family, community, environmental and spiritual perspectives. Conclusion: Social work and MMM are compatible at the micro and mezzo levels; thus, MMM can be applied in social work practice beyond Tanzania when properly designed and integrated into other systems. When MMM is applied in social work, alternative care has the potential not only to support children but also to empower families and communities. Since MMM is community-owned and voluntary-based, it can relieve the government, social workers and other formal sectors of the annual cost burden of providing institutionalized alternative child care.
Keywords: ubuntu, indigenous social work, african social work, ubuntu social work, child protection, child alternative care
Procedia PDF Downloads 67

407 Applying an Automatic Speech Intelligent System to the Health Care of Patients Undergoing Long-Term Hemodialysis
Authors: Kuo-Kai Lin, Po-Lun Chang
Abstract:
Research Background and Purpose: Following the development of the Internet and multimedia, information technology has become a crucial avenue of modern communication and knowledge acquisition. The advantages of using mobile devices for learning include making learning borderless and accessible, and mobile learning has become a trend in disease management and health promotion in recent years. End-stage renal disease (ESRD) is an irreversible chronic disease, and patients who do not receive kidney transplants can only rely on hemodialysis or peritoneal dialysis to survive. Because caregiving for patients with ESRD is complicated by their advanced age and other comorbidities, the patients' incapacity for self-care increases their reliance on families or primary caregivers, and whether the primary caregivers adequately understand and implement patient care is a topic of concern. Therefore, this study explored whether primary caregivers' health care provision can be improved through the intervention of an automatic speech intelligent system, thereby improving the objective health outcomes of patients undergoing long-term dialysis. Method: This study developed an automatic speech intelligent system with health care functions such as health information voice prompts, two-way feedback, real-time push notifications, and health information delivery. Convenience sampling was adopted to recruit eligible patients from a hemodialysis center at a regional teaching hospital, and a one-group pretest-posttest design was used. Descriptive and inferential statistics were calculated from the demographic information collected through questionnaires answered by patients and primary caregivers, and from a medical record review, a health care scale (recorded six months before and after the implementation of the intervention), a subjective health assessment, and a report of objective physiological indicators.
The changes in health care behaviors, subjective health status, and physiological indicators before and after the intervention of the proposed automatic speech intelligent system were then compared. Conclusion and Discussion: The preliminary automatic speech intelligent system developed in this study was tested with 20 pretest patients at the recruitment site, and their health care capacity scores improved from 59.1 to 72.8; comparison using a nonparametric test indicated a significant difference (p < .01). The average score on their subjective health assessment rose from 2.8 to 3.3. A review of their objective physiological indicators showed that the compliance rate for the blood potassium level improved most significantly, increasing from 81% to 94%. These results demonstrate that the automatic speech intelligent system yielded higher efficacy in chronic disease care than conventional health education delivered by nurses. Future efforts will therefore increase the number of recruited patients and further refine the intelligent system, which can be expected to enhance its effectiveness.
Keywords: automatic speech intelligent system for health care, primary caregiver, long-term hemodialysis, health care capabilities, health outcomes
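The pretest-posttest comparison described above can be sketched with a simple nonparametric paired test. The abstract does not name the test it used, so an exact two-sided sign test (standard library only) is shown here, and the 20 pairs of scores are hypothetical, not the study's data:

```python
# Hedged sketch: exact two-sided sign test on paired pre/post scores.
from math import comb

def sign_test_p(pre, post):
    """Exact two-sided sign test on paired samples (ties are dropped)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n, pos = len(diffs), sum(d > 0 for d in diffs)
    k = max(pos, n - pos)                       # more extreme tail count
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical health care capacity scores for 20 patients (18 improved)
pre  = [55, 60, 58, 62, 57, 59, 61, 54, 63, 56,
        60, 58, 62, 55, 59, 61, 57, 63, 54, 60]
post = [70, 74, 72, 75, 69, 73, 76, 68, 77, 70,
        58, 71, 78, 69, 72, 75, 70, 76, 52, 74]

p = sign_test_p(pre, post)
print(p < 0.01)   # True: improvement is significant at the 1% level
```

With ordinal paired data a Wilcoxon signed-rank test would be a more powerful alternative; the sign test is shown because it is exact and needs no external libraries.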
Procedia PDF Downloads 110
406 Scientific and Regulatory Challenges of Advanced Therapy Medicinal Products
Authors: Alaa Abdellatif, Gabrièle Breda
Abstract:
Background. Advanced therapy medicinal products (ATMPs) are innovative therapies that mainly target orphan diseases and high unmet medical needs. ATMPs include gene therapy medicinal products (GTMP), somatic cell therapy medicinal products (CTMP), and tissue-engineered products (TEP). Since legislation opened the way in 2007, 25 ATMPs have been approved in the EU, roughly the same number as have been approved by the U.S. Food and Drug Administration. However, not all approved ATMPs have successfully reached the market and retained their approval. Objectives. We aim to understand, in a systematic approach, all the factors limiting market access for these very promising therapies, so that these problems can be overcome in the future with scientific, regulatory, and commercial innovations. Going beyond recent reviews that focus on specific countries, products, or dimensions, we address all the challenges faced by ATMP development today. Methodology. We used mixed methods and a multi-level approach for data collection. First, we performed an updated academic literature review on ATMP development and the associated scientific and market access challenges (papers published between 2018 and April 2023). Second, we analyzed industry feedback from cell and gene therapy webinars and white papers published by providers and pharmaceutical companies. Finally, we established a comparative analysis of the regulatory guidelines published by the EMA and the FDA for ATMP approval. Results: A main challenge in bringing these therapies to market is the high development cost; developing ATMPs is expensive due to the need for specialized manufacturing processes. Furthermore, the regulatory pathways for ATMPs are often complex and can vary between countries, making it challenging to obtain approval and ensure compliance with different regulations.
As a result of the high costs associated with ATMPs, challenges in obtaining reimbursement from healthcare payers lead to limited patient access to these treatments. ATMPs are often developed for orphan diseases, which means that the patient population available for clinical trials is limited, making it challenging to demonstrate their safety and efficacy. In addition, the complex manufacturing processes required for ATMPs can make it difficult to scale up production to meet demand, which can limit availability and increase costs. Finally, ATMPs face safety and efficacy challenges: these therapies can cause serious adverse events, such as toxicity related to the use of viral vectors or to the cell therapy itself, as well as issues concerning starting material and donor-related aspects. Conclusion. Our mixed-method analysis found that ATMPs face a number of challenges in their development, regulatory approval, and commercialization, and that addressing these challenges requires collaboration between industry, regulators, healthcare providers, and patient groups. This first analysis will help us to address, for each challenge, proper and innovative solutions in order to increase the number of ATMPs approved and reaching patients. Keywords: advanced therapy medicinal products (ATMPs), product development, market access, innovation
Procedia PDF Downloads 76
405 Company's Orientation and Human Resource Management Evolution in Technological Startup Companies
Authors: Yael Livneh, Shay Tzafrir, Ilan Meshoulam
Abstract:
Technological startup companies have been recognized as bearing tremendous potential for business and economic success. However, many entrepreneurs who produce promising innovative ideas fail to implement them as successful businesses. A key explanation for such failure is entrepreneurs' lack of competence in adapting the level of formality of human resource management (HRM). The purpose of the present research was to examine multiple antecedents and consequences of HRM formality in growing startup companies. A review of the research literature identified two central components of HRM formality: HR control and professionalism. The effect of three contextual predictors was examined. The first was an intra-organizational factor: the development level of the organization. We drew on the differentiation between knowledge exploration and knowledge exploitation: at a given time, the organization chooses to focus on a specific mix of these orientations, a choice which requires an appropriate level of HRM formality in order to efficiently overcome the associated challenges. It was hypothesized that the mix of knowledge exploration and knowledge exploitation orientations would predict HRM formality. The second predictor was the personal characteristics of the organization's leader. Following the idea of a blueprint effect of CEOs on HRM, it was hypothesized that the CEO's cognitive style would predict HRM formality. The third contextual predictor was an external organizational factor: the level of investor involvement. Drawing on agency theory and transaction cost economics, it was hypothesized that the level of investor involvement in general management and HRM would be positively related to HRM formality. The effect of formality on trust was examined both directly and indirectly, through the mediating role of procedural justice. The research method included a time-lagged field study.
In the first study, data were obtained using three questionnaires, each directed to a different source: the CEO, the HR position-holder, and employees; 43 companies participated. The second study was conducted approximately a year later, collecting data again with the same three questionnaires from the same sample; 41 companies participated. Both samples comprised technological startup companies, and the two studies together included 884 respondents. The results were consistent across the two studies. HRM formality was predicted by the intra-organizational factor and by the personal characteristics of the CEO, but not at all by the external organizational context. Specifically, organizational orientation was the greatest contributor to both components of HRM formality, while cognitive style predicted formality to a lesser extent. Investor involvement was found to have no predictive effect on HRM formality. The results also indicated a positive contribution of formality to trust in HRM, mainly via the mediation of procedural justice. This study contributed a new concept for technological startup company development based on the mixture of organizational orientations. The practical implication is that the level of HRM formality should be matched to the company's stage of development; this match should be revisited and adjusted periodically by referring to the organizational orientation, relevant HR practices, and HR function characteristics. A good match could further enhance trust and business success. Keywords: control, formality, human resource management, organizational development, professionalism, technological startup company
Procedia PDF Downloads 264
404 Special Educational Needs Coordinators in England: Changemakers in Mainstream School Settings
Authors: Saneeya Qureshi
Abstract:
This paper reports doctoral research into the impact of Special Educational Needs Coordinators (SENCOs) on teachers in England, UK. Since 1994, it has been compulsory for all mainstream schools in the UK to have a SENCO who coordinates assessment and provision for supporting pupils with Special Educational Needs (SEN), helping teachers to develop and implement optimal SEN planning and resources. SENCOs’ roles have evolved as successive policies have redefined SEN provision, affecting their position within the school hierarchy, and SENCOs in England are increasingly recognised as key members of school senior management teams. In this paper, it is argued that despite issues around the transformative ‘professionalisation’ of their role, and the subsequent conflict around boundaries and power relations, SENCOs enhance teachers’ abilities to deliver optimal SEN provision. There is a significant international dimension to the issue: a similar SEN management role already exists in countries such as Ireland, Finland, and Singapore, while in other countries, such as Italy and India, the introduction of a role similar to that of a SENCO is currently under discussion. The research question addressed is: do SENCOs enhance teachers’ abilities to be effective teachers of children with Special Educational Needs? The theoretical framework of the project is interpretivism, acknowledging that contexts and realities are social constructions. The study applied a mixed-method approach consisting of two phases. The first phase involved a purposive survey (n=42) drawn from 223 primary school SENCOs, which enabled a deeper insight into SENCOs’ perceptions of their roles in relation to teachers. The second phase consisted of semi-structured interviews (n=36) of SENCOs, teachers, and head teachers, in addition to scrutiny of school SEN-related documentation.
‘Trustworthiness’ was accomplished through data and methodological triangulation, in addition to a rigorous process of coding and thematic analysis; the research was informed by an ethical code following national guidelines. Research findings point to the evolutionary aspect of the SENCO role having engendered a culture of expectations amongst practitioners, as SENCOs transition from being ‘fixers’ to being ‘enablers’ of teachers. Outcomes indicate that SENCOs can empower teaching staff through the dissemination of specialist knowledge, but resources must be clearly identified for such dissemination to take place. It is imperative that SENCOs and teachers alike address the absolution of responsibility that arises when ownership of, and accountability for, the planning and implementation of SEN provision are not clarified, so as to promote a positive school ethos around inclusive practices. Optimal outcomes through effective SEN interventions and teaching practices are positively correlated with the inclusion of teachers in the planning and execution of SEN provision. An international audience can consider how the key findings are manifest in a global context, with reference to their own educational settings, and the research outcomes can aid the development of the specific competencies needed to shape optimal inclusive educational settings in accordance with the official global priorities pertaining to inclusion. Keywords: inclusion, school professionals, school leadership, special educational needs (SEN), special educational needs coordinators (SENCOs)
Procedia PDF Downloads 194
403 Consumers and Voters’ Choice: Two Different Contexts with a Powerful Behavioural Parallel
Authors: Valentina Dolmova
Abstract:
What consumers choose to buy and whom voters select on election days are two questions that have captivated the interest of both academics and practitioners for many decades. The importance of understanding what influences the behavior of those groups, and whether or not we can predict or control it, fuels a steady stream of research in a range of fields. In the past 40 years alone, more than 70 thousand scientific papers have been published in each field – consumer behavior and political psychology, respectively. From marketing, economics, and the science of persuasion to political and cognitive psychology, all of these disciplines have remained heavily engaged. Ever-evolving technology, inevitable socio-cultural shifts, global economic conditions, and much more play an important role in choice equations regardless of context. On the one hand, this keeps the research efforts relevant and needed; on the other, the relatively low number of cross-field collaborations, which seem to be picking up only in more recent years, leaves the existing findings isolated in framed bubbles. By performing systematic research across both areas of psychology and building a parallel between theories and factors of influence, however, we find not only that there is definitive common ground between the behaviors of consumers and voters, but that we are moving towards a global model of choice. This means that the lines between contexts are fading, which has a direct implication for what we should focus on when predicting or guiding buyers’ and voters’ behavior. Internal and external factors in four main categories determine the choices we make as consumers and as voters: together, personal, psychological, social, and cultural factors create a holistic framework through which all stimuli relating to a particular product or political party are filtered. The analogy “consumer-voter” solidifies further.
Leading academics suggest that this fundamental parallel is the key to successfully managing political and consumer brands alike. However, we distinguish four additional key stimuli that relate to those factor categories ((1) opportunity costs, (2) the memory of the past, (3) recognisable figures/faces, and (4) conflict), arguing that a person's level of expertise determines the prevalence of particular factors or stimuli. Our efforts take into account global trends such as the establishment of “celebrity politics” and the image of “ethically concerned consumer brands”, which bridge the gap between contexts to an even greater extent. Scientists and practitioners are pushed to accept the transformative nature of both fields of social psychology. Existing blind spots, as well as the limited amount of research conducted outside American and European societies, open up space for more collaborative efforts in this highly demanding and lucrative field. A mixed method of research tests three main hypotheses: the first two focus on the irrelevance of context when comparing voting and consumer behavior, through the lenses of factors and stimuli respectively, and the third on determining whether or not the level of expertise in either field skews the weight of the prism we are more likely to choose when evaluating options. Keywords: buyers’ behaviour, decision-making, voters’ behaviour, social psychology
Procedia PDF Downloads 154
402 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. The ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain together with changes in the ST segment and T wave of the ECG occurs shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events by using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by a grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain optimal classification performance.
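The SVM tuning step described above can be sketched with scikit-learn's grid search and 10-fold cross-validation. The feature vectors below are synthetic stand-ins for the ST-T derived ECG features, not the study's data.

```python
# Sketch of RBF-kernel SVM hyperparameter tuning via grid search with
# 10-fold cross-validation, on synthetic two-class "ECG feature" data.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 100 "non-ischemic" and 100 "ischemic" 4-D feature vectors (synthetic)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=10,  # 10-fold cross-validation, as in the abstract
)
grid.fit(X, y)
print("best params:", grid.best_params_)
print(f"CV accuracy: {grid.best_score_:.2f}")
```

In the study this search is repeated per patient; here a single synthetic dataset illustrates the mechanics.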
Implementing the developed classification technique on real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy state, the detection of acute myocardial ischemia based only on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and the probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that increasing the number of ECG segments provides higher performance for the GMM-based classification. Moreover, a comparison between the performances of the SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients. Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine
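The GMM-based outlier detection described above can be sketched as follows: fit a GMM to feature vectors from a reference state, then flag segments whose log-likelihood falls below a threshold chosen for a target false-alarm rate. The data are synthetic, not the study's.

```python
# Sketch of GMM log-likelihood outlier detection with a Neyman-Pearson-style
# threshold set by a target false-alarm rate. All data here are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(500, 2))   # reference-state features
ischemic_feats = rng.normal(4.0, 1.0, size=(50, 2))  # outlying "ischemic" features

gmm = GaussianMixture(n_components=2, random_state=0).fit(normal_feats)

# Threshold at the 1st percentile of reference log-likelihoods,
# i.e. roughly a 1% false-alarm rate on reference data.
ref_ll = gmm.score_samples(normal_feats)
threshold = np.percentile(ref_ll, 1.0)

detected = gmm.score_samples(ischemic_feats) < threshold
print(f"detected {detected.mean():.0%} of outlying segments")
```

Sweeping the threshold over a range of values, as the abstract describes, traces out the ROC curve.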
Procedia PDF Downloads 162
401 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing
Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto
Abstract:
In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed; however, these treatments are reported not to be comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus under normal and injured conditions is carried out using FE analyses. First, an FE model of the human knee joint in the normal – ‘intact’ – state was constructed using magnetic resonance (MR) images and the image-construction code Materialise Mimics. Next, two meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under a normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage; the maximum compressive stress and its location varied between the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value for pathological change for diagnostic purposes.
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage, and meniscus was constructed, based on MR images of the human knee joint processed with Materialise Mimics and meshed with tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model; the material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models showed almost the same stress values as each other, both higher than the intact one, indicating that both meniscal tears induce stress localization in the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system for evaluating meniscal damage to the articular cartilage through mechanical functional assessment. Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration
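The curve-fitting step in conclusion 2 can be illustrated with a one-term Prony-series relaxation modulus, a common reduced form of generalized Kelvin/Maxwell viscoelastic models. The function, data, and parameter values below are invented for illustration; the paper's actual anisotropic hyperelastic formulation is considerably more elaborate.

```python
# Illustrative sketch: fit a one-term Prony-series relaxation modulus
# G(t) = G_inf + G1 * exp(-t / tau1) to synthetic stress-relaxation data.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, g_inf, g1, tau1):
    return g_inf + g1 * np.exp(-t / tau1)

t = np.linspace(0.0, 10.0, 50)           # time points (s)
true = (2.0, 3.0, 1.5)                   # G_inf, G1 (MPa), tau1 (s) - invented
rng = np.random.default_rng(1)
data = relaxation(t, *true) + rng.normal(0.0, 0.02, t.size)  # "experimental" curve

params, _ = curve_fit(relaxation, t, data, p0=(1.0, 1.0, 1.0))
print("fitted G_inf, G1, tau1:", np.round(params, 2))
```

The fitted parameters recover the values used to generate the data, which is the essence of identifying material properties from measured stress-strain or relaxation curves.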
Procedia PDF Downloads 246
400 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness, for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNNs) have shown great promise in Natural Language Processing (NLP) tasks, and Generative Adversarial Networks (GANs) have proven very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map features of the text generated from verbal descriptions into corresponding images. With this, it becomes possible to generate many reasonably accurate alternatives from which the witness can attempt to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. Using the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than grid search, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the ‘CelebA’ training database, further novel test cases are supplied to the network for evaluation: witness reports detailing criminals from Interpol or other law enforcement agencies are sampled, and, using the descriptions provided, images are generated and compared with the ground-truth images of the criminals in order to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics would demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information. Keywords: RNN, GAN, NLP, facial composition, criminal investigation
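The two evaluation metrics named above can be sketched in plain NumPy on a toy image pair. PSNR follows its standard definition; SSIM is shown here in a simplified global (single-window) form for brevity, whereas practical evaluations use a locally windowed implementation such as scikit-image's.

```python
# Toy computation of the two similarity metrics named in the abstract.
# Images and noise level are illustrative only.
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB (standard definition)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM formula applied globally (single window) - a simplification."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
reference = rng.random((64, 64))                 # stand-in "ground truth" face
degraded = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

print(f"PSNR = {psnr(reference, degraded):.1f} dB")
print(f"SSIM = {global_ssim(reference, degraded):.3f}")
```

Identical images give SSIM of exactly 1 and unbounded PSNR; both metrics fall as the generated image diverges from the ground truth.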
Procedia PDF Downloads 162
399 Dynamics of Protest Mobilization and Rapid Demobilization in Post-2001 Afghanistan: Facing Enlightening Movement
Authors: Ali Aqa Mohammad Jawad
Abstract:
Taking a relational approach, this paper analyzes the causal mechanisms associated with the successful mobilization and rapid demobilization of the Enlightening Movement in post-2001 Afghanistan. The movement emerged after the state-owned utility Da Afghanistan Breshna Sherkat (DABS) decided to divert the route of the Turkmenistan-Uzbekistan-Tajikistan-Afghanistan-Pakistan (TUTAP) electricity project. The grid was initially planned to go through the Hazara-inhabited province of Bamiyan, according to Afghanistan’s Power Sector Master Plan. For the Hazara community, the reroute served as an aide-mémoire of historical subordination to other ethno-religious groups and was also perceived as deprivation from post-2001 development projects financed by international aid. This ignited the accumulated grievances, which then gave birth to the Enlightening Movement. The movement mobilized successfully; however, it demobilized after losing much of its mobilizing capability through an amalgamation of external and internal relational factors. This successful mobilization yet rapid demobilization constitutes the puzzle of this paper. From a theoretical perspective, the paper is significant as it establishes the applicability of contentious politics theory to protest mobilizations in Afghanistan, a context characterized by ethnic politics. Both primary and secondary data are utilized to address the puzzle. The primary sources comprise media coverage, interviews, reports, public media statements of the movement involved in contentious performances, and data from Social Networking Services (SNS), covering the period 2001-2018. The secondary sources comprise published academic articles and books, which provide a historical account of contentious politics. For data analysis, a qualitative comparative historical method is utilized to uncover the causal mechanisms associated with the successful mobilization and rapid demobilization of the movement.
In this pursuit, both mobilization and demobilization are considered larger political processes that can be decomposed into constituent mechanisms. The Enlightening Movement's framing and campaigns are first studied to uncover the associated mechanisms. Then, to avoid introducing ad hoc mechanisms, the recurrence of these mechanisms is checked against another case; mechanisms qualify as robust if they are “recurrent” in different episodes of contention. Checking the recurrence of causal mechanisms is vital, as past contentious events tend to reinforce future events. The findings of this paper suggest that the public sphere in Afghanistan is drastically different from that of the Western democracies known as the birthplace of social movements. In Western democracies, when institutional politics did not respond, movement organizers occupied the public sphere, undermining the legitimacy of the government. In Afghanistan, the public sphere is ethnicized. Given the inter- and intra-relational dynamics of ethnic groups in Afghanistan, the movement was reduced to an erosive inter- and intra-ethnic conflict. This undermined the cohesiveness of the movement, which then kicked off its demobilization process. Keywords: enlightening movement, contentious politics, mobilization, demobilization
Procedia PDF Downloads 194
398 Effect of the Diverse Standardized Patient Simulation Cultural Competence Education Strategy on Nursing Students' Transcultural Self-Efficacy Perceptions
Authors: Eda Ozkara San
Abstract:
Nurse educators have been charged by several nursing organizations and accrediting bodies to provide innovative and evidence-based educational experiences, both didactic and clinical, to help students develop the knowledge, skills, and attitudes needed to provide culturally competent nursing care. Clinical simulation offers students the opportunity to practice nursing skills in a risk-free, controlled environment and helps develop self-efficacy (confidence) within the nursing role. As one simulation method, standardized patient (SP) simulation helps educators teach students a variety of skills in nursing, medicine, and other health professions, and it can be a helpful tool for nurse educators to enhance the cultural competence of nursing students. An alarming gap exists in the literature concerning the effectiveness of the SP strategy in enhancing the cultural competence development of diverse student groups, who must work with patients from various backgrounds. This grant-supported, longitudinal, one-group, pretest and post-test educational intervention study aimed to examine the effect of the Diverse Standardized Patient Simulation (DSPS) cultural competence education strategy on students’ (n = 53) transcultural self-efficacy (TSE). The researcher-developed multidimensional DSPS strategy involved careful integration of transcultural nursing skills guided by the Cultural Competence and Confidence (CCC) model. As a carefully orchestrated teaching and learning strategy specifically utilizing SP pedagogy, the DSPS also followed international guidelines and standards for design, implementation, evaluation, and SP training, and underwent content validity review.
The DSPS strategy involved two simulation scenarios targeting underrepresented patient populations (a Muslim immigrant woman with limited English proficiency, and an Irish-Italian American gay man with his Puerto Rican partner), to be utilized in a second-semester, nine-credit, 15-week medical-surgical nursing course at an urban public US university. Five doctorally prepared content experts reviewed the DSPS strategy for content validity; item-level content validity index (I-CVI) scores on the evaluation forms ranged from .80 to 1.0. Jeffreys’ Transcultural Self-Efficacy Tool (TSET) was administered as a pretest and post-test to assess changes in the cognitive, practical, and affective dimensions of students’ TSE. Results support that the DSPS cultural competence education strategy assisted students in developing cultural competence and produced statistically significant increases in students’ TSE perceptions. Results also support that all students, regardless of their background, benefit from (and require) well-designed cultural competence education strategies. The multidimensional DSPS strategy is found to be an effective way to foster nursing students’ cultural competence development, and its step-by-step description allows easy adaptation of the strategy to different student populations and settings. Keywords: cultural competence development, the cultural competence and confidence model, CCC model, educational intervention, transcultural self-efficacy, TSE, transcultural self-efficacy tool, TSET
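The I-CVI range reported above follows the standard computation: the proportion of experts rating an item 3 or 4 on a 4-point relevance scale. The ratings below are invented examples, not the study's expert panel data.

```python
# Item-level content validity index (I-CVI): proportion of experts
# rating the item 3 or 4 on a 4-point scale. Ratings are invented.
def i_cvi(ratings):
    """Return the fraction of ratings that are 3 or above."""
    return sum(r >= 3 for r in ratings) / len(ratings)

expert_ratings = {            # five experts per item, as in the study design
    "item_1": [4, 4, 3, 4, 3],
    "item_2": [3, 4, 4, 2, 4],
}
for item, ratings in expert_ratings.items():
    print(item, i_cvi(ratings))   # item_1 -> 1.0, item_2 -> 0.8
```

With a five-expert panel, each disagreeing expert lowers the I-CVI by 0.2, which is why .80 is a common acceptance floor.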
Procedia PDF Downloads 149
397 Analysis of Fish Preservation Methods for Traditional Fishermen Boat
Authors: Kusno Kamil, Andi Asni, Sungkono
Abstract:
According to a report of the Food and Agriculture Organization (FAO), post-harvest fish losses in Indonesia reach 30 percent; with marine fisheries reserves of 170 trillion rupiahs, the potential loss reaches 51 trillion rupiahs (end of 2016 data). This situation arises because traditionally caught fish are vulnerable to damage due to disruption of the preservation cold chain. Physical and chemical changes in fish flesh progress rapidly, especially under the scorching heat in the middle of the sea, exacerbated by low awareness of catch hygiene; unclean catches that contain blood are often treated without special attention and mixed with freshly caught fish, thereby increasing the potential for faster spoilage. This background motivates research on preservation methods for traditional fishermen's catches, aiming to find the best and most affordable method, or combination of methods, so that fishermen can increase their fishing duration without worrying that their catch will be damaged, and thereby lose economic value, by the time they return to shore to sell it. This goal is expected to be achieved through experimental treatment of fresh catches in containers with the addition of anti-bacterial copper, a liquid smoke solution, and the use of vacuum containers. Three further treatments combined these three variables with an electrically powered cooler (temperature 0~4 ᵒC). As control specimens, untreated fresh fish (placed in the open air and in a refrigerator) were also prepared for comparison over 1, 3, and 6 days. The level of freshness of the fish under each treatment was assessed by physical observation, complemented by tests of bacterial content in a trusted laboratory. The copper (Cu) content of the fish meat (suspected of having a negative impact on consumers) was also examined on the 6th day of the experiment.
The results of physical observation of the test specimens (organoleptic method) showed that cooler-assisted preservation remained superior for all treatment variables. Among the uncooled specimens, preservation effectiveness ranked, in descending order: the copper plates, the vacuum containers, and then liquid smoke immersion. With liquid smoke in particular, six days of soaking left the fish flesh soft and easily crumbled, even though it developed no bad odor. Visual observation was complemented by measurements of the growth (or retardation) of putrefactive bacteria in each treatment over the same observation periods. Laboratory measurements showed that the lowest putrefactive bacteria counts were achieved, in ascending order, by the cooler combined with liquid smoke (sample A+), the cooler alone (D+), the copper layer inside the cooler (B+), and the vacuum container inside the cooler (C+). The open-air treatments produced a hundred times more putrefactive bacteria. In addition, the copper-layer treatment contaminated the preserved fish with more than a thousand times the initial copper content, from 0.69 to 1241.68 µg/g.
Keywords: fish, preservation, traditional, fishermen, boat
Procedia PDF Downloads 703
96 The Evaluation of Child Maltreatment Severity and the Decision-Making Processes in the Child Protection System
Authors: Maria M. Calheiros, Carla Silva, Eunice Magalhães
Abstract:
Professionals working in child protection services (CPS) need common and clear criteria to identify cases of maltreatment and to differentiate levels of severity, in order to determine when CPS intervention is required, its nature and urgency, and, in most countries, the service that will be in charge of the case (community or specialized CPS). Decision-making in CPS is complex, and such criteria are particularly important for the professionals who contribute significantly to decisions in child maltreatment cases. The main objective of this presentation is to describe the Maltreatment Severity Questionnaire (MSQ), specifically designed for use by CPS professionals, which adopts a multidimensional approach and uses a within-subtype scale of severity. Specifically, we aim to provide evidence of the validity and reliability of this tool, in order to improve the quality and validity of assessment processes and, consequently, decision-making in CPS. The total sample comprised 1000 children and/or adolescents (51.1% boys), aged between 0 and 18 years (M = 9.47; SD = 4.51). All participants had been referred to official institutions of the child and youth protection system. Child and adolescent maltreatment (abuse, neglect, and sexual abuse) was assessed by CPS professionals with 21 items of the MSQ. Each item (sub-type) comprised four descriptors of increasing severity, and professionals rated the level of severity on a 4-point scale (1 = minimally severe; 2 = moderately severe; 3 = highly severe; 4 = extremely severe). The construct validity of the questionnaire was assessed with a holdout method, performing an Exploratory Factor Analysis (EFA) followed by a Confirmatory Factor Analysis (CFA). The final solution comprised 18 items organized in three factors, explaining 47.3% of the variance.
‘Physical neglect’ (eight items) was defined by parental omissions concerning the assurance and monitoring of the child’s physical well-being and health, namely in terms of clothing, hygiene, housing conditions, and environmental security. ‘Physical and Psychological Abuse’ (four items) described abusive physical and psychological actions, namely coercive/punitive disciplinary methods, physically violent methods, or verbal interactions that offend and denigrate the child, with the potential to disrupt psychological attributes (e.g., self-esteem). ‘Psychological neglect’ (six items) involved omissions related to children’s emotional development, mental health monitoring, school attendance, and developmental needs, as well as inappropriate relationship patterns with attachment figures. Results indicated good reliability for all factors. Assessing child maltreatment cases with the MSQ has a set of practical and research implications: a) it is a valid and reliable multidimensional instrument for measuring child maltreatment; b) it integrates the co-occurrence of various types of maltreatment with a within-subtype scale of severity; c) designed specifically for professionals, it may assist them in decision-making processes; d) rather than relying on case file reports to evaluate maltreatment experiences, researchers could more appropriately guide their research on the determinants and consequences of maltreatment.
Keywords: assessment, maltreatment, children and youth, decision-making
Procedia PDF Downloads 290
95 The Perceptions of Patients with Osteoarthritis at a Public Community Rehabilitation Centre in the Cape Metropole for Using Digital Technology in Rehabilitation
Authors: Gabriela Prins, Quinette Louw, Dawn Ernstzen
Abstract:
Background: Access to rehabilitation services is a major challenge globally, especially in low- and middle-income countries (LMICs), where resources and infrastructure are extremely limited. Telerehabilitation (TR) has emerged in recent decades as a promising way to expand access to rehabilitation services. TR delivers rehabilitation care remotely using communication technologies such as video conferencing, smartphones, and internet-connected devices, boosting accessibility in underserved regions and allowing patients greater flexibility. Nevertheless, limited technological resources, high costs, lack of digital access, and unprepared healthcare systems remain major barriers to the implementation and adoption of TR in LMIC healthcare settings. Adoption of TR will also require the buy-in of end users, and little is known about the perspectives of the South African population. Aim: The study aimed to understand patients' perspectives on the use of digital technology as part of their osteoarthritis (OA) rehabilitation at a public community healthcare centre in the Cape Metropole area. Methods: A qualitative descriptive study design was used with 10 OA patients from a public community rehabilitation centre in South Africa. Data collection included semi-structured interviews and patient-reported outcome measures (PSFS, ASES-8, and EuroQol EQ-5D-5L) on functioning and quality of life. Transcribed interview data were coded in Atlas.ti 22.2 and analyzed using thematic analysis. The results were documented narratively. Results: Four themes emerged from the interviews.
The themes were: Telerehabilitation Awareness (use of digital technology, information sources, and prior experience with technology/TR); Telerehabilitation Benefits (access to healthcare providers, access to educational information, convenience, time and resource efficiency, and facilitating family involvement); Telerehabilitation Implementation Considerations (openness towards TR implementation, learning about TR and technology, the therapeutic relationship, and privacy); and Future Use of Telerehabilitation (personal preference and TR for the next generation). The ten participants demonstrated limited awareness of and exposure to TR, as well as minimal digital literacy and skills. They were skeptical about the effectiveness of TR compared with in-person rehabilitation and valued physical interaction with health professionals. However, some recognized potential benefits of TR for accessibility, convenience, family involvement, and improving community health in the long term. Participants were willing to try TR given sufficient training. Conclusion: With targeted efforts addressing the identified barriers around awareness, technological literacy, clinician readiness, and resource availability, perspectives on TR may shift from uncertainty towards endorsement of this expanding approach to easier rehabilitation access in LMICs.
Keywords: digital technology, osteoarthritis, primary health care, telerehabilitation
Procedia PDF Downloads 77