Search results for: test quality
942 Political Skills in Social Management and Responsibility of Media Studies
Authors: Musa Miah
Abstract:
Society and social activities are directly shaped by political sociology. Political sociology bears on the whole of human society: the interrelationships of people, social responsibilities and duties, the nature of society, the societies and cultures of different countries, the conduct of social activities, and social change and development. Through it, sound knowledge and decisions are reached by analyzing the complexities of society from different angles. In a modern civilized society, people need accurate knowledge about how they live, their behavior, customs and principles. The need for political sociology is undeniable, particularly when new plans are to be adopted for the development of society, and its practice is essential if any country, nation or society is to move forward on the path of sustainable development. Research has shown that political sociology is essential to assessing the social impact of development, to the sociological analysis of poverty and underdevelopment, and to the development of human values in individual life. Its importance for understanding society as a whole is equally undeniable: to recognize social problems, identify them and trace their causes, one needs political sociology. It also provides knowledge of the class structure of society and of the people of different classes and professions who live in it, as well as of the various societies, communities and groups, at home and abroad, in which individuals participate. Research therefore indicates that, to solve any social task successfully, society must be known in full. Media Studies: Media studies are directly related to socialization, and media strategy has had a positive impact on the management and direction of society. At present, media studies in Bangladesh are working to provide up-to-date, quality higher education. Departments of Journalism, Communication and Media Studies have been introduced in different universities of Bangladesh and have gained immense popularity since their inception. Top degree holders, as well as eminent editors, senior journalists, writers and researchers, contribute their expertise there. There is now ample scope for careers in newspapers, magazines, radio, television and online media, as well as outside these, for example as an advertising or documentary filmmaker, or in domestic and foreign NGOs and other corporate organizations. According to the study, media studies have had a positive impact on the media in Bangladesh, especially television channels, on the expansion and development of online media, and on the creation of clear ideas about communication, journalism and the media. Workshops, seminars and discussions on contemporary national and international issues are held in addition to theoretical instruction. Considering the present times, journalism, communication and mass media are exceptional and challenging subjects compared with traditional disciplines, and they offer a unique opportunity to build a modern society of taste and character, not merely to secure employment.
Keywords: Bangladesh, Dhaka, social activities, political sociology
Procedia PDF Downloads 150
941 Comparison of Microstructure, Mechanical Properties and Residual Stresses in Laser and Electron Beam Welded Ti–5Al–2.5Sn Titanium Alloy
Authors: M. N. Baig, F. N. Khan, M. Junaid
Abstract:
Titanium alloys are widely employed in aerospace, medical, chemical, and marine applications. These alloys offer many advantages, such as low specific weight, high strength-to-weight ratio, excellent corrosion resistance, a high melting point, and good fatigue behavior. These attractive properties make titanium alloys unique, and they therefore require special attention in all areas of processing, especially welding. In this work, 1.6 mm thick sheets of Ti-5Al-2.5Sn, an alpha titanium (α-Ti) alloy, were welded using electron beam welding (EBW) and laser beam welding (LBW) to achieve a full-penetration bead-on-plate (BoP) configuration. The weldments were studied using polarized optical microscopy, SEM, EDS, and XRD. The microhardness distribution across the weld zone and the smooth and notch tensile strengths of the weldments were also recorded. Residual stresses, measured by the hole-drilling strain measurement (HDSM) method, and the deformation patterns of the weldments were determined in order to compare the two welding processes. The fusion zone widths of the EBW and LBW weldments were found to be approximately equal, owing to the fairly similar high power densities of the two processes. Relatively lower oxide content, and consequently higher joint quality, was achieved in the EBW weldment compared with LBW due to the vacuum environment and the absence of any shielding gas. However, a wider heat-affected zone and partial α′-martensitic transformation in the fusion zone of the EBW weldment were observed because of the lower cooling rates associated with EBW compared with LBW. The microstructure in the fusion zone of the EBW weldment comprised both acicular α and α′ martensite within the prior β grains, whereas complete α′-martensitic transformation was observed within the fusion zone of the LBW weldment. The hardness of the fusion zone in the EBW weldment was lower than that of the LBW weldment due to these microstructural differences. The notch tensile specimen of the LBW weldment exhibited higher load capacity, ductility, and absorbed energy compared with the EBW specimen due to the presence of the high-strength α′-martensitic phase. The sheet deformation and deformation angle in the EBW weldment were greater than in the LBW weldment due to relatively more heat retention in EBW, which led to higher thermal strains and hence larger deformations and deformation angles. The lowest residual stresses, tensile in nature, were found in the LBW weldments, owing to the high power density and higher cooling rates associated with the LBW process. The EBW weldment exhibited the highest compressive residual stresses, due to which its service life is expected to improve.
Keywords: Laser and electron beam welding, Microstructure and mechanical properties, Residual stress and distortions, Titanium alloys
Procedia PDF Downloads 225
940 The Quantum Theory of Music and Languages
Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie
Abstract:
The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original, and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to a dialogue between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, machine translation, and artificial intelligence. When this theory is applied to any text of a folk song in a tone language, one does not only reconstruct the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. As experimentation confirming the theory, the author designed a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, a music reading and writing software package is used to collect the data extracted from the author's mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is carried out from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a text on a computer, a structured song (chorus-verse), and instructs the machine to produce a melody of blues, jazz, world music, variety, etc. The software runs, offering a choice of harmonies, after which the melody is selected.
Keywords: music, entanglement, language, science
Procedia PDF Downloads 79
939 Impact Location From Instrumented Mouthguard Kinematic Data In Rugby
Authors: Jazim Sohail, Filipe Teixeira-Dias
Abstract:
Mild traumatic brain injury (mTBI) within non-helmeted contact sports is a growing concern due to the serious risk of potential injury. Extensive research is being conducted looking into head kinematics in non-helmeted contact sports utilizing instrumented mouthguards that allow researchers to record accelerations and velocities of the head during and after an impact. This does not, however, allow the location of the impact on the head, and its magnitude and orientation, to be determined. This research proposes and validates two methods to quantify impact locations from instrumented mouthguard kinematic data, one using rigid body dynamics, the other utilizing machine learning. The rigid body dynamics technique focuses on establishing and matching moments from Euler’s and torque equations in order to find the impact location on the head. The methodology is validated with impact data collected from a lab test with the dummy head fitted with an instrumented mouthguard. Additionally, a Hybrid III Dummy head finite element model was utilized to create synthetic kinematic data sets for impacts from varying locations to validate the impact location algorithm. The algorithm calculates accurate impact locations; however, it will require preprocessing of live data, which is currently being done by cross-referencing data timestamps to video footage. The machine learning technique focuses on eliminating the preprocessing aspect by establishing trends within time-series signals from instrumented mouthguards to determine the impact location on the head. An unsupervised learning technique is used to cluster together impacts within similar regions from an entire time-series signal. The kinematic signals established from mouthguards are converted to the frequency domain before using a clustering algorithm to cluster together similar signals within a time series that may span the length of a game. Impacts are clustered within predetermined location bins. The same Hybrid III Dummy finite element model is used to create impacts that closely replicate on-field impacts in order to create synthetic time-series datasets consisting of impacts in varying locations. These time-series data sets are used to validate the machine learning technique. The rigid body dynamics technique provides a good method to establish accurate impact location of impact signals that have already been labeled as true impacts and filtered out of the entire time series. However, the machine learning technique provides a method that can be implemented with long time series signal data but will provide impact location within predetermined regions on the head. Additionally, the machine learning technique can be used to eliminate false impacts captured by sensors saving additional time for data scientists using instrumented mouthguard kinematic data as validating true impacts with video footage would not be required.Keywords: head impacts, impact location, instrumented mouthguard, machine learning, mTBI
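As an illustration of the unsupervised, frequency-domain clustering idea described above, the sketch below segments a long kinematic signal into candidate impact windows, converts each window to its magnitude spectrum, and clusters the windows into a small number of location bins. The windowing scheme, threshold, sampling rate, and bin count are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of the frequency-domain clustering approach (assumptions noted above).
import numpy as np
from sklearn.cluster import KMeans

def impact_windows(signal, fs, window_s=0.05, threshold=10.0):
    """Split a long kinematic time series into fixed-length windows whose
    peak magnitude exceeds a crude impact threshold (in g)."""
    n = int(window_s * fs)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    return [w for w in windows if np.max(np.abs(w)) > threshold]

def spectral_features(window):
    """Magnitude spectrum of one impact window (frequency-domain features)."""
    return np.abs(np.fft.rfft(window))

fs = 3200                               # assumed mouthguard sampling rate in Hz
signal = np.random.randn(fs * 60)       # placeholder for one minute of acceleration data

candidates = impact_windows(signal, fs)
if candidates:
    X = np.vstack([spectral_features(w) for w in candidates])
    # Cluster impacts into a predetermined number of location bins
    # (e.g., front, rear, left, right); bin labels would be assigned afterwards
    # by comparing clusters against synthetic FE impacts of known location.
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)
```

In practice, the synthetic Hybrid III finite element data sets mentioned in the abstract would supply labeled impacts for checking which cluster corresponds to which head region.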
Procedia PDF Downloads 216
938 A Comparative Study of the Tribological Behavior of Bilayer Coatings for Machine Protection
Authors: Cristina Diaz, Lucia Perez-Gandarillas, Gonzalo Garcia-Fuentes, Simone Visigalli, Roberto Canziani, Giuseppe Di Florio, Paolo Gronchi
Abstract:
During their lifetime, industrial machines are often subjected to chemical, mechanical and thermal extreme conditions. In some cases, the loss of efficiency comes from the degradation of the surface as a result of its exposition to abrasive environments that can cause wear. This is a common problem to be solved in industries of diverse nature such as food, paper or concrete industries, among others. For this reason, a good selection of the material is of high importance. In the machine design context, stainless steels such as AISI 304 and 316 are widely used. However, the severity of the external conditions can require additional protection for the steel and sometimes coating solutions are demanded in order to extend the lifespan of these materials. Therefore, the development of effective coatings with high wear resistance is of utmost technological relevance. In this research, bilayer coatings made of Titanium-Tantalum, Titanium-Niobium, Titanium-Hafnium, and Titanium-Zirconium have been developed using magnetron sputtering configuration by PVD (Physical Vapor Deposition) technology. Their tribological behavior has been measured and evaluated under different environmental conditions. Two kinds of steels were used as substrates: AISI 304, AISI 316. For the comparison with these materials, titanium alloy substrate was also employed. Regarding the characterization, wear rate and friction coefficient were evaluated by a tribo-tester, using a pin-on-ball configuration with different lubricants such as tomato sauce, wine, olive oil, wet compost, a mix of sand and concrete with water and NaCl to approximate the results to real extreme conditions. In addition, topographical images of the wear tracks were obtained in order to get more insight of the wear behavior and scanning electron microscope (SEM) images were taken to evaluate the adhesion and quality of the coating. The characterization was completed with the measurement of nanoindentation hardness and elastic modulus. Concerning the results, thicknesses of the samples varied from 100 nm (Ti-Zr layer) to 1.4 µm (Ti-Hf layer) and SEM images confirmed that the addition of the Ti layer improved the adhesion of the coatings. Moreover, results have pointed out that these coatings have increased the wear resistance in comparison with the original substrates under environments of different severity. Furthermore, nanoindentation hardness results showed an improvement of the elastic strain to failure and a high modulus of elasticity (approximately 200 GPa). As a conclusion, Ti-Ta, Ti-Zr, Ti-Nb, and Ti-Hf are very promising and effective coatings in terms of tribological behavior, improving considerably the wear resistance and friction coefficient of typically used machine materials.Keywords: coating, stainless steel, tribology, wear
Procedia PDF Downloads 148
937 Tuberculosis Outpatient Treatment in the Context of Reformation of the Health Care System
Authors: Danylo Brindak, Viktor Liashko, Olexander Chepurniy
Abstract:
Despite considerable experience in implementation of the best international approaches and services within response to epidemy of multi-drug resistant tuberculosis, the results of situation analysis indicate the presence of faults in this area. In 2014, Ukraine (for the first time) was included in the world’s five countries with the highest level of drug-resistant tuberculosis. The effectiveness of its treatment constitutes only 35% in the country. In this context, the increase in allocation of funds to control the epidemic of multidrug-resistant tuberculosis does not produce perceptible positive results. During 2001-2016, only the Global Fund to fight AIDS, Tuberculosis, and Malaria allocated to Ukraine more than USD 521,3 million for programs of tuberculosis and HIV/AIDS control. However, current conditions in post-Semashko system create little motivation for rational use of resources or cost control at inpatient TB facilities. There is no motivation to reduce overdue hospitalization and to target resources to priority sectors of modern tuberculosis control, including a model of care focused on the patient. In the presence of a line-item budget at medical institutions, based on the input factors as the ratios of beds and staff, there is a passive disposal of budgetary funds by health care institutions and their employees who have no motivation to improve quality and efficiency of service provision. Outpatient treatment of tuberculosis is being implemented in Ukraine since 2011 and has many risks, namely creation of parallel systems, low consistency through dependence on funding for the project, reduced the role of the family doctor, the fragmentation of financing, etc. In terms of reforming approaches to health system financing, which began in Ukraine in late 2016, NGO Infection Control in Ukraine conducted piloting of a new, motivating method of remuneration of employees in primary health care. The innovative aspect of this funding mechanism is cost according to results of treatment. The existing method of payment on the basis of the standard per inhabitant (per capita ratio) was added with motivating costs according to results of work. The effectiveness of such treatment of TB patients at the outpatient stage is 90%, while in whole on the basis of a current system the effectiveness of treatment of newly diagnosed pulmonary TB with positive swab is around 60% in the country. Even though Ukraine has 5.24 TB beds per 10 000 citizens. Implemented pilot model of ambulatory treatment will be used for the creation of costs system according to results of activities, the integration of TB and primary health and social services and their focus on achieving results, the reduction of inpatient treatment of tuberculosis.Keywords: health care reform, multi-drug resistant tuberculosis, outpatient treatment efficiency, tuberculosis
Procedia PDF Downloads 146
936 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for nutrient soil mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Networks (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorous and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of ANN (pattern reconstruction structures) to classify soil properties and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project at Selangor of Malaysia were used. Soil maps were produced by the Kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples respectively produced by the Kriging method. A set of parameters was defined to measure the similarity of the maps that were generated with the proposed method, termed the sample reduction method. The results show that the maps that were generated through the sample reduction method were more accurate than the corresponding base maps produced through a smaller number of real samples. For example, nitrogen maps that were produced from 118, 59 and 30 real samples have 78%, 62%, 41% similarity, respectively with the base map (236 samples) and the sample reduction method increased similarity to 87%, 77%, 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN predictive samples to achieve the specified level of accuracy.Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
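A minimal sketch of the predictive-soil-mapping step is given below: a feed-forward network is trained on the sampled points and then used to predict macronutrient values at un-sampled locations, producing the "virtual" samples that are later combined with the real ones before kriging. The feature choice (easting, northing), the network size, and the placeholder data are assumptions for illustration, not the study's settings.

```python
# Sketch of generating "virtual" samples with a feed-forward network (assumptions noted above).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 1000, size=(118, 2))   # placeholder coordinates (m) of real samples
y = rng.uniform(0, 1, size=(118, 3))      # placeholder N, P, K values at those points

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0),
)
model.fit(X, y)

# Predict virtual samples at un-sampled points; real + virtual values (236 total)
# are then passed to the kriging interpolator to produce the nutrient map.
X_unsampled = rng.uniform(0, 1000, size=(118, 2))
virtual_values = model.predict(X_unsampled)
print(virtual_values.shape)               # (118, 3): N, P, K per un-sampled point
```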
Procedia PDF Downloads 70
935 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)
Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen
Abstract:
Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report that they are being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity for sensory inputs across the senses in children and adults with ADHD. Especially the auditory modality is affected by deficient acoustical inhibition and modulation of signals. While studying unimodal signal-processing is relevant and well-suited in a controlled laboratory environment, everyday life situations occur multimodal. A complex interplay of the senses is necessary to form a unified percept. In order to achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study we investigate MI in an adult ADHD sample using the McGurk-effect – a well-known illusion where incongruent speech like phonemes lead in case of successful integration to a new perceived phoneme via late top-down attentional allocation . In ADHD neuronal dysregulation at rest e.g., aberrant within or between network functional connectivity may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation of deficient network connectivity and the ability of stimulus integration. Method: Twenty-five ADHD patients (6 females, age: 30.08 (SD:9,3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, where - in case of successful MI - incongruent speech-like phonemes between visual and auditory modality are leading to a perception of a new phoneme. Mann-Whitney-U test was applied to assess statistical differences between groups. Echo-planar imaging-resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner. A seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to McGurk was significantly lowered for ADHD patients (ADHDMdn:5.83%, ControlsMdn:44.2%, U= 160.5, p=0.022, r=-0.34). When ADHD patients integrated phonemes, reaction times were significantly longer (ADHDMdn:1260ms, ControlsMdn:582ms, U=41.0, p<.000, r= -0.56). In functional connectivity medio temporal gyrus (seed) was negatively associated with primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient for ADHD patients for stimuli that need top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal, MI convergence zones for complex stimuli in ADHD patients.Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity
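The group comparison reported above (Mann-Whitney U with an effect size r) can be reproduced with a few lines of SciPy; the sketch below uses placeholder data, not the study's raw values, and computes the rank-biserial correlation as r = 1 − 2U/(n₁n₂).

```python
# Sketch of the reported nonparametric group comparison (placeholder data).
import numpy as np
from scipy.stats import mannwhitneyu

adhd = np.array([0.0, 2.1, 5.8, 7.5, 11.0])          # % McGurk responses, ADHD (placeholder)
controls = np.array([30.0, 41.5, 44.2, 52.3, 60.1])  # % McGurk responses, controls (placeholder)

u_stat, p_value = mannwhitneyu(adhd, controls, alternative="two-sided")
r = 1 - 2 * u_stat / (len(adhd) * len(controls))     # rank-biserial effect size
print(f"U = {u_stat:.1f}, p = {p_value:.3f}, r = {r:.2f}")
```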
Procedia PDF Downloads 124
934 Adapting Cyber Physical Production Systems to Small and Mid-Size Manufacturing Companies
Authors: Yohannes Haile, Dipo Onipede, Jr., Omar Ashour
Abstract:
The main thrust of our research is to determine Industry 4.0 readiness of small and mid-size manufacturing companies in our region and assist them to implement Cyber Physical Production System (CPPS) capabilities. Adopting CPPS capabilities will help organizations realize improved quality, order delivery, throughput, new value creation, and reduced idle time of machines and work centers of their manufacturing operations. The key metrics for the assessment include the level of intelligence, internal and external connections, responsiveness to internal and external environmental changes, capabilities for customization of products with reference to cost, level of additive manufacturing, automation, and robotics integration, and capabilities to manufacture hybrid products in the near term, where near term is defined as 0 to 18 months. In our initial evaluation of several manufacturing firms which are profitable and successful in what they do, we found low level of Physical-Digital-Physical (PDP) loop in their manufacturing operations, whereas 100% of the firms included in this research have specialized manufacturing core competencies that have differentiated them from their competitors. The level of automation and robotics integration is low to medium range, where low is defined as less than 30%, and medium is defined as 30 to 70% of manufacturing operation to include automation and robotics. However, there is a significant drive to include these capabilities at the present time. As it pertains to intelligence and connection of manufacturing systems, it is observed to be low with significant variance in tying manufacturing operations management to Enterprise Resource Planning (ERP). Furthermore, it is observed that the integration of additive manufacturing in general, 3D printing, in particular, to be low, but with significant upside of integrating it in their manufacturing operations in the near future. To hasten the readiness of the local and regional manufacturing companies to Industry 4.0 and transitions towards CPPS capabilities, our working group (ADMAR Working Group) in partnership with our university have been engaged with the local and regional manufacturing companies. The goal is to increase awareness, share know-how and capabilities, initiate joint projects, and investigate the possibility of establishing the Center for Cyber Physical Production Systems Innovation (C2P2SI). The center is intended to support the local and regional university-industry research of implementing intelligent factories, enhance new value creation through disruptive innovations, the development of hybrid and data enhanced products, and the creation of digital manufacturing enterprises. All these efforts will enhance local and regional economic development and educate students that have well developed knowledge and applications of cyber physical manufacturing systems and Industry 4.0.Keywords: automation, cyber-physical production system, digital manufacturing enterprises, disruptive innovation, new value creation, physical-digital-physical loop
Procedia PDF Downloads 138
933 A Case Study Demonstrating the Benefits of Low-Carb Eating in an Adult with Latent Autoimmune Diabetes Highlights the Necessity and Effectiveness of These Dietary Therapies
Authors: Jasmeet Kaur, Anup Singh, Shashikant Iyengar, Arun Kumar, Ira Sahay
Abstract:
Latent autoimmune diabetes in adults (LADA) is an irreversible autoimmune disease that affects insulin production. LADA is characterized by the production of Glutamic acid decarboxylase (GAD) antibodies, which is similar to type 1 diabetes. Individuals with LADA may eventually develop overt diabetes and require insulin. In this condition, the pancreas produces little or no insulin, which is a hormone used by the body to allow glucose to enter cells and produce energy. While type 1 diabetes was traditionally associated with children and teenagers, its prevalence has increased in adults as well. LADA is frequently misdiagnosed as type 2 diabetes, especially in adulthood when type 2 diabetes is more common. LADA develops in adulthood, usually after age 30. Managing LADA involves metabolic control with exogenous insulin and prolonging the life of surviving beta cells, thereby slowing the disease's progression. This case study examines the impact of approximately 3 months of low-carbohydrate dietary intervention in a 42-year-old woman with LADA who was initially misdiagnosed as having type 2 diabetes. Her c-peptide was 0.13 and her HbA1c was 9.3% when this trial began. Low-carbohydrate interventions have been shown to improve blood sugar levels, including fasting, post-meal, and random blood sugar levels, as well as haemoglobin levels, blood pressure, energy levels, sleep quality, and satiety levels. The use of low-carbohydrate dietary intervention significantly reduces both hypo- and hyperglycaemia events. During the 3 months of the study, there were 2 to 3 hyperglycaemic events owing to physical stress and a single hypoglycaemic event. Low-carbohydrate dietary therapies lessen insulin dose inaccuracy, which explains why there were fewer hyperglycaemic and hypoglycaemic events. In three months, the glycated haemoglobin (HbA1c) level was reduced from 9.3% to 6.3%. These improvements occur without the need for caloric restriction or physical activity. Stress management was crucial aspect of the treatment plan as stress-induced neuroendocrine hormones can cause immunological dysregulation. Additionally, supplements that support immune system and reduce inflammation were used as part of the treatment during the trial. Long-term studies are needed to track disease development and corroborate the claim that such dietary treatments can prolong the honeymoon phase in LADA. Various factors can contribute to additional autoimmune attacks, so measuring c-peptide is crucial on a regular basis to determine whether insulin levels need to be adjusted.Keywords: autoimmune, diabetes, LADA, low_carb, nutrition
Procedia PDF Downloads 37
932 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of an airborne laser scanning (ALS) point cloud is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most widely used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. The data are rarely linearly separable; SVMs are able to map the data into a higher-dimensional space where they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared with several other methods. Such properties are particularly suited to remote sensing classification problems and explain its recent adoption. In this poster, an SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. A radial basis function (RBF) kernel is used because only a few parameters (C and γ) need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy, using grid search and 5-fold cross-validation. The LiDAR point cloud used is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets are selected randomly several times. The results demonstrated that parameter selection can orient the search within a restricted interval of (C and γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision trees. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data compared with the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
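The parameter-selection step described above maps directly to a grid search over (C, γ) with 5-fold cross-validation on an RBF-kernel SVM; the sketch below shows this with scikit-learn. The feature matrix, class labels, and grid ranges are placeholders, not the study's actual per-cluster features or values.

```python
# Sketch of (C, gamma) selection by grid search with 5-fold CV (placeholder data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))        # placeholder per-cluster features from the ALS point cloud
y = rng.integers(0, 4, size=500)      # ground + 3 roof-superstructure classes

param_grid = {
    "C": [0.1, 1, 10, 100, 1000],
    "gamma": [1e-3, 1e-2, 1e-1, 1, 10],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("best (C, gamma):", search.best_params_)
print("cross-validated overall accuracy:", round(search.best_score_, 3))
```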
Procedia PDF Downloads 146
931 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV from the clinical data. Qualitative features of the clinical images were ranked visually by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a shortened acquisition duration (2 min/bp) produced approximately the same noise level as OSEM at a longer acquisition duration (5 min/bp). For a β-value of 400 at a 2 min/bp duration, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at a 5 min/bp duration. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV. The lowest SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, because noise was reduced more than SUV at the highest β-value. In comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm, run with a larger number of iterations, leads to better quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten the acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
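For reference, the relative difference penalty mentioned above is commonly published in the form below; the vendor's exact implementation may differ in details such as neighbourhood weighting, so this is a sketch of the standard formulation rather than the scanner's specification. Here x_j and x_k are neighbouring voxel values, β is the penalty strength varied in the study, and γ is an edge-preservation parameter.

```latex
% Commonly published form of the relative difference penalty used with BSREM.
\[
  R(\mathbf{x}) \;=\; \beta \sum_{j}\sum_{k \in N_j}
  \frac{\left(x_j - x_k\right)^{2}}
       {\,x_j + x_k + \gamma\left|x_j - x_k\right|\,},
\qquad
  \hat{\mathbf{x}} \;=\; \arg\max_{\mathbf{x}\ge 0}
  \bigl[\, L(\mathbf{y}\,|\,\mathbf{x}) - R(\mathbf{x}) \,\bigr],
\]
```

where L(y|x) is the Poisson log-likelihood of the measured data, so a larger β penalizes differences between neighbouring voxels more strongly and suppresses noise, consistent with the trends reported above.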
Procedia PDF Downloads 94
930 Privacy Paradox and the Internet of Medical Things
Authors: Isabell Koinig, Sandra Diehl
Abstract:
In recent years, the health-care context has not been left unaffected by technological developments. In recent years, the Internet of Medical Things (IoMT)has not only led to a collaboration between disease management and advanced care coordination but also to more personalized health care and patient empowerment. With more than 40 % of all health technology being IoMT-related by 2020, questions regarding privacy become more prevalent, even more so during COVID-19when apps allowing for an intensive tracking of people’s whereabouts and their personal contacts cause privacy advocates to protest and revolt. There is a widespread tendency that even though users may express concerns and fears about their privacy, they behave in a manner that appears to contradict their statements by disclosing personal data. In literature, this phenomenon is discussed as a privacy paradox. While there are some studies investigating the privacy paradox in general, there is only scarce research related to the privacy paradox in the health sector and, to the authors’ knowledge, no empirical study investigating young people’s attitudes toward data security when using wearables and health apps. The empirical study presented in this paper tries to reduce this research gap by focusing on the area of digital and mobile health. It sets out to investigate the degree of importance individuals attribute to protecting their privacy and individual privacy protection strategies. Moreover, the question to which degree individuals between the ages of 20 and 30 years are willing to grant commercial parties access to their private data to use digital health services and apps are put to the test. To answer this research question, results from 6 focus groups with 40 participants will be presented. The focus was put on this age segment that has grown up in a digitally immersed environment. Moreover, it is particularly the young generation who is not only interested in health and fitness but also already uses health-supporting apps or gadgets. Approximately one-third of the study participants were students. Subjects were recruited in August and September 2019 by two trained researchers via email and were offered an incentive for their participation. Overall, results indicate that the young generation is well informed about the growing data collection and is quite critical of it; moreover, they possess knowledge of the potential side effects associated with this data collection. Most respondents indicated to cautiously handle their data and consider privacy as highly relevant, utilizing a number of protective strategies to ensure the confidentiality of their information. Their willingness to share information in exchange for services was only moderately pronounced, particularly in the health context, since health data was seen as valuable and sensitive. The majority of respondents indicated to rather miss out on using digital and mobile health offerings in order to maintain their privacy. While this behavior might be an unintended consequence, it is an important piece of information for app developers and medical providers, who have to find a way to find a user base for their products against the background of rising user privacy concerns.Keywords: digital health, privacy, privacy paradox, IoMT
Procedia PDF Downloads 136
929 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, provides well-defined shapes and RDF graphs, named "shape graphs". These shape graphs validate other resource description framework (RDF) graphs which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value type, and other constraints. Moreover, the framework of SHACL supports high-level validation by expressing more complex conditions in languages such as SPARQL protocol and RDF Query Language (SPARQL). SHACL includes two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components. While SHACL-SPARQL is an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of reconciled data mechanisms, as the enhancement of different datasets linking is a sustainable process. The conventional validation methods are the semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistency, while the latter validates the data contradiction. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate as it does not test the mapping model comprehensively. Therefore, there is a serious need to expose a new methodology that covers the entire validation aspects for linking and mapping diverse datasets. Our goal is to conduct a new approach to achieve optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) conceptual reference model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs were determined. As a case study, we performed SHACL over a CIDOC-CRM dataset after running a Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation which constrains whether the source data is mapped to the correct data type. For instance, checking whether a birthdate is assigned to xsd:datetime and linked to Person entity via crm:P82a_begin_of_the_begin property. b) Data integrity validation which detects inconsistent data. For instance, inspecting whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, 2) selecting the most suitable techniques for those various categories of validation tasks. The next plan is to establish a comprehensive validation model and generate SHACL shapes automatically.Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
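A minimal sketch of the datatype-validation case described above is shown below, run with the pySHACL library against an rdflib graph: a shape requires that the value attached via crm:P82a_begin_of_the_begin be typed xsd:dateTime, and the sample data deliberately violates it. The namespaces, shape name, example instance, and the direct attachment of the property to the person are simplified assumptions, not the project's actual mapping.

```python
# Sketch of a SHACL datatype check with pySHACL (simplified, hypothetical shapes/data).
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/shapes#> .

ex:BirthDateShape
    a sh:NodeShape ;
    sh:targetClass crm:E21_Person ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;   # data-type validation
        sh:maxCount 1 ;              # simple data-integrity constraint
    ] .
"""

data_ttl = """
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/data#> .

ex:person_1 a crm:E21_Person ;
    crm:P82a_begin_of_the_begin "1875-04-03"^^xsd:date .   # wrong datatype on purpose
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)        # False: the literal is xsd:date, not xsd:dateTime
print(report_text)
```

More complex integrity rules, such as checking that a birthdate precedes linked event dates, would be expressed with SHACL-SPARQL constraints in the same shapes graph.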
Procedia PDF Downloads 251
928 Survey of Prevalence of Noise Induced Hearing Loss in Hawkers and Shopkeepers in Noisy Areas of Mumbai City
Authors: Hitesh Kshayap, Shantanu Arya, Ajay Basod, Sachin Sakhuja
Abstract:
This study was undertaken to measure the overall noise levels in different locations/zones and to estimate the prevalence of noise-induced hearing loss in hawkers and shopkeepers in Mumbai, India. The Hearing Test developed by the American Academy of Otolaryngology, translated from English into Hindi and validated, was employed as a screening tool for hearing sensitivity. The tool has 14 items, each scored on a scale of 0, 1, 2 or 3. A score of 6 or above indicated some or definite difficulty in hearing in daily activities, while a lower score indicated lesser difficulty or normal hearing. Subjects who scored 6 or above, or who reported tinnitus, underwent hearing evaluation by pure tone audiometry. Further, environmental noise levels were measured from morning to evening at the roadside at different locations/hawking zones in Mumbai city using a digital sound level meter (SLM 9 Agronic 8928, B & K type) in dB(A). The maximum noise level of 100.0 dB(A) was recorded during evening hours from Chhatrapati Shivaji Terminus to Colaba, with an overall noise level of 79.0 dB(A); the minimum noise level in this area was 72.6 dB(A) at any given time. A minimum noise level of 54.6 dB(A) was recorded during 8-9 am at Sion Circle. The commissioning of flyovers with two-tier traffic, skywalks, increasing vehicular traffic, high-rise buildings and other commercial and urbanization activities in Mumbai city have most probably increased the overall environmental noise levels, while trees that acted as noise absorbers have been cut owing to rapid construction. The study involved 100 participants aged 18 to 40 years, with a mean age of 29 years (SD = 6.49). The 46 participants who had tinnitus or obtained a score of 6 or above underwent pure tone audiometry, and the prevalence of hearing loss in hawkers and shopkeepers was found to be 19% (10% hawkers and 9% shopkeepers). The results indicate that 29 of the 64 hawkers (42.6%) and 17 of the 36 shopkeepers (47.2%) underwent PTA, with no significant difference between the two groups in the percentage of noise-induced hearing loss. The results also reveal that 19 of the 46 participants (41.3%) who exhibited tinnitus had mild to moderate sensorineural hearing loss between 3000 Hz and 6000 Hz. The pure tone audiogram pattern revealed hearing loss at 4000 Hz and 6000 Hz, while hearing at adjacent frequencies was nearly normal. Seven hawkers and eight shopkeepers had a mild notch, while three hawkers and one shopkeeper had a moderate notch. It is thus inferred that tinnitus is a strong indicator of the presence of hearing loss, and that a 4/6 kHz notch is a strong marker of road/traffic/environmental noise as an occupational hazard for hawkers and shopkeepers. Mass awareness of these occupational hazards, regular hearing check-ups and early intervention, along with sustainable development and social and urban forestry, can help in this regard.
Keywords: NIHL, noise, sound level meter, tinnitus
Procedia PDF Downloads 198
927 Phylogenetic Analysis of Georgian Populations of Potato Cyst Nematodes Globodera Rostochiensis
Authors: Dali Gaganidze, Ekaterine Abashidze
Abstract:
Potato is one of the main agricultural crops in Georgia, which produces early and late potato varieties in almost all regions. In the traditional potato-growing regions (Svaneti, Samtskhe-Javakheti and Tsalka), the yield is higher than 30-35 t/ha. Among the plant pests that limit potato production and quality, the potato cyst nematodes (PCN) are harmful around the world, and yield losses caused by PCN are estimated at up to 30%. Surveys conducted in two geographically distinct potato-producing regions of Georgia, Samtskhe-Javakheti and Svaneti, revealed the potato cyst nematode Globodera rostochiensis. The aim of the study was the phylogenetic analysis of G. rostochiensis found in Georgia through the amplification and sequencing of the D3 region of the 28S gene and the intergenic ITS1-5.8S-ITS2 region. The identification of all samples from the two Globodera populations (Samtskhe-Javakheti and Svaneti), i.e., G. rostochiensis (20 isolates), was confirmed by conventional multiplex PCR with the universal primer ITS5 and the primers PITSp4 and PITSr3, specific to the cyst nematodes G. pallida and G. rostochiensis. The PCR fragment size of 434 bp confirms that the PCN samples from the two populations belong to G. rostochiensis. The ITS1-5.8S-ITS2 regions were amplified using the primer pair rDNA1 (5'-TTGATTACGTCCCTGCCCTTT-3') and rDNA2 (5'-TTTCACTCGCCGTTACTAAGG-3'); the D3 expansion regions were amplified using the primer pair D3A (5'-GACCCCTCTTGAAACACGGA-3') and D3B (5'-TCGGAAGGAACCAGCTACTA-3'). The PCR products of each region were cleaned up and sequenced using an ABI 3500xL Genetic Analyzer. The sequencing results were analyzed with the BLASTN program (https://blast.ncbi.nlm.nih.gov/Blast.cg). Phylogenetic analyses to resolve the relationships between the isolates were conducted in MEGA7 using both distance- and character-based methods. Based on the analysis of the D3 expansion region, the G. rostochiensis isolates are grouped into three major clades (A, B and C) on the phylogenetic tree; clade A is divided into three subclades and clade C into two subclades. Isolates from the Samtskhe-Javakheti population fall in subclade 1 of clade A and in subclade 1 of clade C. Isolates from the Svaneti population fall in subclade 2 of clade A and in clade B. In clade C, subclade 2 is represented by three isolates from Svaneti and by one isolate (GL17) from Samtskhe-Javakheti. Based on the analysis of the ITS1-5.8S-ITS2 region, the isolates are grouped into two main clades: the first contains 20 Georgian isolates of G. rostochiensis from Svaneti, and the second contains 15 isolates of G. rostochiensis from Samtskhe-Javakheti. Our investigation showed high genetic variation in the D3 and ITS1-5.8S-ITS2 regions of rDNA among the G. rostochiensis isolates from different geographic origins (Svaneti, Samtskhe-Javakheti) in Georgia. Acknowledgement: The research has been supported by the Shota Rustaveli National Scientific Foundation of Georgia, Project # FR17_235.
Keywords: globodera rostochiensi, PCR, phylogenetic tree, sequencing
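The study itself used MEGA7; purely as an illustration of a comparable distance-based workflow in Python, the sketch below builds a neighbour-joining tree from an aligned set of ITS sequences with Biopython. The input file name is hypothetical, and the identity distance model is a simplification of the analyses actually performed.

```python
# Illustrative distance-based phylogeny comparable to the MEGA7 analysis (hypothetical input file).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Assumes the ITS1-5.8S-ITS2 sequences have already been aligned and saved to this file.
alignment = AlignIO.read("globodera_ITS_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")      # simple identity-based distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)           # neighbour-joining tree

Phylo.draw_ascii(tree)                           # quick text rendering of the clades
```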
Procedia PDF Downloads 194
926 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images
Authors: Ravija Gunawardana, Banuka Athuraliya
Abstract:
Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine
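The symptom-based branch described above combines Random Forest, K-Nearest Neighbour and SVM classifiers with ensemble learning; a minimal sketch of that combination using soft voting is shown below. The binary symptom encoding, the number of disease labels, and the data are placeholders, and the Mask R-CNN image branch is not shown.

```python
# Sketch of the symptom-based ensemble (RF + KNN + SVM) with placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 40))   # 300 patients x 40 binary symptom flags (placeholder)
y = rng.integers(0, 5, size=300)         # 5 candidate disease labels (placeholder)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=7)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ],
    voting="soft",                        # average the predicted class probabilities
)

scores = cross_val_score(ensemble, X, y, cv=5)
print("mean cross-validated accuracy:", round(scores.mean(), 3))
```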
Procedia PDF Downloads 151
925 Text Mining Past Medical History in Electrophysiological Studies
Authors: Roni Ramon-Gonen, Amir Dori, Shahar Shelly
Abstract:
Background and objectives: Healthcare professionals produce abundant textual information in their daily clinical practice. The extraction of insights from all the gathered information, mainly unstructured and lacking in normalization, is one of the major challenges in computational medicine. In this respect, text mining assembles different techniques to derive valuable insights from unstructured textual data, so it has led to being especially relevant in Medicine. Neurological patient’s history allows the clinician to define the patient’s symptoms and along with the result of the nerve conduction study (NCS) and electromyography (EMG) test, assists in formulating a differential diagnosis. Past medical history (PMH) helps to direct the latter. In this study, we aimed to identify relevant PMH, understand which PMHs are common among patients in the referral cohort and documented by the medical staff, and examine the differences by sex and age in a large cohort based on textual format notes. Methods: We retrospectively identified all patients with abnormal NCS between May 2016 to February 2022. Age, gender, and all NCS attributes reports were recorded, including the summary text. All patients’ histories were extracted from the text report by a query. Basic text cleansing and data preparation were performed, as well as lemmatization. Very popular words (like ‘left’ and ‘right’) were deleted. Several words were replaced with their abbreviations. A bag of words approach was used to perform the analyses. Different visualizations which are common in text analysis, were created to easily grasp the results. Results: We identified 5282 unique patients. Three thousand and five (57%) patients had documented PMH. Of which 60.4% (n=1817) were males. The total median age was 62 years (range 0.12 – 97.2 years), and the majority of patients (83%) presented after the age of forty years. The top two documented medical histories were diabetes mellitus (DM) and surgery. DM was observed in 16.3% of the patients, and surgery at 15.4%. Other frequent patient histories (among the top 20) were fracture, cancer (ca), motor vehicle accident (MVA), leg, lumbar, discopathy, back and carpal tunnel release (CTR). When separating the data by sex, we can see that DM and MVA are more frequent among males, while cancer and CTR are less frequent. On the other hand, the top medical history in females was surgery and, after that, DM. Other frequent histories among females are breast cancer, fractures, and CTR. In the younger population (ages 18 to 26), the frequent PMH were surgery, fractures, trauma, and MVA. Discussion: By applying text mining approaches to unstructured data, we were able to better understand which medical histories are more relevant in these circumstances and, in addition, gain additional insights regarding sex and age differences. These insights might help to collect epidemiological demographical data as well as raise new hypotheses. One limitation of this work is that each clinician might use different words or abbreviations to describe the same condition, and therefore using a coding system can be beneficial.Keywords: abnormal studies, healthcare analytics, medical history, nerve conduction studies, text mining, textual analysis
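A minimal sketch of the bag-of-words step described above is given below: basic cleansing, substitution of phrases with abbreviations, removal of overly frequent words, and counting of past-medical-history terms. The report texts, the abbreviation map, and the stop-word list are placeholders, and lemmatization is omitted for brevity.

```python
# Sketch of the bag-of-words term counting on PMH text (placeholder reports and mappings).
from sklearn.feature_extraction.text import CountVectorizer

reports = [
    "PMH: diabetes mellitus, status post carpal tunnel release left",
    "PMH: motor vehicle accident, lumbar discopathy, right leg fracture",
    "PMH: breast ca, surgery 2015, diabetes mellitus",
]

abbreviations = {"carpal tunnel release": "ctr", "motor vehicle accident": "mva"}
very_common = ["left", "right", "pmh", "status", "post"]   # dropped, as in the study

def clean(text):
    text = text.lower()
    for phrase, abbr in abbreviations.items():
        text = text.replace(phrase, abbr)
    return text

vectorizer = CountVectorizer(stop_words=very_common)
counts = vectorizer.fit_transform(clean(r) for r in reports)

totals = counts.sum(axis=0).A1               # frequency of each PMH term across reports
for term, n in sorted(zip(vectorizer.get_feature_names_out(), totals),
                      key=lambda t: -t[1]):
    print(term, int(n))
```

Splitting the same counts by patient sex or age group would reproduce the kind of comparison reported in the results.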
Procedia PDF Downloads 94924 The Impact of Emotional Intelligence on Organizational Performance
Authors: El Ghazi Safae, Cherkaoui Mounia
Abstract:
Within companies, emotions have long been overlooked as key elements of successful management systems, seen instead as factors that disturb judgment, provoke reckless acts, or negatively affect decision-making. This is because management systems were influenced by the Taylorist image of the worker, which made work regular and plain and treated employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements for attaining high-level management. The work of Elton Mayo and Kurt Lewin revealed the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes: for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance and team commitment. Emotions have thus become fundamental and indispensable to individual output and, by extension, to management efficiency. The idea that a person’s potential is determined by intellectual intelligence, measured by the IQ, as the main factor of social, professional and even sentimental success is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, are able to achieve organizational objectives, influence the mood of their subordinates and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it is worth questioning the importance of emotional intelligence: does it impact organizational performance, and if so, how? The literature highlights that the measurement and conceptualization of emotional intelligence are difficult to define. Efforts to measure emotional intelligence have identified three models that are most prominent: the ability model, the mixed model, and the trait model. The first treats emotional intelligence as a cognitive skill, the second mixes emotional skills with personality-related aspects, and the third intertwines it with personality traits. However, despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, because even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept, and many authors insist on the vagueness that surrounds it. Given the above, this article provides an overview of the research related to emotional intelligence, particularly focusing on studies that investigated its impact on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it impacts companies’ performance. Keywords: emotions, performance, intelligence, firms
Procedia PDF Downloads 106923 Experimental Study of Energy Absorption Efficiency (EAE) of Warp-Knitted Spacer Fabric Reinforced Foam (WKSFRF) Under Low-Velocity Impact
Authors: Amirhossein Dodankeh, Hadi Dabiryan, Saeed Hamze
Abstract:
Using fabrics to reinforce composites leads to considerably improved mechanical properties, including resistance to impact loads and greater energy absorption. Warp-knitted spacer fabrics (WKSF) consist of two layers of warp-knitted fabric connected by pile yarns. These connections create a space between the layers that is filled by the pile yarns and gives the fabric a three-dimensional shape. Because of their unique properties, spacer fabrics are today widely used in the transportation, construction, and sports industries. Polyurethane (PU) foams are commonly used as energy absorbers, but WKSF has much better moisture transfer and compressive properties and lower heat resistance than PU foam. The use of warp-knitted spacer fabric reinforced PU foam (WKSFRF) can therefore lead to a composite with better energy absorption than the foam alone, enhanced mold formation, and improved mechanical properties. In this paper, the energy absorption efficiency (EAE) of WKSFRF under low-velocity impact is investigated experimentally, together with the contribution of each structural parameter of the WKSF to the absorption of impact energy. For this purpose, WKSFs with different structures were produced: two different thicknesses, small and large mesh sizes, and meshes positioned either facing or not facing each other. Six types of composite samples with different structural parameters were then fabricated. Physical properties such as weight per unit area and fiber volume fraction were measured for three samples of each type of composite. A low-velocity impact with an initial energy of 5 J was carried out on three samples of each type. The output of the low-velocity impact test is an acceleration-time (A-T) curve containing many deviating points; in order to obtain usable results, these points were removed using the FILTFILT function of MATLAB R2018a. Using Newtonian mechanics, a force-displacement (F-D) curve was derived from each A-T curve, and the absorbed energy was obtained as the area under the F-D curve. The maximum energy absorption was 2.858 J, obtained for the samples reinforced with the fabric with large mesh, high thickness, and meshes not facing each other. An index called energy absorption efficiency was defined as the absorbed energy of a composite divided by its fiber volume fraction. Using this index, the best EAE among the samples is 21.6, which occurs in the sample with large mesh, high thickness, and meshes facing each other; the EAE of this sample is 15.6% better than the average EAE of the other composite samples. Overall, the energy absorption increased on average by 21.2% with increasing thickness, by 9.5% with increasing mesh size from small to large, and by 47.3% when the position of the meshes changed from facing to non-facing. Keywords: composites, energy absorption efficiency, foam, geometrical parameters, low-velocity impact, warp-knitted spacer fabric
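The post-processing chain described above (zero-phase filtering of the A-T curve, conversion to an F-D curve via Newtonian mechanics, absorbed energy as the area under the F-D curve, and EAE as energy divided by fiber volume fraction) can be sketched in Python. The authors used MATLAB's FILTFILT; scipy.signal.filtfilt is the analogous zero-phase filter. The impactor mass, sampling rate, fiber volume fraction and the synthetic signal below are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch of the impact-test post-processing described above.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50_000.0                        # sampling rate [Hz] (assumed)
m = 5.0                              # impactor mass [kg] (assumed)
v0 = np.sqrt(2 * 5.0 / m)            # impact velocity for a 5 J initial energy [m/s]
t = np.arange(0, 0.01, 1 / fs)
a_raw = -220 * np.sin(np.pi * t / 0.01) + 20 * np.random.randn(t.size)  # noisy A-T data

b, a_coef = butter(4, 1000 / (fs / 2))       # low-pass filter design
a_filt = filtfilt(b, a_coef, a_raw)          # zero-phase smoothing of the A-T curve

# Newton's laws: force from acceleration, displacement by double integration.
force = m * (-a_filt)                        # contact force on the sample [N]
v = v0 + np.cumsum(a_filt) / fs              # impactor velocity [m/s]
x = np.cumsum(v) / fs                        # impactor displacement [m]

E_abs = np.trapz(force, x)                   # absorbed energy = area under the F-D curve [J]
Vf = 0.12                                    # fiber volume fraction (assumed)
EAE = E_abs / Vf                             # energy absorption efficiency as defined above
print(f"Absorbed energy: {E_abs:.3f} J, EAE: {EAE:.1f}")
```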
Procedia PDF Downloads 168922 Extra Skin Removal Surgery and Its Effects: A Comprehensive Review
Authors: Rebin Mzhda Mohammed, Hoshmand Ali Hama Agha
Abstract:
Excess skin, often consequential to substantial weight loss or the aging process, introduces physical discomfort, obstructs daily activities, and undermines an individual's self-esteem. As these challenges become increasingly prevalent, the need to explore viable solutions grows in significance. Extra skin removal surgery, colloquially known as body contouring surgery, has emerged as a compelling intervention to ameliorate the physical and psychological burdens of excess skin. This study undertakes a comprehensive review to illuminate the intricacies of extra skin removal surgery, encompassing its diverse procedures, associated risks, benefits, and psychological implications on patients. The methodological approach adopted involves a systematic and exhaustive review of pertinent scholarly literature sourced from reputable databases, including PubMed, Google Scholar, and specialized cosmetic surgery journals. Articles are meticulously curated based on their relevance, credibility, and recency. Subsequently, data from these sources are synthesized and categorized, facilitating a comprehensive understanding of the subject matter. Qualitative analysis serves to unravel the nuanced psychological effects, while quantitative data, where available, are harnessed to underpin the study's conclusions. In terms of major findings, the research underscores the manifold advantages of extra skin removal surgery. Patients experience a notable improvement in physical comfort, amplified mobility, enhanced self-confidence, and a newfound ability to don clothing comfortably. Nonetheless, the benefits are juxtaposed with potential risks, encompassing infection, scarring, hematoma, delayed healing, and the challenge of achieving symmetry. A salient discovery is the profound psychological impact of the surgery, as patients consistently report elevated body image satisfaction, heightened self-esteem, and a substantial enhancement in overall quality of life. In summation, this research accentuates the pivotal role of extra skin removal surgery in ameliorating the intricate interplay of physical and psychological difficulties posed by excess skin. By elucidating the diverse procedures, associated risks, and psychological outcomes, the study contributes to a comprehensive and informed comprehension of the surgery's multifaceted effects. Therefore, individuals contemplating this transformative surgical option are equipped with comprehensive insights, ultimately fostering informed decision-making, guided by the expertise of medical professionals.Keywords: extra skin removal surgery, body contouring, abdominoplasty, brachioplasty, thigh lift, body lift, benefits, risks, psychological effects
Procedia PDF Downloads 65921 Improving Teaching in English-Medium Instruction Classes at Japanese Universities through Needs-Based Professional Development Workshops
Authors: Todd Enslen
Abstract:
In order to attract more international students to study for undergraduate degrees in Japan, many universities have been developing English-Medium Instruction (EMI) degree programs. This means that many faculty members must now teach their courses in English, which raises a number of concerns. A common misconception of EMI is that teaching in English is simply a matter of translating materials. Since much of the teaching in Japan still relies on a more traditional, teacher-centered approach, continuing with this style in an EMI environment that targets international students can cause a clash between what is happening and what students expect in the classroom, not to mention what the Scholarship of Teaching and Learning (SoTL) has shown to be effective teaching. A variety of considerations need to be taken into account in EMI classrooms, such as the varying English abilities of the students, modifying input material, and assuring comprehension through interactional checks. This paper analyzes the effectiveness of the EMI undergraduate degree programs in engineering, agriculture, and science at a large research university in Japan by presenting the results from student surveys regarding the areas where perceived improvements need to be made. The students were most dissatisfied with communication with their teachers in English, communication with Japanese students in English, adherence to only English being used in the classes, and the quality of the education they received. In addition, the results of a needs analysis survey of Japanese teachers who have to teach in English showed that they believed they were most in need of English vocabulary and expressions to use in the classroom and of teaching methods for teaching in English. The results from the student survey and the faculty survey show similar concerns between the two groups. By helping teachers understand student-centered teaching and the benefits for learning that it provides, teachers may begin to incorporate more student-centered approaches that in turn help to alleviate the dissatisfaction students are currently experiencing. Through analyzing the current environment in Japanese higher education against established best practices in teaching and EMI, three areas that need to be addressed in professional development workshops were identified: “culture” as it relates to the English language, “classroom management techniques” and ways to incorporate them into classes, and “language” issues. Materials used to help faculty better understand best practices as they relate to these specific areas will be provided to help practitioners begin the process of helping EMI faculty build awareness of better teaching practices. Finally, the results from surveys of faculty development workshop participants show the impact that these workshops can have. Almost all of the participants indicated that they learned something new and would like to incorporate the ideas from the workshop into their teaching. In addition, the vast majority of the participants felt the workshop provided them with new information, and they would like more workshops like these. Keywords: English-medium instruction, materials development, professional development, teaching effectiveness
Procedia PDF Downloads 88920 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State
Authors: Tomohiko Utsuki, Kyoka Sato
Abstract:
In emergency medicine, it is recognized that brain resuscitation is very important for reducing mortality and neurological sequelae. In particular, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is required for stabilizing the brain’s physiological state in the treatment of conditions such as brain injury, stroke, and encephalopathy. However, manual control of BT, ICP, and CBF frequently requires decisions and operations by medical staff concerning medication and the settings of therapeutic apparatus. Thus, integrating and automating this control is very effective not only for improving the therapeutic effect but also for reducing staff burden and medical cost. To realize such integration and automation, a mathematical model of the brain's physiological state is necessary as the controlled object in simulations, because performance testing of a prototype control system on patients is not ethically allowed. A model of cerebral blood circulation, the most basic part of the brain's physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, that model did not consider the condition that the total volume of the intracranial cavity is almost constant owing to the rigidity of the cranial bone. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed that takes the constancy of the intracranial cavity's total volume into account. This model can be connected to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the perfused areas of the bilateral anterior, middle, and posterior cerebral arteries, while the others correspond to the cerebral ventricles and the subarachnoid space. The model enables calculation of the migration of tissue fluid from capillaries to gray matter and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulations, and the production of metabolic water. Furthermore, the volume, colloid concentration, and tissue pressure of each compartment can be calculated by solving 40-dimensional non-linear simultaneous differential equations. In this research, the obtained model was analyzed for validation under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. The calculated fluid flows, tissue volumes, colloid concentrations, and tissue pressures all converged to values suitable for the set condition within at most 60 minutes. Because these results do not conflict with prior knowledge, the model can adequately represent the physiological state of the brain, at least under such limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This modification will enable CBF and ICP to be simulated more precisely by calculating the effect of blood pressure changes on extracellular fluid migration and the effect of ICP changes on CBF. Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control
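The compartmental approach described above can be illustrated with a drastically simplified toy model. The real model has fourteen compartments and a 40-dimensional non-linear ODE system; the sketch below shows only two tissue compartments exchanging fluid with the capillaries and with each other, integrated over 60 minutes, and all parameter values are made up for illustration.

```python
# Hypothetical two-compartment sketch of the extracellular-fluid migration model.
import numpy as np
from scipy.integrate import solve_ivp

Pc = 25.0            # capillary pressure [mmHg] (assumed)
Lp = [0.02, 0.015]   # filtration coefficients, capillary -> tissue (assumed)
k12 = 0.01           # inter-compartment conductance (assumed)
k_abs = 0.012        # absorption toward the CSF/venous side (assumed)
E = 0.5              # tissue elastance: pressure rise per unit stored volume (assumed)

def tissue_pressure(v):
    return E * v     # tissue pressure grows with stored extracellular volume

def rhs(t, v):
    p1, p2 = tissue_pressure(v[0]), tissue_pressure(v[1])
    q_in1 = Lp[0] * (Pc - p1)          # filtration from capillaries into compartment 1
    q_in2 = Lp[1] * (Pc - p2)          # filtration into compartment 2
    q_12 = k12 * (p1 - p2)             # flow between the two tissue compartments
    return [q_in1 - q_12 - k_abs * p1,
            q_in2 + q_12 - k_abs * p2]

sol = solve_ivp(rhs, (0, 3600), [10.0, 10.0], max_step=5.0)   # 60 minutes
print("final volumes:", sol.y[:, -1])
print("final pressures:", tissue_pressure(sol.y[:, -1]))
```

With these made-up parameters the two compartments settle to a steady state well within the simulated hour, mirroring the convergence behavior reported for the full model.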
Procedia PDF Downloads 155919 Examining the Design of a Scaled Audio Tactile Model for Enhancing Interpretation of Visually Impaired Visitors in Heritage Sites
Authors: A. Kavita Murugkar, B. Anurag Kashyap
Abstract:
With the Rights of Persons with Disabilities Act (RPWD Act) 2016, the Indian government has made it mandatory for all establishments, including heritage sites, to be accessible for people with disabilities. However, recent access audit surveys done under the Accessible India Campaign by the Ministry of Culture indicate that very few accessibility measures are provided at heritage sites for people with disabilities. Though there are some measures for the mobility impaired, the surveys brought out that there are almost no provisions for people with vision impairment (PwVI) at heritage sites, thus depriving them of reasonable physical and intellectual access that facilitates an enjoyable experience and an enriching interpretation of the heritage site. There is a growing need to develop multisensory interpretative tools that can help PwVI perceive heritage sites in the absence of vision. The purpose of this research was to examine the usability of an audio-tactile model as a haptic and sound-based strategy for augmenting the perception and experience of PwVI at a heritage site. The first phase of the project was a multi-stage phenomenological experimental study with visually impaired users to investigate the design parameters for developing an audio-tactile model for PwVI. The findings from this phase included user preferences related to the physical design of the model, such as its size, scale, materials and details, and the information it should carry, such as braille, audio output and tactile text. In the second phase, a working prototype of an audio-tactile model was designed and developed for a heritage site based on the findings of the first phase. A nationally listed heritage site in the author's city was selected for making the model. The model was finally tested by visually impaired users for refinements and validation. The developed prototype empowers people with vision impairment to navigate independently at heritage sites. Such a model, if installed at every heritage site, can serve as a technological guide for the person with vision impairment, giving information on the architecture, details, planning and scale of the buildings, the entrances, and the location of important features, lifts, staircases and available accessible facilities. The model was constructed using 3D modeling and digital printing technology. Though designed for the Indian context, this assistive technology for the blind can be explored for wider applications across the globe. Such an accessible solution can change the otherwise “incomplete” perception of the disabled visitor, in this case a visually impaired visitor, and augment the quality of their experience at heritage sites. Keywords: accessibility, architectural perception, audio tactile model, inclusive heritage, multi-sensory perception, visual impairment, visitor experience
Procedia PDF Downloads 106918 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals
Authors: Christine F. Boos, Fernando M. Azevedo
Abstract:
Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, that is still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. One of the differences between these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance of using the raw signal varied between 43 and 84% efficiency. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli. Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing
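Several of the input representations compared above can be derived from a single EEG segment with standard signal-processing tools. The sketch below is hypothetical: the synthetic segment, sampling rate and descriptor choices are illustrative only, and a wavelet decomposition could be added with the PyWavelets package.

```python
# Hypothetical sketch: raw signal, FFT spectrum, STFT spectrogram and a few
# simple morphological descriptors computed from one synthetic EEG segment.
import numpy as np
from scipy.signal import stft

fs = 256                                   # sampling rate [Hz] (assumed)
t = np.arange(0, 2.0, 1 / fs)              # 2-second segment
eeg = 50e-6 * np.sin(2 * np.pi * 10 * t) + 10e-6 * np.random.randn(t.size)

# 1) raw signal: used directly as network input
raw_input = eeg

# 2) FFT spectrum (magnitude of the positive frequencies)
spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

# 3) STFT spectrogram (time-frequency image)
f, seg_times, Zxx = stft(eeg, fs=fs, nperseg=128)
spectrogram = np.abs(Zxx)

# 4) simple morphological descriptors of the segment
descriptors = {
    "peak_amplitude": np.max(np.abs(eeg)),
    "peak_to_peak": np.ptp(eeg),
    "line_length": np.sum(np.abs(np.diff(eeg))),
    "dominant_freq": freqs[np.argmax(spectrum[1:]) + 1],  # skip the DC bin
}
print({k: float(v) for k, v in descriptors.items()})
print("spectrogram shape:", spectrogram.shape)
```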
Procedia PDF Downloads 528917 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge for doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia has been time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger data sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 images in total, of which 8,491 are images of abnormal cells and 5,398 are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger data sets. The proposed diagnostic system detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be regarded as auxiliary features and lead to a further improvement in classification accuracy. Features extracted from the lower levels are combined into higher-dimensional feature maps to improve the discriminative capability of intermediate features and to overcome the problem of vanishing or exploding network gradients. By comparing VGG19, ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model's performance, with their pros and cons, will be presented at the conference. Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
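A hybrid VGG19 + ResNet50 architecture of the kind described above can be sketched with Keras. This is a minimal, hypothetical illustration of feature fusion with frozen pre-trained backbones, not the authors' exact network: the abstract fuses features from specific abstraction layers, whereas for brevity the sketch concatenates only the final pooled features, and the input size, head layers and training settings are assumptions.

```python
# Hypothetical sketch of a hybrid transfer-learning model fusing VGG19 and
# ResNet50 features for abnormal-vs-normal cell classification.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, ResNet50

inputs = layers.Input(shape=(224, 224, 3))        # image size assumed

vgg = VGG19(include_top=False, weights="imagenet")
res = ResNet50(include_top=False, weights="imagenet")
vgg.trainable = False                              # transfer learning: freeze backbones
res.trainable = False

vgg_feat = layers.GlobalAveragePooling2D()(vgg(inputs))
res_feat = layers.GlobalAveragePooling2D()(res(inputs))
fused = layers.Concatenate()([vgg_feat, res_feat]) # feature fusion of the two streams

x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x) # abnormal vs normal cell

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

In practice each backbone expects its own input preprocessing (e.g. the respective preprocess_input functions), which is omitted here for brevity.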
Procedia PDF Downloads 186916 Rainfall and Flood Forecast Models for Better Flood Relief Plan of the Mae Sot Municipality
Authors: S. Chuenchooklin, S. Taweepong, U. Pangnakorn
Abstract:
This research was conducted in the Mae Sot Watershed, which is located in the Moei River Basin, part of the Upper Salween River Basin, in Tak Province, Thailand. The Mae Sot Municipality is the largest urbanized area in Tak Province and is situated in the midstream of the Mae Sot Watershed. It usually faces flash-flood problems after heavy rain, and poor flood management has been reported since the economy boomed in recent years. Its catchment can be classified as an ungauged basin, with a lack of rainfall data and no reported stream gaging station. It was struck by its most severe flood event in 2013, which is taken as the worst case studied for all communities in this municipality. Moreover, other problems are also faced in this watershed, such as a shortage of water supply for domestic consumption and agricultural use, as well as deterioration of water quality and landslides. The research aimed to increase capability building and to strengthen the participation of local community leaders and related agencies in better urban water management. It began with data collection and a demonstration of the appropriate application of a short-period rainfall forecasting model, aiming at a better flood relief plan and management through hydrologic modeling and river analysis programs. The authors applied global rainfall data via the Integrated Data Viewer (IDV) program from Unidata, with the aim of forecasting rainfall 7-10 days in advance during the rainy season instead of relying on real-time records. The IDV product, which can present forecast rainfall at a time step of 3-6 hours, was introduced to the communities. The result can be used as input to either the Hydrologic Modeling System (HEC-HMS) or the Soil and Water Assessment Tool (SWAT) for synthesizing flood hydrographs and for flood forecasting as well. The authors applied the River Analysis System (HEC-RAS) to present flood flow behavior in the reach of the Mae Sot stream through the downtown of Mae Sot City, as flood extents and water surface levels at every cross-sectional profile of the stream. Both the HMS and RAS models were tested against the 2013 event with observed rainfall and inflow-outflow data from the Mae Sot Dam. The HMS results fit the observed data at the dam and were applied as the upstream boundary discharge to RAS in order to simulate flood extents; these were checked in the field, and the results were found to be satisfactory. The IDV rainfall forecast data were compared to observed data and found to be fair. Nevertheless, it is an appropriate tool for use in this ungauged catchment together with the flood hydrograph and river analysis models for a future efficient flood relief plan and management. Keywords: global rainfall, flood forecast, hydrologic modeling system, river analysis system
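One small step in the workflow described above, aggregating a 3-hourly forecast rainfall series (such as one exported from the IDV viewer) to daily totals and comparing it with observed daily rainfall before feeding it to a rainfall-runoff model such as HEC-HMS, can be sketched with pandas. All numbers below are invented for illustration; no real Mae Sot data are reproduced.

```python
# Hypothetical sketch: aggregate 3-hourly forecast rainfall to daily totals and
# compare with observed daily rainfall.
import pandas as pd

idx = pd.date_range("2013-08-01", periods=24, freq="3H")
forecast_3h = pd.Series([0, 0, 2, 5, 12, 8, 3, 0] * 3, index=idx)   # mm per 3 h (made up)
forecast_daily = forecast_3h.resample("1D").sum()

observed_daily = pd.Series([28.0, 35.0, 25.0],                      # mm per day (made up)
                           index=pd.date_range("2013-08-01", periods=3, freq="1D"))

comparison = pd.DataFrame({"forecast_mm": forecast_daily, "observed_mm": observed_daily})
comparison["error_mm"] = comparison["forecast_mm"] - comparison["observed_mm"]
print(comparison)
print("mean absolute error [mm]:", comparison["error_mm"].abs().mean())
```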
Procedia PDF Downloads 348915 Knowledge Management and Administrative Effectiveness of Non-teaching Staff in Federal Universities in the South-West, Nigeria
Authors: Nathaniel Oladimeji Dixon, Adekemi Dorcas Fadun
Abstract:
Educational managers have observed a downward trend in the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. This is evident in the low-quality service delivery of administrators and in unaccomplished institutional goals and missions of higher education. Scholars have thus indicated the need for the deployment and adoption of a practice that encourages information collection and sharing among stakeholders with a view to improving service delivery and outcomes. This study examined the extent to which knowledge management correlated with the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. The study adopted the survey design. Three federal universities (the University of Ibadan, the Federal University of Agriculture, Abeokuta, and Obafemi Awolowo University) were purposively selected because administrative ineffectiveness was more pronounced among non-teaching staff in government-owned universities, and these federal universities are long established. Proportional and stratified random sampling was adopted to select 1156 non-teaching staff across the three universities along the three existing layers of non-teaching staff: secretarial (senior=311; junior=224), non-secretarial (senior=147; junior=241) and technicians (senior=130; junior=103). The Knowledge Management Practices Questionnaire, with four sub-scales: knowledge creation (α=0.72), knowledge utilization (α=0.76), knowledge sharing (α=0.79) and knowledge transfer (α=0.83), and the Administrative Effectiveness Questionnaire, with four sub-scales: communication (α=0.84), decision implementation (α=0.75), service delivery (α=0.81) and interpersonal relationship (α=0.78), were used for data collection. Data were analyzed using descriptive statistics, Pearson product-moment correlation and multiple regression at the 0.05 level of significance, while qualitative data were content analyzed. About 59.8% of the non-teaching staff exhibited a low level of knowledge management. The indices of administrative effectiveness of non-teaching staff were rated as follows: service delivery (82.0%), communication (78.0%), decision implementation (71.0%) and interpersonal relationship (68.0%). Knowledge management had significant relationships with the indices of administrative effectiveness: service delivery (r=0.82), communication (r=0.81), decision implementation (r=0.80) and interpersonal relationship (r=0.47). Knowledge management had a significant joint prediction on administrative effectiveness (F(4;1151)=0.79, R=0.86), accounting for 73.0% of its variance. Knowledge sharing (β=0.38), knowledge transfer (β=0.26), knowledge utilization (β=0.22) and knowledge creation (β=0.06) made relatively significant contributions to administrative effectiveness. Lack of team spirit and withdrawal syndrome are the major perceived constraints to knowledge management practices among the non-teaching staff. Knowledge management positively influenced the administrative effectiveness of the non-teaching staff in federal universities in South-west Nigeria. There is a need to ensure that the non-teaching staff imbibe team spirit and embrace teamwork with a view to eliminating their withdrawal syndromes. Besides, knowledge management practices should be deployed into the administrative procedures of the university system. Keywords: knowledge management, administrative effectiveness of non-teaching staff, federal universities in the South-west of Nigeria, knowledge creation, knowledge utilization, effective communication, decision implementation
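The analysis pattern described above, Pearson product-moment correlations between each knowledge-management sub-scale and administrative effectiveness, followed by a multiple regression with the four sub-scales as joint predictors, can be sketched as follows. The data below are simulated stand-ins, not the survey responses, and the coefficient values are arbitrary.

```python
# Hypothetical sketch of Pearson correlations plus a joint multiple regression.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1156  # sample size taken from the abstract; scores below are simulated
subscales = {name: rng.normal(3.0, 0.8, n)
             for name in ["creation", "utilization", "sharing", "transfer"]}
X = np.column_stack(list(subscales.values()))
effectiveness = X @ np.array([0.06, 0.22, 0.38, 0.26]) + rng.normal(0, 0.5, n)

for name, scores in subscales.items():
    r, p = pearsonr(scores, effectiveness)           # product-moment correlation
    print(f"knowledge {name}: r = {r:.2f}, p = {p:.3g}")

reg = LinearRegression().fit(X, effectiveness)        # joint prediction
print("R^2 =", round(reg.score(X, effectiveness), 2))
print("coefficients:", dict(zip(subscales.keys(), reg.coef_.round(2))))
```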
Procedia PDF Downloads 99914 The Impact of Physical Exercise on Gestational Diabetes and Maternal Weight Management: A Meta-Analysis
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Physiological changes during pregnancy, such as alterations in the circulatory, respiratory, and musculoskeletal systems, can negatively impact daily physical activity. This reduced activity is often associated with an increased risk of adverse maternal health outcomes, particularly gestational diabetes mellitus (GDM) and excessive weight gain. This meta-analysis aims to evaluate the effectiveness of structured physical exercise interventions during pregnancy in reducing the risk of GDM and managing maternal weight gain. A comprehensive search was conducted across six major databases: PubMed, Cochrane Library, EMBASE, Web of Science, ScienceDirect, and ClinicalTrials.gov, covering the period from database inception until 2023. Randomized controlled trials (RCTs) that explored the effects of physical exercise programs on pregnant women with low physical activity levels were included. The search was performed using EndNote and results were managed using RevMan (Review Manager) for meta-analysis. RCTs involving healthy pregnant women with low levels of physical activity or sedentary lifestyles were selected. These RCTs must have incorporated structured exercise programs during pregnancy and reported on outcomes related to GDM and maternal weight gain. From an initial pool of 5,112 articles, 65 RCTs (involving 11,400 pregnant women) met the inclusion criteria. Data extraction was performed, followed by a quality assessment of the selected studies using the Cochrane Risk of Bias tool. The meta-analysis was conducted using RevMan software, where pooled relative risks (RR) and weighted mean differences (WMD) were calculated using a random-effects model to address heterogeneity across studies. Sensitivity analyses, subgroup analyses (based on factors such as exercise intensity, duration, and pregnancy stage), and publication bias assessments were also conducted. Structured physical exercise during pregnancy led to a significant reduction in the risk of developing GDM (RR = 0.68; P < 0.001), particularly when the exercise program was performed throughout the pregnancy (RR = 0.62; P = 0.035). In addition, maternal weight gain was significantly reduced (WMD = −1.18 kg; 95% CI −1.54 to −0.85; P < 0.001). There were no significant adverse effects reported for either the mother or the neonate, confirming that exercise interventions are safe for both. This meta-analysis highlights the positive impact of regular moderate physical activity during pregnancy in reducing the risk of GDM and managing maternal weight gain. These findings suggest that physical exercise should be encouraged as a routine part of prenatal care. However, more research is required to refine exercise recommendations and determine the most effective interventions based on individual risk factors and pregnancy stages.Keywords: gestational diabetes, maternal weight management, meta-analysis, randomized controlled trials
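The pooling step described above, a random-effects combination of study-level relative risks, can be illustrated with a compact DerSimonian-Laird sketch on the log scale. The three example studies are invented; the review itself was run in RevMan, and this is only a hypothetical stand-in for that calculation.

```python
# Hypothetical sketch of a DerSimonian-Laird random-effects pooling of RRs.
import numpy as np

# (RR, lower 95% CI, upper 95% CI) for a few made-up trials
studies = [(0.60, 0.40, 0.90), (0.75, 0.55, 1.02), (0.66, 0.45, 0.97)]

log_rr = np.array([np.log(rr) for rr, lo, hi in studies])
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for rr, lo, hi in studies])
w_fixed = 1 / se**2

# DerSimonian-Laird estimate of the between-study variance tau^2
theta_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)
q = np.sum(w_fixed * (log_rr - theta_fixed)**2)
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights and pooled estimate
w_random = 1 / (se**2 + tau2)
pooled = np.sum(w_random * log_rr) / np.sum(w_random)
pooled_se = np.sqrt(1 / np.sum(w_random))
rr = np.exp(pooled)
ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
print(f"pooled RR = {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```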
Procedia PDF Downloads 6913 Collaboration between Dietician and Occupational Therapist, Promotes Independent Functional Eating in Tube Weaning Process of Mechanical Ventilated Patients
Authors: Inbal Zuriely, Yonit Weiss, Hilla Zaharoni, Hadas Lewkowicz, Tatiana Vander, Tarif Bader
Abstract:
Early active movement, along with adjusting optimal nutrition, prevents the aggravation of muscle degeneration and functional decline. Eating is a basic activity of daily life, which reflects the patient's independence. When eating and feeding are experienced successfully, they lead to a sense of pleasure and satisfaction. However, when they are experienced as a difficulty, they might evoke feelings of helplessness and frustration. This stresses the essential process of gradual weaning off the enteral feeding tube. This work describes the collaboration of a dietician, who determines the nutritional needs of patients undergoing enteral tube weaning as part of the rehabilitation process, with the corresponding treatment provided by an occupational therapist. Occupational therapy intervention regarding eating capabilities focuses on improving the required motor and cognitive components, along with environmental adjustments and aids, and on imparting eating strategies and training to patients and their families. The project was conducted in the department for long-term ventilated patients at the Herzfeld Rehabilitation Geriatric Medical Center, on patients undergoing enteral tube weaning with the staff's assistance. Continuous collaboration between the dietician and the occupational therapist was established, starting from the beginning of the feeding-tube weaning process: 1. The dietician updates the occupational therapist about the start of the process and the approved diet. 2. The occupational therapist performs cognitive, motor, and functional assessments and treatments regarding the patient's eating capabilities and recommends the required adjustments for independent eating according to the FIM (Functional Independence Measure) scale. 3. The occupational therapist closely follows up on the patient's degree of independence in eating and provides repeated updates to the dietician. 4. The dietician accordingly guides the ward staff on whether and how to feed the patient or to allow independent eating. The project aimed to promote patients toward independent feeding, which leads to a sense of empowerment, enjoyment of the eating experience, and progress in functional ability, along with the performance of active movements that encourage mobilization. From the beginning of 2022, 26 patients participated in the project. 79% of all patients who started the weaning process from tube feeding achieved some level of independence in feeding (independence levels ranged from supervision (FIM-5) to complete independence (FIM-7)). The integration of occupational therapy and dietary treatment is based on a patient-centered approach that considers the patient's personal needs, preferences, and goals. This interdisciplinary partnership is essential for meeting the complex needs of prolonged mechanically ventilated patients and promotes independent functioning and quality of life. Keywords: dietary, mechanical ventilation, occupational therapy, tube feeding weaning
Procedia PDF Downloads 77