Search results for: computer use
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2361


171 The Role of High Schools in Saudi Arabia in Supporting Young Adults with Intellectual Disabilities with Their Transition to Post-secondary Education

Authors: Sohil I. Alqazlan

Abstract:

Introduction and Objectives: There is limited research focusing on young adults with intellectual disabilities (ID) and their experiences after finishing compulsory education, especially in Middle Eastern/Arab countries. This paper aims to further understand the lives of young adults with ID in Riyadh (the capital city of Saudi Arabia), particularly as they go on to access post-secondary education (PSE). As part of this study, it is important to understand the role of high schools in Riyadh in preparing their students for post-school life. To achieve this, the researcher asked Saudi Arabia's Ministry of Education (MoE) to provide student transition plans (TPs) for post-school opportunities. Unfortunately, however, high schools in Riyadh do not use transition plans for their students. The researcher therefore requested the individual education plans (IEPs) of students with ID in their final year of high school, to identify the type of support the students had regarding the long- and short-term goals that might help them access PSE or the labour market. Methods: The researcher analysed 10 IEPs of students in their final year of high school. To achieve the aim of the study, these IEPs were compared with expectations set out in the MoE's official IEP framework, such as collaboration on the IEP and a focus on adult life. By analysing the students' IEPs in terms of their various goals, this study attempts to highlight skills that might offer students more independence after finishing compulsory education and going on to PSE. Results: Unfortunately, communication between IEP team members proved persistently absent in the sample. This was clear from the fact that none of the team members, apart from the SEN teachers, had signed any of the IEPs. Likewise, none of the daily or weekly goals outlined were sent to parents to review at home. As a result, there were no goals in the IEPs that clearly referred to PSE.
However, some long-term goals were set which might help those with ID become more independent in their adult life. For example, in the IEPs that dealt with computer skills, students had goals related to using Microsoft Word. Finally, only one goal across these IEPs addressed an important independence skill for young adults with ID: "the student will learn how to use public transportation". Conclusions: From analysing the ten IEPs, it was clear that SEN teachers in Riyadh schools were working without any help from other professionals. The students with ID, as well as their families, were not consulted on their views on important goals. Therefore, more work needs to be done with the students regarding their transition to PSE, perhaps by building partnerships between high schools and potential PSE institutions. Finally, more PSE programmes and a higher level of employer awareness could help create a bridge for students transferring from high school to PSE. Schools could also direct their IEP goals towards specific PSE programmes the student might attend, which could increase their chances of success.

Keywords: high school, post-secondary education, PSE, students with intellectual disabilities

Procedia PDF Downloads 169
170 Analyzing Consumer Preferences and Brand Differentiation in the Notebook Market via Social Media Insights and Expert Evaluations

Authors: Mohammadreza Bakhtiari, Mehrdad Maghsoudi, Hamidreza Bakhtiari

Abstract:

This study investigates consumer behavior in the notebook computer market by integrating social media sentiment analysis with expert evaluations. The rapid evolution of the notebook industry has intensified competition among manufacturers, necessitating a deeper understanding of consumer priorities. Social media platforms, particularly Twitter, have become valuable sources for capturing real-time user feedback. In this research, sentiment analysis was performed on Twitter data gathered in the last two years, focusing on seven major notebook brands. The PyABSA framework was utilized to extract sentiments associated with various notebook components, including performance, design, battery life, and price. Expert evaluations, conducted using fuzzy logic, were incorporated to assess the impact of these sentiments on purchase behavior. To provide actionable insights, the TOPSIS method was employed to prioritize notebook features based on a combination of consumer sentiments and expert opinions. The findings consistently highlight price, display quality, and core performance components, such as RAM and CPU, as top priorities across brands. However, lower-priority features, such as webcams and cooling fans, present opportunities for manufacturers to innovate and differentiate their products. The analysis also reveals subtle but significant brand-specific variations, offering targeted insights for marketing and product development strategies. For example, Lenovo's strong performance in display quality points to a competitive edge, while Microsoft's lower ranking in battery life indicates a potential area for R&D investment. This hybrid methodology demonstrates the value of combining big data analytics with expert evaluations, offering a comprehensive framework for understanding consumer behavior in the notebook market. 
The study emphasizes the importance of aligning product development and marketing strategies with evolving consumer preferences, ensuring competitiveness in a dynamic market. It also underscores the potential for innovation in seemingly less important features, providing companies with opportunities to create unique selling points. By bridging the gap between consumer expectations and product offerings, this research equips manufacturers with the tools needed to remain agile in responding to market trends and enhancing customer satisfaction.

Keywords: consumer behavior, customer preferences, laptop industry, notebook computers, social media analytics, TOPSIS

Procedia PDF Downloads 24
169 Pattern of Deliberate Self-Harm Repetition in Rural Sri Lanka

Authors: P. H. G. J. Pushpakumara, Andrew Dawson

Abstract:

Introduction: Deliberate self-harm (DSH) is a major public health problem globally. Sri Lanka's suicide rates have been among the highest national rates in the world since 1950. Previous DSH is the most important independent predictor of repetition. The estimated one-year non-fatal repeat self-harm rate is 16.3%; Asian countries have a considerably lower rate, 10.0%. Objectives: To calculate the incidence of deliberate self-poisoning (DSP) and suicide, and the repetition rate of DSP, in Kurunegala District (KD), and to determine the pattern of repeated DSP in KD. Methods: The study had two components. In the first component, demographic and event-related details of DSP admissions in 46 hospitals and of suicides in 28 police stations of KD were collected for three years from January 2011. Demographic details of the cohort of DSP patients admitted to the above hospitals in 2011 were linked with hospital admissions and police records over the two years following the index admission. Records were first screened for links with high sensitivity by computer and then matched manually, which was much more specific. In the second component, randomly selected DSP patients (n = 438) admitted to the main referral centre, which receives 60% of the district's DSP cases, were interviewed to assess lifetime repetition. Results: There were 16,993 DSP admissions and 1,078 suicides over the three-year period. Suicide incidences in KD were 21.6, 20.7 and 24.3 per 100,000 population in 2011, 2012 and 2013. The average male-to-female ratio for suicide incidence was 5.5. DSP incidences were 205.4, 248.3 and 202.5 per 100,000 population. Male incidence was slightly greater than female incidence (male:female ratio 1.1:1). The highest age-standardized male incidence was in the 20-24 years age group (769.6/100,000) and the highest female incidence in the 15-19 years age group (1,304.0/100,000). The male-to-female incidence ratio increased with age. In total, 318 patients (179 male and 139 female) repeated DSH within two years.
Female repetitive patients were younger than males (p < 0.0001; median age 28 years for males and 19 years for females). Of these, 290 (91.2%) had one repeat attempt, 24 (7.5%) had two, 3 (0.9%) had three and one (0.3%) had four in that period. The one-year repetition rate was 5.6% and the two-year rate was 7.9%. The average intervals between index events and first repeat DSP events were 246.8 (SD: 223.4) and 238.5 (SD: 207.0) days for males and females, respectively. One-fifth of first repeat events occurred within the first two weeks in both sexes. Around 50% of males and females had the second event within 28 weeks, and around 70% within the first year of the index event. The first repeat event was fatal for 28 (8.8%) individuals. Those who died were significantly older (mean 49.7 years, SD: 15.3) than those with a non-fatal outcome (p < 0.0001). 9.5% had a lifetime history of DSH attempts. Conclusions: Both DSP and suicide incidences were very high in KD. However, repetition rates were lower than regional values. Prevention of repetition alone may not produce a significant impact on the prevention of DSH.

Keywords: deliberate self-harm, incidence, repetition, Sri Lanka, suicide

Procedia PDF Downloads 218
168 The MHz Frequency Range EM Induction Device Development and Experimental Study for Low Conductive Objects Detection

Authors: D. Kakulia, L. Shoshiashvili, G. Sapharishvili

Abstract:

The results of this study relate to plastic mine detection research using electromagnetic induction, the development of appropriate equipment, and the evaluation of expected results. Electromagnetic induction sensing is used effectively in the detection of metal objects in the soil and in the discrimination of unexploded ordnance. Metal objects interact well with a low-frequency alternating magnetic field, and their electromagnetic response can be detected at the low-frequency range even when they are placed in the ground. Detection of plastic objects such as plastic mines by electromagnetic induction is associated with difficulties: the interaction of non-conducting or low-conductivity objects with a low-frequency alternating magnetic field is very weak. At high frequencies, where wave processes already take place, the interaction increases, but interactions with other, distant objects also increase; a complex interference picture is formed, and the extraction of useful information becomes difficult. Sensing by electromagnetic induction at the intermediate MHz frequency range is the subject of this research. The concept of detecting plastic mines in this range can be based on studying the electromagnetic response of a non-conductive cavity in a low-conductivity environment, or on detecting the small metal components of plastic mines, taking their construction into account. A detector node based on the Analog Devices AD8302 amplitude and phase detector has been developed for experimental studies. The node has two inputs. One input receives a sinusoidal signal from the generator, to which a transmitting coil is also connected. The receiver coil is attached to the second input. An additional circuit provides an option to amplify the signal from the receiver coil by 20 dB. The node has two outputs.
The voltages obtained at the outputs reflect the ratio of the amplitudes and the phase difference of the input harmonic signals. Experimental measurements were performed with different positions of the transmitter and receiver coils over the frequency range 1-20 MHz. A Tektronix AFG3052C arbitrary/function generator and an eight-channel high-resolution PicoScope 4824 oscilloscope were used in the experiments. Experimental measurements were also performed with a low-conductivity test object. The results of the measurements and a comparative analysis show the capabilities of the simple detector node and the prospects for its further development in this direction. The experimental results are compared with, and analyzed against, the results of appropriate computer modeling based on the method of auxiliary sources (MAS). The experimental measurements are processed in the MATLAB environment. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation (SRNSF) (grant number: NFR 17_523).
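The two quantities the AD8302 reports in hardware, the amplitude ratio in dB and the phase difference, can be reproduced in software from two sampled channels with a single-bin DFT. The sketch below uses hypothetical signal parameters, not the experiment's recorded data.

```python
import cmath
import math

def tone_phasor(samples, freq, rate):
    """Single-bin DFT: complex amplitude of `samples` at `freq` (Hz)."""
    n = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * freq * k / rate)
               for k, s in enumerate(samples)) / n

def gain_phase(tx, rx, freq, rate):
    """Return (gain in dB, phase difference in degrees) between two channels."""
    a = tone_phasor(tx, freq, rate)
    b = tone_phasor(rx, freq, rate)
    gain_db = 20 * math.log10(abs(b) / abs(a))
    phase_deg = math.degrees(cmath.phase(b / a))
    return gain_db, phase_deg

# Hypothetical case: a 5 MHz tone sampled at 100 MS/s; the "received"
# channel is attenuated by half and delayed by 30 degrees.
rate, freq, n = 100e6, 5e6, 200   # 200 samples = 10 full cycles
tx = [math.cos(2 * math.pi * freq * k / rate) for k in range(n)]
rx = [0.5 * math.cos(2 * math.pi * freq * k / rate - math.radians(30))
      for k in range(n)]
gain_db, phase_deg = gain_phase(tx, rx, freq, rate)
```

For this input the computed gain is about -6.02 dB and the phase difference about -30 degrees, matching the imposed attenuation and delay.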

Keywords: EM induction sensing, detector, plastic mines, remote sensing

Procedia PDF Downloads 149
167 Hypoglossal Nerve Stimulation (Baseline vs. 12 months) for Obstructive Sleep Apnea: A Meta-Analysis

Authors: Yasmeen Jamal Alabdallat, Almutazballlah Bassam Qablan, Hamza Al-Salhi, Salameh Alarood, Ibraheem Alkhawaldeh, Obada Abunar, Adam Abdallah

Abstract:

Obstructive sleep apnea (OSA) is a disorder caused by the repeated collapse of the upper airway during sleep. It is the most common cause of sleep-related breathing disorder; OSA can cause loud snoring and daytime fatigue, or more severe problems such as high blood pressure, cardiovascular disease, coronary artery disease, insulin-resistant diabetes, and depression. The hypoglossal nerve stimulator (HNS) is an implantable medical device that reduces the occurrence of obstructive sleep apnea by electrically stimulating the hypoglossal nerve in rhythm with the patient's breathing, causing the tongue to move. This stimulation helps keep the patient's airway clear during sleep. This systematic review and meta-analysis aimed to assess the clinical outcomes of hypoglossal nerve stimulation as a treatment for obstructive sleep apnea. A computer literature search of PubMed, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials was conducted from inception until August 2022. Studies assessing the following clinical outcomes were pooled in the meta-analysis using Review Manager software: Apnea-Hypopnea Index (AHI), Epworth Sleepiness Scale (ESS), Functional Outcomes of Sleep Questionnaire (FOSQ), Oxygen Desaturation Index (ODI), and oxygen saturation (SaO2). We assessed the quality of studies according to the Cochrane risk-of-bias tool for randomized trials (RoB 2), the Risk Of Bias In Non-randomized Studies - of Interventions (ROBINS-I) tool, and a modified version of the NOS for the non-comparative cohort studies. Thirteen studies (six clinical trials and seven prospective cohort studies) with a total of 817 patients were included in the meta-analysis. The results for AHI were reported in 11 studies examining 696 OSA patients. We found a significant improvement in the AHI after 12 months of HNS (MD = 18.2, 95% CI 16.7 to 19.7; I2 = 0%; P < 0.00001).
Further, 12 studies reported ESS results after 12 months of intervention, with a significant improvement in sleepiness among the 757 examined OSA patients (MD = 5.3, 95% CI 4.75 to 5.86; I2 = 65%; P < 0.0001). Moreover, nine studies involving 699 participants reported FOSQ results after 12 months of HNS, with a significant improvement (MD = -3.09, 95% CI -3.41 to -2.77; I2 = 0%; P < 0.00001). In addition, ten studies reported ODI results, with a significant improvement after 12 months of HNS among the 817 examined patients (MD = 14.8, 95% CI 13.25 to 16.32; I2 = 0%; P < 0.00001). Hypoglossal nerve stimulation showed a significant positive impact on obstructive sleep apnea patients after 12 months of therapy in terms of the apnea-hypopnea index, oxygen desaturation index, manifestations of the behavioral morbidity associated with obstructive sleep apnea, and the functional status affected by sleepiness.
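The pooled mean differences quoted above are inverse-variance weighted averages of per-study effects. A minimal fixed-effect sketch (the calculation used when I2 = 0%); the study inputs below are hypothetical, not the data of the trials reviewed.

```python
import math

def pooled_md(effects):
    """Fixed-effect inverse-variance pooling of per-study mean differences.
    `effects` is a list of (mean_difference, standard_error) pairs."""
    weights = [1 / se ** 2 for _, se in effects]          # weight = 1 / variance
    pooled = sum(w * md for (md, _), w in zip(effects, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Hypothetical AHI mean differences (baseline minus 12 months) and SEs.
studies = [(17.5, 1.2), (19.0, 0.9), (18.1, 1.5)]
md, (lo, hi) = pooled_md(studies)
```

Studies with smaller standard errors pull the pooled estimate toward their own effect, which is why large precise trials dominate the summary MD.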

Keywords: apnea, meta-analysis, hypoglossal, stimulation

Procedia PDF Downloads 114
166 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence

Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello

Abstract:

Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed computer language comprehension, allowing computers to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies like chatbots. The technology shows promise in a variety of medical sectors, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, boosting radiology report quality, and so on. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for Sentiment Analysis and Emotion Recognition, such as evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project includes the adoption of a Personalized Intervention strategy, in which chatbots are designed to personalize responses by merging NLP algorithms with specific user profiles, treatment history, and emotional states. 
The synergy between NLP and personalized medicine principles is critical for tailoring chatbot interactions to each user's demands and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The poll, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.
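The sentiment-analysis step described in the methodology can be illustrated with a deliberately tiny lexicon-based scorer. This is a stand-in for a trained sentiment model, not the project's actual pipeline; the lexicon and messages are hypothetical.

```python
def sentiment(text, lexicon=None):
    """Score a message as positive/negative/neutral by summing word
    polarities from a tiny lexicon (a stand-in for a trained model)."""
    lexicon = lexicon or {"good": 1, "great": 1, "calm": 1,
                          "bad": -1, "anxious": -1, "worse": -1}
    score = sum(lexicon.get(w.strip(".,!?").lower(), 0) for w in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A chatbot would branch on this label to select an empathetic response.
label = sentiment("I feel anxious and things are getting worse.")
```

A production system would replace the lexicon with a learned classifier and add emotion categories, but the control flow, score the text, then condition the reply on the label, is the same.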

Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care

Procedia PDF Downloads 76
165 Modelling of Air-Cooled Adiabatic Membrane-Based Absorber for Absorption Chillers Using Low Temperature Solar Heat

Authors: M. Venegas, M. De Vega, N. García-Hernando

Abstract:

Absorption cooling chillers have received growing attention over the past few decades, as they allow the use of low-grade heat to produce a cooling effect. Combining this technology with solar thermal energy in the summer period can reduce the electricity consumption peak due to air-conditioning. One of the main components, the absorber, is designed for simultaneous heat and mass transfer. Usually, shell-and-tube heat exchangers are used, which are large and heavy. Cooling water from a cooling tower is conventionally used to extract the heat released during the absorption and condensation processes. These are clear drawbacks for the generalization of absorption technology, limiting its contribution to the reduction of CO2 emissions, particularly for the H2O-LiBr solution, which can work with low-temperature heat sources such as solar panels. The present work studies a promising new technology consisting of membrane contactors in adiabatic microchannel mass exchangers. The proposed configuration consists of one or several modules (depending on the cooling capacity of the chiller) that contain two vapour channels, separated from the solution by adjacent microporous membranes. The solution is confined in rectangular microchannels. A plastic or synthetic wall separates the solution channels from each other. The solution entering the absorber is previously subcooled using ambient air; in this way, the need for a cooling tower is avoided. A model of the proposed configuration is developed based on mass and energy balances, and correlations were selected to predict the heat and mass transfer coefficients. The concentrations and temperatures along the channels cannot be explicitly determined from the resulting set of equations. For this reason, the equations were implemented in a computer code using the Engineering Equation Solver software, EES™.
With the aim of minimizing the absorber volume to reduce the size of absorption cooling chillers, the ratio between the cooling power of the chiller and the absorber volume (R) is calculated. Its variation is shown along the solution channels, allowing its optimization for selected operating conditions. For the case considered, the solution channel length is recommended to be shorter than 3 cm. The maximum values of R obtained in this work are higher than those found in optimized horizontal falling-film absorbers using the same solution. The results also show the variation of R and the chiller coefficient of performance (COP) for different ambient temperatures and desorption temperatures typically obtained using flat-plate solar collectors. The proposed configuration of an adiabatic membrane-based absorber using ambient air to subcool the solution is a good technology for reducing the size of absorption chillers, allowing the use of low-temperature solar heat and avoiding the need for cooling towers.
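The trade-off behind R can be illustrated with a toy model, not the paper's EES model: if absorbed power saturates along the channel while volume grows linearly with length, R falls monotonically with channel length, which is why short channels are favoured. All parameter values below are hypothetical.

```python
import math

def r_ratio(length_cm, q_max=1.0, l0=1.5, vol_per_cm=2.0):
    """Toy model of R = cooling power / absorber volume:
    absorbed power saturates with channel length (diminishing returns)
    while channel volume grows linearly with length."""
    q = q_max * (1 - math.exp(-length_cm / l0))   # kW absorbed, saturating
    vol = vol_per_cm * length_cm                  # cm^3 of solution channel
    return q / vol

# R profile along candidate channel lengths (cm).
lengths = [0.5, 1, 2, 3, 5, 8]
profile = [(L, r_ratio(L)) for L in lengths]
```

In this toy model R decreases at every step, consistent with the paper's recommendation of channel lengths below 3 cm; the real optimum depends on the coupled heat and mass transfer solved in EES.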

Keywords: adiabatic absorption, air-cooled, membrane, solar thermal energy

Procedia PDF Downloads 285
164 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea

Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park

Abstract:

Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city, owing to the limited number of surface water reservoirs and high seasonal variations in precipitation. Population growth, in addition to the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches, such as multivariate statistics, factor analysis, cluster analysis and kriging, to identify the hydrogeochemical processes, characterize the factors controlling the distribution of groundwater geochemistry, and develop risk maps, exploiting data obtained from the chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed using an atomic absorption spectrometer (AAS) for major and trace elements. Chemical maps of groundwater built in a 2-D Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater contamination. The GIS-based maps showed that the highest contamination occurs in the central and southern areas, with relatively less in the northern and southwestern parts. This could be attributed to the effects of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations above 85 m, the scatter diagram shows that the groundwater of the research area was mainly influenced by saline water and NO3. pH measurements revealed slightly acidic conditions due to atmospheric CO2 dissolved in the soil, while saline water had a major impact on the higher values of TDS and EC.
Based on the cluster analysis results, the groundwater was categorized into three groups: the Ca-HCO3 type of fresh water, the Na-HCO3 type slightly influenced by seawater, and the Ca-Cl and Na-Cl types, which are heavily affected by saline water. Ca-HCO3 was the most predominant water type in the study area. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution and location of groundwater contamination were mapped using kriging. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating groundwater quality. GIS was also a creative tool for visualizing and analyzing the issues affecting water quality in Miryang city.
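The hydrochemical grouping named above (Ca-HCO3, Na-HCO3, Na-Cl/Ca-Cl) ultimately reduces to labelling each sample by its dominant cation and anion. A minimal sketch of that facies classification, with hypothetical sample values rather than the Miryang data:

```python
def water_type(ca, na, hco3, cl):
    """Label a sample by its dominant cation and anion (all in meq/L),
    mirroring the hydrochemical facies named in the abstract."""
    cation = "Ca" if ca >= na else "Na"
    anion = "HCO3" if hco3 >= cl else "Cl"
    return f"{cation}-{anion}"

# Hypothetical samples: (Ca, Na, HCO3, Cl) in meq/L.
samples = {
    "fresh": (3.2, 1.1, 4.0, 0.8),           # expected Ca-HCO3: fresh water
    "sea-influenced": (1.0, 4.5, 3.8, 2.1),  # expected Na-HCO3
    "saline": (0.9, 7.2, 1.5, 8.0),          # expected Na-Cl: saline-affected
}
labels = {name: water_type(*ions) for name, ions in samples.items()}
```

A cluster analysis would derive these groups from the data rather than from fixed thresholds, but the resulting clusters are interpreted in exactly these dominant-ion terms.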

Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques

Procedia PDF Downloads 168
163 Differentiated Surgical Treatment of Patients With Nontraumatic Intracerebral Hematomas

Authors: Mansur Agzamov, Valery Bersnev, Natalia Ivanova, Istam Agzamov, Timur Khayrullaev, Yulduz Agzamova

Abstract:

Objectives. Treatment of hypertensive intracerebral hematoma (ICH) is controversial, and the advantage of one surgical method over another has not been established. Recent reports suggest a favorable effect of minimally invasive surgery. We conducted a small comparative study of different surgical methods. Methods. We analyzed the results of surgical treatment of 176 patients with intracerebral hematomas aged from 41 to 78 years. There were 113 men (64.2%) and 63 women (35.8%). Level of consciousness: conscious, 18; lethargy, 63; stupor, 55; moderate coma, 40. All patients underwent computed tomography (CT) of the brain on admission and during follow-up. The ICH was located in the putamen in 87 cases, the thalamus in 19, the mixed area in 50 and the lobar area in 20. Ninety-seven of the patients had an intraventricular hemorrhage component. The baseline volume of the ICH was measured according to a bedside method of measuring intracerebral hematoma volume on CT. Depending on the intervention, the patients were divided into three groups. Group 1 (90 patients) underwent open craniotomy. Level of consciousness: conscious, 11; lethargy, 33; stupor, 18; moderate coma, 18. The hemorrhage was located in the putamen in 51, the thalamus in 3, the mixed area in 25 and the lobar area in 11. Group 2 (22 patients) underwent a smaller craniotomy with endoscopic-assisted evacuation. Level of consciousness: conscious, 4; lethargy, 9; stupor, 5; moderate coma, 4. The hemorrhage was located in the putamen in 5, the thalamus in 15 and the mixed area in 2. Group 3 (64 patients) underwent minimally invasive removal of the intracerebral hematoma using an original device (patent of the Russian Federation No. 65382): a funnel cannula which, after special marking, is introduced into the hematoma cavity. Level of consciousness: conscious, 3; lethargy, 21; stupor, 22; moderate coma, 18.
The hemorrhage was located in the putamen in 31, the mixed area in 23, the thalamus in 1 and the lobar area in 9. Treatment results were evaluated by the Glasgow Outcome Scale. Results. The study showed that the results of surgical treatment in the three groups depended on the degree of consciousness and on the volume and localization of the hematoma. In group 1, good recovery was observed in 8 cases (8.9%), moderate disability in 22 (24.4%), severe disability in 17 (18.9%) and death in 43 (47.8%). In group 2, good recovery was observed in 7 cases (31.8%), moderate disability in 7 (31.8%), severe disability in 5 (22.7%) and death in 7 (31.8%). In group 3, good recovery was observed in 9 cases (14.1%), moderate disability in 17 (26.5%), severe disability in 19 (29.7%) and death in 19 (29.7%). Conclusions. The use of the cannula made it possible to avoid open craniotomy in the majority of patients with putaminal hematomas. The minimally invasive technique reduced postoperative mortality and improved treatment outcomes in these patients.
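The "bedside method" for measuring hematoma volume on CT is commonly the ABC/2 ellipsoid approximation; the abstract does not name the method, so treat that identification as an assumption. A minimal sketch:

```python
def abc2_volume(a_cm, b_cm, n_slices, slice_thickness_cm):
    """ABC/2 bedside estimate of hematoma volume on CT (ellipsoid
    approximation): A = largest diameter, B = diameter perpendicular
    to A on the same slice, C = number of slices showing the hematoma
    multiplied by the slice thickness."""
    c_cm = n_slices * slice_thickness_cm
    return (a_cm * b_cm * c_cm) / 2  # volume in mL (cm^3)

# Hypothetical example: a 4 cm x 3 cm hematoma visible on 6 slices of 0.5 cm.
vol_ml = abc2_volume(4.0, 3.0, 6, 0.5)
```

The division by 2 comes from approximating the hematoma as an ellipsoid, whose volume (pi/6)ABC is close to ABC/2.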

Keywords: nontraumatic intracerebral hematoma, minimally invasive surgical technique, funnel cannula, differentiated surgical treatment

Procedia PDF Downloads 83
162 A Comparison of Three Different Modalities in Improving Oral Hygiene in Adult Orthodontic Patients: An Open-Label Randomized Controlled Trial

Authors: Umair Shoukat Ali, Rashna Hoshang Sukhia, Mubassar Fida

Abstract:

Introduction: The objective of the study was to compare outcomes in terms of the Bleeding Index (BI), Gingival Index (GI), and Orthodontic Plaque Index (OPI) with video graphics and plaque disclosing tablets (PDT) versus verbal instructions in adult orthodontic patients undergoing fixed appliance treatment (FAT). Materials and Methods: Adult orthodontic patients who fulfilled the inclusion criteria were recruited from outpatient orthodontic clinics and randomly allocated to three groups: video, PDT, and verbal. We included patients of both genders undergoing FAT for six months, with all teeth bonded mesial to the first molars and no co-morbid conditions such as rheumatic fever or diabetes mellitus. Subjects who had gingivitis, as assessed by the BI, GI, and OPI, were recruited. We excluded subjects having > 2 mm of clinical attachment loss, pregnant and lactating females, any history of periodontal therapy within the last six months, and any consumption of antibiotics or anti-inflammatory drugs within the last month. Pre- and post-interventional measurements of the BI, GI, and OPI were taken at two intervals. The primary outcome of this trial was the mean change in the BI, GI, and OPI in the three study groups. A computer-generated randomization list was used to allocate subjects to one of the three study groups using random permuted blocks of 6 and 9. No blinding of the investigator or the participants was performed. Results: A total of 99 subjects were assessed for eligibility, of whom 96 were randomized, as three declined to take part in the trial. This resulted in an equal number of participants (32) analyzed in each of the three groups. The mean change in the oral hygiene index scores was assessed, and we found no statistically significant difference among the three intervention groups.
Pre- and post-interventional results showed statistically significant improvement in the oral hygiene indices for the video and PDT groups. No statistically significant effect of age, gender, or education level on the oral hygiene indices was found. Simple linear regression showed that the video group produced a significantly higher mean OPI change compared with the other groups. No harm was observed during the trial. Conclusions: Visual aids performed better than verbal instruction. Gender, age, and education level had no statistically significant impact on the oral hygiene indices. Longer follow-ups will be required to see the long-term effects of these interventions. Trial Registration: NCT04386421. Funding: Aga Khan University and Hospital (URC 183022).
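The allocation scheme described, random permuted blocks of sizes 6 and 9 across three arms, can be sketched as follows; the arm labels match the trial's groups, while the seed and list usage are illustrative.

```python
import random

def permuted_block_list(n, arms=("video", "PDT", "verbal"),
                        block_sizes=(6, 9), seed=None):
    """Build an allocation list from random permuted blocks. Each block
    size is a multiple of the number of arms, so every completed block
    is perfectly balanced; only a truncated final block can deviate."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n:
        size = rng.choice(block_sizes)
        block = list(arms) * (size // len(arms))  # balanced within the block
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

groups = permuted_block_list(96, seed=1)
```

Mixing two block sizes makes the block boundaries harder to guess, which limits selection bias in an open-label trial where no blinding is performed.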

Keywords: oral hygiene, orthodontic treatment, adults, randomized clinical trial

Procedia PDF Downloads 118
161 Phylogenetic Analysis of Georgian Populations of Potato Cyst Nematodes Globodera Rostochiensis

Authors: Dali Gaganidze, Ekaterine Abashidze

Abstract:

Potato is one of the main agricultural crops in Georgia. Georgia produces early and late potato varieties in almost all regions. In traditional potato growing regions (Svaneti, Samckhet javaheti and Tsalka), the yield is higher than 30-35 t/ha. Among the plant pests that limit potato production and quality, the potato cyst nematodes (PCN) are harmful around the world. Yield losses caused by PCN are estimated up to 30%. Rout surveys conducted in two geographically distinct regions of Georgia producing potatoes - Samtskhe - Javakheti and Svaneti revealed potato cyst nematode Globodera rostochiensi. The aim of the study was the Phylogenetic analyses of Globodera rostochiensi revealed in Georgia by the amplification and sequencing of 28S gen in the D3 region and intergenic ITS1-15.8S-ITS2 region. Identification of all the samples from the two Globodera populations (Samtskhe - Javakheti and Svaneti), i.e., G. rostochiensis (20 isolates) were confirmed by conventional multiplex PCR with ITS 5 universal and PITSp4, PITSr3 specific primers of the cyst nematodes’ (G. pallida, G. rostochiensis). The size of PCR fragment 434 bp confirms that PCN samples from two populations, Samtskhe- Javakheti and Svaneti, belong to G. rostochiensi . The ITS1–5.8S-ITS2 regions were amplified using prime pairs: rDNA1 ( 5’ -TTGATTACGTCCCTGCCCTTT-3’ and rDNA2( 5’ TTTCACTCGCCGTTACTAAGG-3’), D3 expansion regions were amplified using primer pairs: D3A (5’ GACCCCTCTTGAAACACGGA-3’) and D3B (5’-TCGGAAGGAACCAGCTACTA-3’. PCR products of each region were cleaned up and sequenced using an ABI 3500xL Genetic Analyzer. Obtained sequencing results were analyzed by computer program BLASTN (https://blast.ncbi.nlm.nih.gov/Blast.cg). Phylogenetic analyses to resolve the relationships between the isolates were conducted in MEGA7 using both distance- and character-based methods. 
Based on the analysis of the D3 expansion regions, the G. rostochiensis isolates are grouped into three major clades (A, B and C) on the phylogenetic tree. Clade A is divided into three subclades; clade C is divided into two subclades. Isolates from the Samtskhe-Javakheti population fall in subclade 1 of clade A and in subclade 1 of clade C. Isolates from the Svaneti population fall in subclade 2 of clade A and in clade B. In clade C, subclade 2 comprises three isolates from Svaneti and one isolate (GL17) from Samtskhe-Javakheti. Based on the analysis of the ITS1-5.8S-ITS2 regions, the isolates are grouped into two main clades: the first contains 20 Georgian isolates of G. rostochiensis from Svaneti, and the second contains 15 isolates of G. rostochiensis from Samtskhe-Javakheti. Our investigation showed high genetic variation in the D3 and ITS1-5.8S-ITS2 regions of rDNA among the G. rostochiensis isolates from the different geographic origins (Svaneti, Samtskhe-Javakheti) in Georgia. Acknowledgement: The research has been supported by the Shota Rustaveli National Scientific Foundation of Georgia, project # FR17_235.
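The distance-based grouping of isolates described above can be sketched with a minimal UPGMA (average-linkage) clustering over pairwise p-distances. The short sequences below are hypothetical stand-ins, not the actual Georgian isolate data, and the implementation is a simplified illustration of the distance-based methods available in MEGA7.

```python
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences of equal length."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def upgma(seqs):
    """Build a nested-tuple tree by UPGMA from a dict of name -> aligned sequence."""
    names = list(seqs)
    d = {(a, b): p_distance(seqs[a], seqs[b]) for a in names for b in names}
    # each cluster is (tuple of member leaves, subtree so far)
    clusters = [((name,), name) for name in names]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                mi, mj = clusters[i][0], clusters[j][0]
                # average-linkage distance between the two clusters
                avg = sum(d[(a, b)] for a in mi for b in mj) / (len(mi) * len(mj))
                if best is None or avg < best[0]:
                    best = (avg, i, j)
        _, i, j = best
        (mi, ti), (mj, tj) = clusters[i], clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append((mi + mj, (ti, tj)))
    return clusters[0][1]

# toy isolates: two similar "Svaneti-like" sequences and one divergent one
seqs = {"SV1": "ACGTACGTAC", "SV2": "ACGTACGTAT", "SJ1": "TGCAACGTAC"}
print(upgma(seqs))  # ('SJ1', ('SV1', 'SV2')) -- the two similar isolates cluster first
```

Real analyses would use model-corrected distances and bootstrap support rather than raw p-distances, but the clustering logic is the same.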

Keywords: Globodera rostochiensis, PCR, phylogenetic tree, sequencing

Procedia PDF Downloads 195
160 Investigation of a Technology Enabled Model of Home Care: the eShift Model of Palliative Care

Authors: L. Donelle, S. Regan, R. Booth, M. Kerr, J. McMurray, D. Fitzsimmons

Abstract:

Palliative home health care provision within the Canadian context is challenged by: (i) a shortage of registered nurses (RNs) and of RNs with palliative care expertise, (ii) an aging population, (iii) reliance on unpaid family caregivers to sustain home care services, with limited support to conduct this 'care work', (iv) a model of healthcare that assumes client self-care, and (v) competing economic priorities. In response, an interprofessional team of service provider organizations, a software/technology provider, and health care providers developed and implemented a technology-enabled model of home care, the eShift model of palliative home care (eShift). The eShift model combines communication and documentation technology with non-traditional utilization of health human resources to meet patient needs for palliative care in the home. The purpose of this study was to investigate the structure, processes, and outcomes of the eShift model of care. Methodology: Guided by Donabedian's evaluation framework for health care, this qualitative-descriptive study investigated the structure, processes, and outcomes of care in the eShift model of palliative home care. Interviews and focus groups were conducted with health care providers (n=45), decision-makers (n=13), technology providers (n=3) and family caregivers (n=8). Interviews were recorded and transcribed, and a deductive analysis of transcripts was conducted. Study findings: (1) Structure: The eShift model consists of a remotely situated RN using technology to direct care provision virtually to patients in their homes. The remote RN is connected virtually to a health technician (an unregulated care provider) in the patient's home using real-time communication. The health technician uses a smartphone modified with the eShift application and communicates with the RN, who uses a computer with the eShift application/dashboard. Documentation and communication about patient observations and care activities occur in the eShift portal.
The RN is typically accountable for four to six health technicians and patients over an 8-hour shift. The technology provider was identified as an important member of the healthcare team. Other members of the team include family members, care coordinators, nurse practitioners, physicians, and allied health professionals. (2) Processes: Conventionally, patient needs are the focus of care; however, within eShift, both the patient and the family caregiver were the focus of care. Enhanced medication administration was seen as one of the most important processes, and family caregivers reported high satisfaction with the care provided. There was perceived enhanced teamwork among health care providers. (3) Outcomes: Patients were able to die at home. The eShift model enabled consistency and continuity of care, effective management of patient symptoms, and caregiver respite. Conclusion: More than a technology solution, the eShift model of care was viewed as transforming home care practice and as an innovative way to address the shortage of palliative care nurses within home care.

Keywords: palliative home care, health information technology, patient-centred care, interprofessional health care team

Procedia PDF Downloads 417
159 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple Sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of detail it provides, is the gold-standard exam for diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for subsequent analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to error and time-consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation are extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate brain volume in MRI scans of MS patients. We used MRI scans, with 30 slices each, of five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image-processing step was brain extraction by skull stripping of the original image. In the skull stripper for brain MRI images, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (the edge of the brain-skull border, with dedicated expansion, curvature, and advection terms).
In the second step, brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the count to cubic centimeters (cc). We observed an average brain volume of 1469.5 cc. We concluded that the automatic method applied in this work can be used for the brain extraction process and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
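The second step (voxel counting and conversion to cc) can be sketched as below, assuming the skull-stripping step has already produced a binary brain mask. The voxel spacing values are hypothetical; in practice they come from the MRI header.

```python
import numpy as np

def brain_volume_cc(mask, voxel_spacing_mm):
    """Count voxels in a binary brain mask and convert to cubic centimeters."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))  # volume of one voxel in mm^3
    n_voxels = int(np.count_nonzero(mask))        # voxels labeled as brain
    return n_voxels * voxel_mm3 / 1000.0          # 1 cc = 1000 mm^3

# toy example: a 10x10x10 voxel cube marked as "brain" in a 1 mm isotropic grid
mask = np.zeros((30, 30, 30), dtype=bool)
mask[10:20, 10:20, 10:20] = True
print(brain_volume_cc(mask, (1.0, 1.0, 1.0)))  # 1000 voxels -> 1.0 cc
```

With anisotropic spacing, e.g. (1.0, 1.0, 3.0) mm slices, the same count yields 3.0 cc, which is why the header spacing must be used rather than a raw voxel count.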

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 146
158 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials

Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova

Abstract:

Such catecholamines as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters in the sympathetic nervous system. Catecholamines and their metabolites are considered important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, and Alzheimer's and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques can only provide crude spatial information. Besides, the difficulty of catecholamine determination in biological materials is associated with their low normal concentrations (~1 nM) in biomaterials, which may drop by a further order of magnitude in some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes; for this reason, the determination of neurotransmitter metabolism indicators in an organism should be very rapid (15-30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a complete solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the determination of the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic acid-based synthetic receptors, with the sensors operating in a turn-off mode.
In this work we present fluorescent and SERS turn-on sensor systems based on bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide the selective recognition, visualization, and sensing of the above-mentioned catecholamines at nanomolar concentrations in biomaterials (cell cultures, tissues, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) functionalized the surface with 'recognizer' molecules (by impregnation and immobilization of components of the indicator systems: biorecognizing and auxiliary reagents); and (3) performed computer simulation for theoretical prediction and interpretation of some properties of the developed materials and of the analytical signals obtained in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a and 15-29-01330 ofi_m).

Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system

Procedia PDF Downloads 406
157 Beginning Physics Experiments Class Using Multi Media in National University of Laos

Authors: T. Nagata, S. Xaphakdy, P. Souvannavong, P. Chanthamaly, K. Sithavong, C. H. Lee, S. Phommathat, V. Srithilat, P. Sengdala, B. Phetarnousone, B. Siharath, X. Chemcheng, T. Yamaguchi, A. Suenaga, S. Kashima

Abstract:

The National University of Laos (NUOL) requested Japan International Cooperation Agency (JICA) volunteers to begin a physics experiments class using multimedia. However, there were several issues: NUOL had no physics experiment class and no space for physics experiments, the experiment materials had not been used for many years and were scattered in various places, and the unit had no projector or laptop computer. This raised the question: how do the authors begin a physics experiments class using multimedia? To solve this problem, the JICA volunteers took several steps: they took stock of what was available and reviewed the syllabus. They then revised the experiment materials to assess what was usable and developed textbooks for experiments using them; however, the question remained: what about the multimedia component of the course? Next, they reviewed physics teacher Pavy Souvannavong's YouTube channel, where he and his students upload video reports of their physics classes at NUOL using their smartphones. While these use multimedia, almost all the videos recorded were of class presentations. To improve the multimedia style, the authors edited the videos in the style of another YouTube channel, 'Science for Lao,' a science education group made up of Japan Overseas Cooperation Volunteers (JOCV) in Laos. They created the channel to enhance science education in Laos and hold regular monthly meetings in the capital, Vientiane, and at teacher training colleges in the country. They edit the video clips in three parts: a materials and procedures part including pictures, practice footage of the experiment, and a results and conclusion part. The students then perform experiments and prepare for presentations by following the videos. The revised experiment presentation reports use PowerPoint presentations, material pictures, and experiment video clips.
As for providing textbooks and submitting reports, the students use the 'Moodle' e-learning system of the Information Technology Center on the Dongdok campus of NUOL; the Korea International Cooperation Agency (KOICA) donated those facilities. Through this process of revising materials, developing textbooks, having students present PowerPoint slides, and downloading textbooks and uploading reports, the authors began the physics experiments class using multimedia. This is the practice research report on beginning a physics experiments class using multimedia in the physics unit of the Department of Natural Science, Faculty of Education, at the NUOL.

Keywords: NUOL, JICA, KOICA, physics experiment materials, smartphone, Moodle, IT center, Science for Lao

Procedia PDF Downloads 352
156 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B.Fritschi

Abstract:

Soybeans and their derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition is widely dependent on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth due to their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and different seed compositions were measured from the samples as ground-truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector machine (SVM), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithm that best predicted each of the six seed composition traits differed.
GRU worked well for oil (R-squared: 0.53) and protein (R-squared: 0.36), whereas SVM and PLSR showed the best results for sucrose (R-squared: 0.74) and ash (R-squared: 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be more important than texture features. It is suggested to utilize many vegetation indices for machine learning training and to select the best ones using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be given to designing the plot size in such experiments to avoid mixed-pixel issues.

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 137
155 The Administration of Infectious Diseases During the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10

Authors: Sofia Papadimitriou

Abstract:

INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage: it allows maximum therapeutic benefit at minimum cost while ensuring the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent blood-sample datasets (total n=756) associated with human protein-protein interactions, each of which, at the transcription stage, expresses a different host-network response to viral versus bacterial infections. The individual blood samples were subjected to a sequence of computational filters that identify a gene panel corresponding to an autonomous diagnostic score. The dataset and its corresponding gene panel define a new Bangalore-Viral Bacterial (BL-VB) diagnostic. FINDINGS: We used a blood-based biomarker of 10 genes (Panel-VB) with significant prognostic value for distinguishing viral from bacterial infections, with a weighted mean AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent datasets (n=898). We derived a patient-level score (VB10) based on the panel, which has significant diagnostic value with a weighted mean AUROC of 0.94 (95% CI: 0.91-0.98) in 2,996 patient samples from 56 public datasets from 19 different countries. We also studied VB10 in a new South Indian cohort (BL-VB, n=56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in unspecified, culture-negative cases, (b) reflects the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including those caused by non-specific pathogens.
We applied our VB10 score to publicly available COVID-19 data and found that it diagnosed viral infection in the patient samples. RESULTS: The results of the study demonstrate the diagnostic power of the VB10 biomarker as a test for the accurate diagnosis of acute infections and for monitoring recovery. We anticipate that it will support clinical decisions about prescribing antibiotics and can be integrated into antibiotic stewardship policies. CONCLUSIONS: Overall, we developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, assisting physicians in designing the optimal treatment regimen, contributing to the proper use of antibiotics, and reducing the burden of antimicrobial resistance (AMR).
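The AUROC figures quoted above measure how well a panel score ranks viral cases above bacterial ones. A minimal dependency-free sketch of that evaluation, using hypothetical toy scores rather than data from the study:

```python
def auroc(scores_pos, scores_neg):
    """Area under the ROC curve, computed as the probability that a random
    positive (viral) score exceeds a random negative (bacterial) score,
    with ties counting half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

viral = [0.9, 0.8, 0.7, 0.6]       # toy panel scores for confirmed viral cases
bacterial = [0.5, 0.4, 0.6, 0.1]   # toy panel scores for confirmed bacterial cases
print(auroc(viral, bacterial))     # 0.96875 for this toy data
```

An AUROC of 1.0 means the score separates the two classes perfectly; 0.5 means it ranks no better than chance, which frames the reported 0.94-0.97 values.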

Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score

Procedia PDF Downloads 155
154 Exploring the Motivations That Drive Paper Use in Clinical Practice Post-Electronic Health Record Adoption: A Nursing Perspective

Authors: Sinead Impey, Gaye Stephens, Lucy Hederman, Declan O'Sullivan

Abstract:

Continued paper use in the clinical area post-Electronic Health Record (EHR) adoption is regularly linked to hardware and software usability challenges. Although paper is used as a workaround to circumvent challenges, including the limited availability of computers, this perspective does not consider the important role that paper, such as the nurses' handover sheet, plays in practice. The purpose of this study is to confirm the hypothesis that paper use post-EHR adoption continues because paper provides both a cognitive tool (that assists with workflow) and a compensation tool (to circumvent usability challenges). Distinguishing the different motivations for continued paper use could assist future evaluations of electronic record systems. Methods: Qualitative data were collected from three clinical care environments (ICU, general ward and specialist day-care) that had used an electronic record for at least 12 months. Data were collected through semi-structured interviews with 22 nurses. Data were transcribed, themes were extracted using an inductive bottom-up coding approach, and a thematic index was constructed. Findings: All nurses interviewed continued to use paper post-EHR adoption. While the two distinct motivations for paper use post-EHR adoption, paper as a cognitive tool and paper as a compensation tool, were confirmed by the data, a further finding was that there was an overlap between the two uses. That is, paper used as a compensation tool could also be adapted to function as a cognitive aid due to its nature (easy to access and annotate), or vice versa. Rather than presenting paper persistence as having two distinct motivations, it is more useful to describe it as lying on a continuum, with compensation tool and cognitive tool at either pole. Paper as a cognitive tool refers to pages such as the nurses' handover sheet. These did not form part of the patient's record, although information could be transcribed from one to the other.
Findings suggest that although the patient record was digitised, handover sheets did not fall within this remit. These personal pages continued to be useful post-EHR adoption for capturing personal notes or patient information and so continued to be incorporated into the nurses' work. By comparison, paper used as a compensation tool, such as pre-printed care plans stored in the patient's record, appears to have been instigated in reaction to usability challenges. In these instances, it is expected that paper use could reduce or cease when the underlying problem is addressed. There is a danger that, as paper affords nurses a temporary information platform that is mobile and easy to access and annotate, its use could become embedded in clinical practice. Conclusion: Paper presents a utility to nursing, either as a cognitive tool, a compensation tool, or a combination of both. By fully understanding its utility and nuances, organisations can avoid evaluating all incidences of paper use (post-EHR adoption) as arising from usability challenges. Instead, suitable remedies for paper persistence can be targeted at the root cause.

Keywords: cognitive tool, compensation tool, electronic record, handover sheet, nurse, paper persistence

Procedia PDF Downloads 442
153 Legal Provisions on Child Pornography in Bangladesh: A Comparative Study on South Asian Landscape

Authors: Monira Nazmi Jahan, Nusrat Jahan Nishat

Abstract:

'Child pornography' is a sex crime involving illegal images and videos of a minor distributed over the Internet, and it has become a social concern with the increasing commission of this crime. The major objective of this paper is to identify and examine the laws relating to child pornography in Bangladesh and to compare them with those of other South Asian countries. In Bangladesh, child pornography is prosecuted under provisions of the 'Digital Security Act, 2018', where it is defined as involving a child in areas of child sexuality or in sexuality; whoever commits the crime will be punished with 10 years' imprisonment or a fine of 10 lakh taka. In India, the crime is dealt with under 'The Protection of Children from Sexual Offences Act, 2012' (POCSO), where the relevant offences are categorized separately, with punishments ranging from three years' imprisonment to rigorous life imprisonment, together with liability to a fine. In the Maldives, there is the 'Special Provisions Act to Deal with Child Sex Abuse Offenders, Act number 12/2009'. Under this act, a person who intentionally runs child prostitution, involves a child in the creation of pornography, or displays a child's sexual organs in pornography shall be punished with 20 to 25 years of imprisonment. Nepal prosecutes this crime through the 'Act Relating to Children, 2018', and the conviction for using a child in prostitution or sexual services carries imprisonment of up to fifteen years and a fine of up to one hundred fifty thousand rupees. In Pakistan, child pornography is prosecuted under the 'Pakistan Penal Code Child Abuse Amendment Act, 2016', which provides that one is guilty of this offence if he involves a child, with or without consent, in such activities; it provides punishment of two to seven years of imprisonment or a fine of two hundred thousand to seven hundred thousand rupees. In Bhutan, child pornography is not explicitly addressed under the municipal laws.
The Penal Code of Bhutan penalizes all kinds of pornography, including child pornography, under its provisions on computer pornography, and the offence is a misdemeanor. Child pornography is also prohibited under the 'Child Care and Protection Act'. In Sri Lanka, 'The Penal Code' de facto criminalizes child pornography, with a penalty of two to ten years' imprisonment and possible liability to a fine. The most shocking scenario exists in Afghanistan: there is no specific law for the protection of children from pornography, even though this serious crime occurs there. This paper is conducted through a qualitative research method; that is, the primary sources are laws, and the secondary sources are journal articles and newspapers. The conclusion that can be drawn is that, except for Afghanistan, all the South Asian countries have laws for controlling this crime, but these laws still have loopholes. India has the most amended provisions. Nepal has no provision for a fine, and Bhutan does not mention any specific punishment. Bangladesh, compared to these countries, has a good piece of legislation; however, it also has room to broaden its laws for controlling child pornography.

Keywords: child abuse, child pornography, life imprisonment, penal code, South Asian countries

Procedia PDF Downloads 229
152 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The different levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is thus the generation of numerous process alternatives based on phenomena and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is decomposed into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties/drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense and, hence, screening is carried out to discard the combinations that are meaningless.
For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e., it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each described by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce and evaluate process intensification options, from which the optimal process can be determined. It can be applied to any chemical/biochemical process because of its generic nature.
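The combine-and-screen step above can be sketched as an enumeration of binary activation vectors over a phenomena list, with infeasible combinations discarded by a dependency rule. The four phenomena and the single screening rule (phase change requires co-present energy transfer, as in the example above) are illustrative assumptions, not the paper's full phenomena library.

```python
from itertools import product

phenomena = ["mixing", "reaction", "phase_change", "energy_transfer"]

def feasible(active):
    """Screening rule: phase change needs co-present energy transfer,
    and an empty combination performs no function."""
    if "phase_change" in active and "energy_transfer" not in active:
        return False
    return len(active) > 0

options = []
for bits in product([0, 1], repeat=len(phenomena)):  # all 2^4 binary vectors
    active = {p for p, b in zip(phenomena, bits) if b}
    if feasible(active):
        options.append(active)

print(len(options))  # 11: 15 non-empty combinations minus 4 infeasible ones
```

Each surviving set would then be mapped to the function(s) it can execute, and the binary vectors passed on to the model generation algorithm, mirroring the (1, 0) encoding described in the abstract.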

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 232
151 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" contracts because of the underlying technology that allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific situations. The transaction happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many issues are linked with managing supply chains from the planning and coordination stages, which, due to their complexity, can be implemented in a smart contract on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when intermediaries are eliminated from the equation.
The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research is on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover unforeseen supply chain challenges.
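As a minimal illustration of the conditional-execution logic such a contract encodes, the following Python sketch models a hypothetical cold-chain clause. All names, thresholds, and outcomes here are invented for illustration; a real smart contract would run as on-chain code rather than off-chain Python.

```python
from dataclasses import dataclass

@dataclass
class ShipmentLog:
    """Sensor and logistics data a supply chain oracle might feed to a contract."""
    temperatures_c: list        # temperature readings logged during transit
    delivered_on_time: bool     # whether the delivery deadline was met

def settle_shipment(log: ShipmentLog, max_temp_c: float = 8.0) -> str:
    """Encode a cold-chain clause: pay the supplier only if every reading
    stayed within the agreed range and delivery met the deadline."""
    if all(t <= max_temp_c for t in log.temperatures_c) and log.delivered_on_time:
        return "release_payment"   # terms satisfied -> automatic settlement
    return "raise_dispute"         # breach is recorded for arbitration

print(settle_shipment(ShipmentLog([4.2, 5.1, 6.0], True)))   # release_payment
print(settle_shipment(ShipmentLog([4.2, 9.3], True)))        # raise_dispute
```

The point of the sketch is that the contract terms are pure machine-checkable conditions, which is what makes automatic execution possible once the data arrives.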

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 126
150 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy is to understand a person’s cognitive perspective and experience the other person’s emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation that allows individuals to understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; mu rhythm synchrony is therefore expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed, or desynchronize, and thus should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses on empathy and psychopathy scales in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the ‘pain matrix’. 
Physical and social pain activate this network, producing vicarious pain responses in the processing of empathy. Single electrodes were applied at five locations over sensorimotor regions, measuring electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant’s mu rhythms were analyzed via the Fast Fourier Transform (FFT) and multifractal time series analysis.
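The mu-band measure described above can be sketched in a few lines: an FFT of a two-second epoch sampled at 200 Hz, with power summed over 8-13 Hz. The synthetic signals below are invented for demonstration; the study's actual epochs come from the BIOPAC recordings.

```python
import numpy as np

FS = 200  # sampling rate in Hz, matching the study

def mu_band_power(signal: np.ndarray, lo: float = 8.0, hi: float = 13.0) -> float:
    """Power in the mu band (8-13 Hz) from the FFT of one EEG epoch."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[band].sum())

# Synthetic demo: a 10 Hz "mu" oscillation, suppressed in the second epoch
t = np.arange(2 * FS) / FS                       # one two-second epoch
baseline = np.sin(2 * np.pi * 10 * t)            # strong mu rhythm at rest
observing = 0.3 * np.sin(2 * np.pi * 10 * t)     # desynchronized (suppressed) mu
assert mu_band_power(observing) < mu_band_power(baseline)
```

Desynchronization then shows up as a drop in this band power between the baseline and observation conditions.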

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 283
149 Geospatial Technologies in Support of Civic Engagement and Cultural Heritage: Lessons Learned from Three Participatory Planning Workshops for Involving Local Communities in the Development of Sustainable Tourism Practices in Latiano, Brindisi

Authors: Mark Opmeer

Abstract:

The fruitful relationship between cultural heritage and digital technology is evident. Due to the development of user-friendly software, an increasing number of heritage scholars use ICT for their research activities. As a result, the implementation of information technology for heritage planning has become a research objective in itself. During the last decades, we have witnessed a growing debate and literature about the importance of computer technologies for the field of cultural heritage and ecotourism. Indeed, implementing digital technology in support of these domains can be very fruitful for one’s research practice. However, due to the rapid development of new software, scholars may find it challenging to use these innovations in an appropriate way. As such, this contribution seeks to explore the interplay between geospatial technologies (geo-ICT), civic engagement, and cultural heritage and tourism. In this article, we discuss our findings on the use of geo-ICT in support of civic participation, cultural heritage, and sustainable tourism development in the southern Italian district of Brindisi. In the city of Latiano, three workshops were organized that involved local members of the community to identify and discuss interesting points of interest (POIs) which represent the cultural significance and identity of the area. During the first workshop, a so-called mappa della comunità (community map) was created on a touch table with collaborative mapping software, which allowed the participants to highlight potential destinations for tourist purposes. Furthermore, two heritage-based itineraries along a selection of identified POIs were created to make the region attractive for recreational visitors and tourists. These heritage-based itineraries reflect the community’s ideas about the cultural identity of the region. 
Both trails were subsequently implemented in a dedicated mobile application (app) and evaluated using a mixed-method approach with the members of the community during the second workshop. In the final workshop, the findings of the collaboration, the heritage trails, and the app were evaluated with all participants. Based on our conclusions, we argue that geospatial technologies have significant potential for involving local communities in heritage planning and tourism development. The participants of the workshops found it highly engaging to share their ideas and knowledge using the digital map on the touch table. Secondly, the use of a mobile application as an instrument to test the heritage-based itineraries in the field was broadly considered fun and beneficial for enhancing community awareness of and participation in local heritage. The app furthermore stimulated the community’s awareness of the added value of geospatial technologies for sustainable tourism development in the area. We conclude this article with a number of recommendations in order to provide a best practice for organizing heritage workshops with similar objectives.

Keywords: civic engagement, geospatial technologies, tourism development, cultural heritage

Procedia PDF Downloads 287
148 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley

Authors: Sajana Suwal, Ganesh R. Nhemafuki

Abstract:

Evaluation of ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated to local geological and geotechnical conditions. It is evident from past earthquakes (e.g. 1906 San Francisco, USA; 1923 Kanto, Japan) that local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the importance of the influence of local geology on ground response. Observations from damaging earthquakes (e.g. Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L’Aquila, 2009) revealed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to the local amplification of seismic ground motion. Non-uniform damage patterns were also observed in Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from amplification in the soft soil of Kathmandu are presented. A large amount of subsoil data was collected and used for defining an appropriate subsoil model for the Kathmandu valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites of Kathmandu valley. In general, one-dimensional (1D) site-response analysis involves the excitation of a soil profile using the horizontal component and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. 
Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response, the lack of accurate shear wave velocity values, and the nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher for non-linear analysis than for equivalent linear analysis. Hence, the nonlinear behavior of soil highlights the urgent need to study the dynamic characteristics of the soft soil deposit, so that site-specific design spectra can be developed for the Kathmandu valley for building structures resilient to future damaging earthquakes.
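As a toy illustration of the amplification-factor metric reported above, the sketch below takes the simplest scalar definition: the ratio of peak surface acceleration to peak input (bedrock) acceleration. The records and the factor of four are fabricated for demonstration; the actual analyses use full acceleration time histories propagated through DEEPSOIL.

```python
import numpy as np

def amplification_factor(surface_motion, input_motion) -> float:
    """Ratio of peak absolute surface acceleration to peak absolute input
    acceleration, a simple scalar amplification measure for a soil column."""
    surface_motion = np.asarray(surface_motion)
    input_motion = np.asarray(input_motion)
    return float(np.max(np.abs(surface_motion)) / np.max(np.abs(input_motion)))

# Hypothetical records: a soft-soil column that quadruples the peak motion
bedrock = np.array([0.02, -0.05, 0.04, -0.03])   # input accelerations (g)
surface = 4.0 * bedrock                          # amplified surface response
print(amplification_factor(surface, bedrock))    # 4.0
```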

Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response

Procedia PDF Downloads 291
147 The Impact of Online Learning on Visual Learners

Authors: Ani Demetrashvili

Abstract:

As online learning continues to reshape the landscape of education, questions arise regarding its efficacy for diverse learning styles, particularly for visual learners. This abstract delves into the impact of online learning on visual learners, exploring how digital mediums influence their educational experience and how educational platforms can be optimized to cater to their needs. Visual learners comprise a significant portion of the student population, characterized by their preference for visual aids such as diagrams, charts, and videos to comprehend and retain information. Traditional classroom settings often struggle to accommodate these learners adequately, relying heavily on auditory and written forms of instruction. The advent of online learning presents both opportunities and challenges in addressing the needs of visual learners. Online learning platforms offer a plethora of multimedia resources, including interactive simulations, virtual labs, and video lectures, which align closely with the preferences of visual learners. These platforms have the potential to enhance engagement, comprehension, and retention by presenting information in visually stimulating formats. However, the effectiveness of online learning for visual learners hinges on various factors, including the design of learning materials, user interface, and instructional strategies. Research into the impact of online learning on visual learners encompasses a multidisciplinary approach, drawing from fields such as cognitive psychology, education, and human-computer interaction. Studies employ qualitative and quantitative methods to assess visual learners' preferences, cognitive processes, and learning outcomes in online environments. Surveys, interviews, and observational studies provide insights into learners' preferences for specific types of multimedia content and interactive features. 
Cognitive tasks, such as memory recall and concept mapping, shed light on the cognitive mechanisms underlying learning in digital settings. Eye-tracking studies offer valuable data on attentional patterns and information processing during online learning activities. The findings from research on the impact of online learning on visual learners have significant implications for educational practice and technology design. Educators and instructional designers can use insights from this research to create more engaging and effective learning materials for visual learners. Strategies such as incorporating visual cues, providing interactive activities, and scaffolding complex concepts with multimedia resources can enhance the learning experience for visual learners in online environments. Moreover, online learning platforms can leverage the findings to improve their user interface and features, making them more accessible and inclusive for visual learners. Customization options, adaptive learning algorithms, and personalized recommendations based on learners' preferences and performance can enhance the usability and effectiveness of online platforms for visual learners.

Keywords: online learning, visual learners, digital education, technology in learning

Procedia PDF Downloads 38
146 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores

Abstract:

This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone interfaces that go beyond direct manual control. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. Raw voltages from each sensor were routed to an Arduino Uno, and a data processing algorithm was developed to interpret the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute, per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, resulting in controlling the motion of the drone with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures to control the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. MATLAB was used to collect, process, and analyze the signals from the sensors, and its machine learning tools were used to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then trained using an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. 
Once an output probability was greater than or equal to 80% for a specific target class, the drone would perform the expected motion. Each movement command was then sent from the computer to the drone over a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone has responded successfully in real time to predefined command inputs, with the machine learning algorithm operating through the MyoWare sensor interface. The full paper will describe in detail the database of hand gestures, the details of the ANN architecture, and the resulting confusion matrices.
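The pipeline above (window features of mean, RMS, and standard deviation, then an 80% probability gate on the classifier output) can be sketched as follows. The class names and example probabilities are taken from the abstract's description; the function names are invented, and the study itself implemented this in MATLAB rather than Python.

```python
import numpy as np

def window_features(emg: np.ndarray) -> np.ndarray:
    """Mean, root mean square, and standard deviation of one two-second
    EMG window: the three-element input vector fed to the ANN."""
    return np.array([emg.mean(), np.sqrt(np.mean(emg ** 2)), emg.std()])

def command_from_probs(probs, classes, threshold: float = 0.8):
    """Issue a drone command only when one class reaches 80% probability;
    otherwise return None and hold the current state."""
    probs = np.asarray(probs)
    i = int(np.argmax(probs))
    return classes[i] if probs[i] >= threshold else None

classes = ["left", "right", "up", "land"]
print(command_from_probs([0.05, 0.85, 0.05, 0.05], classes))  # right
print(command_from_probs([0.40, 0.30, 0.20, 0.10], classes))  # None
```

Gating on the maximum probability rather than acting on every classification keeps ambiguous EMG windows from triggering spurious flight commands.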

Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino

Procedia PDF Downloads 174
145 Bank Failures: A Question of Leadership

Authors: Alison L. Miles

Abstract:

Almost all major financial institutions in the world suffered losses due to the financial crisis of 2007, but the extent varied widely. The causes of the crash of 2007 are well documented and predominantly focus on the role and complexity of the financial markets. The dominant theme of the literature suggests the causes of the crash were a combination of globalization, financial sector innovation, moribund regulation, and short-termism. While these arguments are undoubtedly true, they do not tell the whole story. A key weakness in the current analysis is the lack of consideration of those leading the banks before and during times of crisis. The purpose of this study is to examine the possible link between the leadership styles and characteristics of the CEO, CFO, and chairman and the financial institutions that failed or needed recapitalization. As such, it contributes to the literature and debate on international financial crises and systemic risk, and also to the debate on risk management and regulatory reform in the banking sector. In order to first test the proposition (P1) that there are prevalent leadership characteristics or traits in financial institutions, an initial study was conducted using a sample of the top 65 largest global banks and financial institutions according to the Banker Top 1000 Banks 2014. Secondary data from publicly available and official documents, annual reports, treasury and parliamentary reports, together with a selection of press articles and analyst meeting transcripts, was collected longitudinally for the period 1998 to 2013. A computer-aided keyword search was used to identify the leadership styles and characteristics of the chairman, CEO, and CFO. The results were then compared with the leadership models to form a picture of leadership in the sector during the research period. 
As this resulted in separate results that needed combining, the SPSS data editor was used to aggregate the results across the studies using the variables ‘leadership style’ and ‘company financial performance’, together with the size of the company. In order to test the proposition (P2) that there was a prevalent leadership style in the banks that failed, and the proposition (P3) that this was different from those that did not fail, further quantitative analysis was carried out on the leadership styles of the chair, CEO, and CFO of banks that needed recapitalization, were taken over, or required government bail-out assistance during 2007-8. These included: Lehman Bros, Merrill Lynch, Royal Bank of Scotland, HBOS, Barclays, Northern Rock, Fortis, and Allied Irish. The findings show that although regulatory reform has been a key mechanism for controlling behavior in the banking sector, consideration of the leadership characteristics of those running the board is a key factor. They add weight to the argument that if each crisis is met with the same pattern of popular fury with the financier, increased regulation, followed by a return to business as usual, the cycle of failure will always be repeated; viewed through a different lens, new paradigms can be formed and future crashes avoided.

Keywords: banking, financial crisis, leadership, risk

Procedia PDF Downloads 318
144 Modelling of Groundwater Resources for Al-Najaf City, Iraq

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

Groundwater is a vital water resource in many areas of the world, particularly in the Middle East region, where water resources are becoming scarce and depleted. Sustainable management and planning of groundwater resources have become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict the flow pattern and assess water resource security, as well as the groundwater quality affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred on Al-Najaf City, which is located in the mid-west of Iraq, adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computer model also incorporates the distribution of 69 wells in the area, with steady, pre-defined hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from the analysis of field data in the study area for the period 1980-2014. The hydraulic conductivity measured at the well locations is interpolated for model use. The model is calibrated against the measured hydraulic heads at 50 of the 69 wells in the domain, and the results show good agreement. The standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE, and correlation coefficient are 0.297 m, 2.087 m, 6.899%, and 0.971, respectively. Sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year. 
Hydraulic conductivity is found to be another parameter that can affect the results significantly; therefore, it requires high-quality field data. The results show a general flow pattern from the west to the east of the study area, which agrees well with the observations and the gradient of the ground surface. It is found that, with the current operational pumping rates of the wells in the area, a dry area develops in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance with the current operational pumping quantity shows that the Euphrates River supplies approximately 11,759 m3/day to the groundwater, instead of gaining approximately 11,178 m3/day from the groundwater, as it would if there were no pumping from the wells. It is expected that the results obtained from the study can provide important information for the sustainable and effective planning and management of the regional groundwater resources of Al-Najaf City.
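The calibration statistics used in the study (RMSE, normalized RMSE, and correlation coefficient between observed and simulated heads) can be sketched as below. The head values here are hypothetical stand-ins; the study's reported values (RMSE 2.087 m, normalized RMSE 6.899%, r = 0.971) come from its 50 calibration wells.

```python
import numpy as np

def calibration_stats(observed, simulated):
    """RMSE, normalized RMSE (% of the observed range), and correlation
    coefficient between observed and simulated hydraulic heads."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residuals = simulated - observed
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    nrmse = 100.0 * rmse / float(observed.max() - observed.min())
    r = float(np.corrcoef(observed, simulated)[0, 1])
    return rmse, nrmse, r

obs = [21.3, 22.8, 24.1, 25.0, 26.7]   # hypothetical measured heads (m)
sim = [21.0, 23.1, 24.0, 25.4, 26.5]   # hypothetical simulated heads (m)
rmse, nrmse, r = calibration_stats(obs, sim)
```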

Keywords: Al-Najaf city, conceptual modelling, groundwater, unconfined aquifer, visual MODFLOW

Procedia PDF Downloads 213
143 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. A total of 49 slices are obtained from a 150 mm cube, with a slice interval of approximately 3 mm. Because CT scanning is non-destructive, the same cube can later be tested in compression in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed in Python. The digital colour image consists of red, green, and blue (RGB) pixels. The RGB image is converted to a black-and-white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given. 
Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on load carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may be further compared with test results of concrete cores and can be used as an important tool for the strength evaluation of concrete.
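The pixel classification step described above (mapping 0-255 grayscale values onto a 0-9 relative-strength scale: 0 for voids, 1-3 for the ITZ boundary, 4-6 for mortar, 7-9 for aggregates) can be sketched with NumPy. The grayscale thresholds below are illustrative placeholders, not the authors' calibrated values, and the ITZ, mortar, and aggregate scale values are taken as representative mid-band numbers.

```python
import numpy as np

def classify_pixels(gray: np.ndarray) -> np.ndarray:
    """Map 0-255 grayscale CT pixels onto a 0-9 relative-strength scale:
    0 = voids, 1-3 = ITZ boundary, 4-6 = mortar, 7-9 = aggregate.
    Threshold values here are illustrative assumptions."""
    scale = np.zeros_like(gray, dtype=np.uint8)      # default: void (0)
    scale[(gray > 30) & (gray <= 80)] = 2            # ITZ band
    scale[(gray > 80) & (gray <= 170)] = 5           # mortar
    scale[gray > 170] = 8                            # aggregate
    return scale

# A tiny 2x2 "slice" with one pixel of each constituent
slice_px = np.array([[10, 60], [120, 200]], dtype=np.uint8)
print(classify_pixels(slice_px))  # [[0 2] [5 8]]
```

Each classified matrix then maps directly onto the finite element mesh, with the 0-9 value selecting the material properties of that element.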

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 239
142 Ways to Prevent Increased Wear of the Drive Box Parts and the Central Drive of the Civil Aviation Turbo Engine Based on Tribology

Authors: Liudmila Shabalinskaya, Victor Golovanov, Liudmila Milinis, Sergey Loponos, Alexander Maslov, D. O. Frolov

Abstract:

The work is devoted to the rapid laboratory diagnosis of the condition of aircraft friction units, based on the application of a nondestructive testing method that analyzes the parameters of wear particles, or tribodiagnostics. The most important task of tribodiagnostics is to develop recommendations for the selection of more advanced designs, materials, and lubricants based on data on wear processes, in order to increase the service life and ensure the operational safety of machines and mechanisms. The objects of tribodiagnostics in this work are the tooth gears of the central drive and the gearboxes of the PS-90A civil aviation gas turbine engine, in which rolling friction and sliding friction with slip occur. The main criterion for evaluating the technical state of lubricated friction units of a gas turbine engine is the intensity and rate of wear of the friction surfaces of the friction unit parts. When the engine is running, oil samples are taken and the state of the friction surfaces is evaluated according to the parameters of the wear particles contained in the oil sample, which carry important and detailed information about the wear processes in the engine transmission units. The parameters carrying this information include the concentration of wear particles and metals in the oil, the dispersion composition, the shape, the size ratio and the number of particles, the state of their surfaces, and the presence in the oil of various mechanical impurities of non-metallic origin. 
Such morphological analysis of wear particles has been introduced into the procedure for monitoring the status and diagnostics of various aircraft engines, including the gas turbine engine, since the type of wear characteristic of the central drive and the drive box is surface fatigue wear; the onset of its development, accompanied by the formation of microcracks, leads to the formation of spherical particles up to 10 μm in size and, subsequently, flocculent particles measuring 20-200 μm. Tribodiagnostics using the morphological analysis of wear particles includes the following techniques: ferrography, filtering, and computer-aided classification and counting of wear particles. Based on the analysis of several series of oil samples taken from the engine drive box over its operating time, a study of wear kinetics was carried out. Based on the results of the study, and comparing the tribodiagnostic criteria, wear state ratings, and the statistics of the morphological analysis results, norms for the normal operating regime were developed. The study allowed the development of wear state levels for the friction surfaces of the gearing and a 10-point rating system for estimating the likelihood of an increased wear mode and, accordingly, preventing engine failures in flight.
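The size and shape signatures the abstract associates with surface fatigue wear (spherical particles up to 10 μm at onset, flocculent particles of 20-200 μm as the wear develops) suggest a simple triage rule, sketched below. The function, its category labels, and the boundary handling are invented for illustration; the study's actual classification uses ferrography and computer-aided particle counting.

```python
def classify_wear_particle(diameter_um: float, spherical: bool) -> str:
    """Rough triage of a single wear particle by the size/shape signatures
    of surface fatigue wear (illustrative thresholds from the abstract)."""
    if spherical and diameter_um <= 10:
        return "fatigue onset (microcracking)"    # spherical, up to 10 um
    if not spherical and 20 <= diameter_um <= 200:
        return "developed fatigue (flocculent)"   # flocculent, 20-200 um
    return "normal rubbing wear"                  # no fatigue signature

print(classify_wear_particle(8, True))     # fatigue onset (microcracking)
print(classify_wear_particle(150, False))  # developed fatigue (flocculent)
```

Counting particles in each category across successive oil samples is what lets the kinetics of wear be tracked over engine operating time.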

Keywords: aviation, box of drives, morphological analysis, tribodiagnostics, tribology, ferrography, filtering, wear particle

Procedia PDF Downloads 259